On Monday, a search on Instagram, the photo-sharing site owned by Facebook, produced a torrent of anti-Semitic images and videos uploaded in the wake of Saturday’s shooting at a Pittsburgh synagogue.
A search for the word “Jews” displayed 11,696 posts with the hashtag “#jewsdid911,” claiming that Jews had orchestrated the Sept. 11 terror attacks. Other hashtags on Instagram referenced Nazi ideology, including the number 88, shorthand for the Nazi salute “Heil Hitler” (“H” being the eighth letter of the alphabet).
The Instagram posts demonstrated a stark reality. Over the last 10 years, Silicon Valley’s social media companies have expanded their reach and influence to the furthest corners of the world. But it has become glaringly apparent that the companies never quite understood the negative consequences of that influence, or what to do about them — and that they cannot put the genie back in the bottle.
“Social media is emboldening people to cross the line and push the envelope on what they are willing to say to provoke and to incite,” said Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism. “The problem is clearly expanding.”
The repercussions of the social media companies’ inability to handle disinformation and hate speech were abundantly evident in recent days. Cesar Sayoc Jr., who was charged last week with sending explosive devices to prominent Democrats, appeared to have been radicalized online by partisan posts on Twitter and Facebook. Robert D. Bowers, who is accused of killing 11 people at the Tree of Life synagogue in Pittsburgh on Saturday, posted about his hatred of Jews on Gab, a two-year-old social network.
A memorial outside the Tree of Life synagogue, where Robert D. Bowers is accused of killing 11 people. Credit: Michael Henninger for The New York Times
The effects of social media were also evident globally. Close watchers of Brazil’s election on Sunday ascribed much of the appeal of the victor, the far-right populist Jair Bolsonaro, to what unfolded on social media there. Interests tied to Mr. Bolsonaro’s campaign appeared to have flooded WhatsApp, the messaging application owned by Facebook, with political content that gave wrong information on voting locations and times, provided false instructions on how to vote for particular candidates and outright disparaged one of Mr. Bolsonaro’s main opponents, Fernando Haddad.
Elsewhere, high-ranking members of the Myanmar military have used doctored messages on Facebook to foment anxiety and fear against the Muslim Rohingya minority group. And in India, fake stories on WhatsApp about child kidnappings led mobs to murder more than a dozen people this year.
“Social media companies have created, allowed and enabled extremists to move their message from the margins to the mainstream,” said Jonathan A. Greenblatt, chief executive of the Anti-Defamation League, a nongovernmental organization that combats hate speech. “In the past, they couldn’t find audiences for their poison. Now, with a click or a post or a tweet, they can spread their ideas with a velocity we’ve never seen before.”
Facebook said it was investigating the anti-Semitic hashtags on Instagram after The New York Times flagged them. Sarah Pollack, a Facebook spokeswoman, said in a statement that Instagram was seeing new posts related to the shooting on Saturday and that it was “actively reviewing hashtags and content related to these events and removing content that violates our policies.”
YouTube said it had strict policies prohibiting content that promoted hatred or incited violence, and added that it took down videos that violated those rules.
Jair Bolsonaro, the far-right populist, was elected president of Brazil on Sunday. Close watchers of the election ascribed much of his appeal to social media. Credit: Ricardo Moraes/Reuters
Social media companies have said that identifying and removing hate speech and disinformation — or even defining what constitutes such content — is difficult. Facebook said this year that only 38 percent of hate speech on its site was flagged by its internal systems. In contrast, its systems pinpointed and took down 96 percent of what it defined as adult nudity, and 99.5 percent of terrorist content.
YouTube said users reported nearly 10 million videos from April to June for potentially violating its community guidelines. Just under one million of those videos were found to have broken the rules and were removed, according to the company’s data. YouTube’s automated detection tools also took down an additional 6.8 million videos in that period.
A study by M.I.T. researchers published in March found that falsehoods on Twitter were 70 percent more likely to be retweeted than accurate news.
Facebook, Twitter and YouTube have all announced plans to invest heavily in artificial intelligence and other technology aimed at finding and removing unwanted content from their sites. Facebook has also said it would hire 10,000 additional people to work on safety and security issues, and YouTube has said that it planned to have 10,000 people dedicated to reviewing videos. Jack Dorsey, Twitter’s chief executive, recently said that although the company’s longtime principle was free expression, it was discussing how “safety should come first.”
But even as the companies throw money and resources at the problems, some of their employees said on Monday that they were rethinking whether the social media services could have a positive effect.
At Twitter, for example, employees are increasingly concerned that the company is floundering in its treatment of toxic language and hate speech, said four current and former employees who asked for anonymity because they had signed nondisclosure agreements.
The employees said their uncertainty surfaced in August, when Apple and other companies erased most of the posts and videos on their services from Alex Jones, the conspiracy theorist and founder of the right-wing site Infowars — but Twitter did not. (Twitter followed suit only weeks later.) Saturday’s shooting at the Pittsburgh synagogue led employees to urge Twitter’s leadership to firm up a policy on how to deal with hate speech and white supremacist content, two of the people said.
Twitter did not address questions about its employees’ concerns on Monday, but said it needed to be “thoughtful and considered” in its policies.
“Progress in this space is tough but we’ve never been as committed and as focused in our efforts,” Twitter said. “Serving public conversation and trying to make it healthier is our singular mission here.”
Instagram, which was created as a site for people to share curated photos of their food, adorable pets and cute children, has largely avoided scrutiny over disinformation and hate content — especially when compared with its parent, Facebook. But social media researchers said that the site had over the last year become more of a hotbed for hateful posts and videos meant to provoke discord.
That was evident after the Pittsburgh synagogue shooting, with the mushrooming of new anti-Semitic content on the site. On Sunday, one new video added to Instagram claimed that the state of Israel was created by the Rothschilds, a wealthy Jewish family. Underneath the video, the hashtags read #conspiracy and #jewworldorder.
By late Monday, it had been viewed more than 1,640 times and shared to other social media sites, including Twitter and Facebook.