
Facebook Says It's Doing A Better Job Of Catching Hate Speech Before Users See It

Facebook is under pressure to do more to remove hate speech, misinformation and other harmful content from its platform.
Ben Margot / AP

Updated at 5:19 p.m. ET

Facebook users saw hate speech about once in every 1,000 pieces of content they viewed on the social network between July and September, the company said on Thursday.

This is the first time Facebook has publicly estimated the prevalence of hate speech on its platform, giving a sense of scale of the problem. It published the new metric as part of its quarterly report on how much content it removed from Facebook and Instagram for breaking rules ranging from violence to child exploitation to suicide and self-harm.

"The real question is, 'What do we not catch? What do we miss?'" said Guy Rosen, Facebook vice president of integrity, on a call with reporters. "It's why we consider [prevalence] to be the most important measure."

He compared prevalence to an "air quality test," where Facebook samples content to determine "the concentration of pollutants" — what percentage of posts break its rules. He said Facebook was focusing on how much hate speech was viewed, not how much was posted, because some posts reach few people while others go viral.

Facebook plans to submit its metrics to independent auditors next year, Rosen added.

From July to September, Facebook said it removed 22.1 million pieces of hate speech content from its namesake platform and 6.5 million pieces from Instagram. It said 95% of that content was proactively identified and removed before users reported it.

When Facebook first reported how much hate speech it removed, in the fourth quarter of 2017, just 24% was proactively found. The company said that improvement was due to advances in artificial intelligence technology and greater detection across different languages.

But while Facebook said the percentages show it's improving how it handles hate speech, civil rights groups say it must do even more.

A spokesman for the Anti-Defamation League said the report "raises some alarming concerns" about the sheer amount of hate speech on the platform, as evidenced by the millions of posts it had to remove.

He called on Facebook to disclose how many pieces of content users report as hate speech, whether or not they are removed. "That statistic is important because there are many forms of hate speech on Facebook that are not being removed, even after it's flagged," he said.

Vanita Gupta, CEO of the Leadership Conference on Civil and Human Rights, said it was "encouraging" that Facebook was taking steps to address hateful content but said the company must do more. "We've seen far too often that virtual content can result in real-world violence," she said.

Critics of Facebook have ramped up pressure on the company to more aggressively tackle hate speech, misinformation and other harmful content. This summer, advertisers staged a boycott, pulling their spending from Facebook. In July, independent auditors commissioned by Facebook slammed the company for "vexing and heartbreaking decisions" that, they said, prioritized free speech over other values.

Facebook has been leaning more heavily on automated systems to flag violating content during the COVID-19 pandemic, because it has sent home most of its 15,000 content moderators, the majority of whom are contractors, not Facebook employees. The company says this change has hurt its ability to effectively remove harmful posts.

"Our human review capabilities are still less than what they were pre-pandemic and we're continuing to rely heavily on AI," Rosen said on Thursday.

On Wednesday, more than 200 Facebook workers sent an open letter to executives accusing the company of putting content moderators' lives at risk by pressuring them to return to the office. They claim the company's attempt to shift more moderation work to technology has "failed."

Rosen said Facebook still relies on human reviewers for the most serious content, such as suicide and child nudity. Some sensitive content cannot be reviewed by workers at home, so the company has transferred some of that work to regular Facebook employees. He acknowledged Facebook is now bringing some workers back into offices, with "very strict safety standards."

"We've made progress as a combination, both of increased availability of the workforce and increased progress on our AI systems," Rosen said. Automated systems can help reviewers prioritize content, and they can apply the decisions made by reviewers to find and, if warranted, remove similar posts, he said.

On Thursday, Facebook also shared new data on its efforts to curb misinformation about the 2020 election and COVID-19.

It removed more than 265,000 pieces of content from Facebook and Instagram in the US for violating rules against voter interference between March 1 and Election Day.

It put warning labels on 180 million pieces of content that were debunked by independent fact-checkers. Rosen said that when those labels are applied, 95% of people do not click through them to see the debunked content.

Between March and October, Facebook removed more than 12 million pieces of content across the platforms that contained misinformation about the coronavirus that could lead to imminent harm, such as hoax cures. It applied warnings on 167 million pieces of content debunked by fact checkers.

Editor's note: Facebook is among NPR's financial supporters.

Copyright 2020 NPR. To see more, visit https://www.npr.org.

Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.