Criticised in media reports for its failure to tackle hate speech, Facebook now claims that the prevalence of hate speech on its platform has dropped by almost 50 percent over the last three quarters.
The claim came in response to a report in The Wall Street Journal (WSJ) on Sunday, which said that the Artificial Intelligence (AI) systems Facebook uses to moderate content are not consistently successful at removing objectionable material.
In response, Guy Rosen, Vice-President of Integrity at Facebook, said the company's technology is having a big impact on reducing how much hate speech people see on the platform.
“According to our latest Community Standards Enforcement report, its prevalence is about 0.05 percent of content viewed or about five views per every 10,000, down by almost 50 percent in the last three quarters,” he added.
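In plain terms, prevalence is the share of all content views that contain hate speech, so 0.05 percent works out to roughly five views in every 10,000. A minimal sketch of that arithmetic, using made-up view counts rather than any figures published by Facebook:

```python
# Illustrative only: the view counts below are invented to show how a
# prevalence figure of about 0.05 percent maps to "views per 10,000".
total_views = 1_000_000        # hypothetical number of content views sampled
hate_speech_views = 500        # hypothetical number of those views that were hate speech

prevalence = hate_speech_views / total_views
print(f"Prevalence: {prevalence:.2%}")                  # 0.05%
print(f"Per 10,000 views: {prevalence * 10_000:.0f}")   # about 5
```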
“Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress. This is not true,” Rosen said.
The WSJ report claimed that internal documents show that, two years ago, Facebook reduced the time its human reviewers spent on hate speech complaints and made other adjustments that reduced the number of complaints.
“That, in turn, helped create the appearance that Facebook’s AI had been more successful in enforcing the company’s rules than it actually was,” the report said.
Rosen said in a blog post that focusing only on content removals is the wrong way to look at how Facebook fights hate speech.
“We need to be confident that something is hate speech before we remove it. If something might be hate speech but we’re not confident enough that it meets the bar for removal, our technology may reduce the content’s distribution or won’t recommend groups, pages, or people that regularly post content that is likely to violate our policies,” he noted.
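What Rosen describes amounts to a confidence-thresholded decision: remove content only when the system is sufficiently certain it violates policy, and fall back to softer measures such as reduced distribution when it is not. The sketch below is a generic illustration of that pattern, not Facebook's actual system; the thresholds and names are assumptions made for the example.

```python
# Hypothetical illustration of a confidence-thresholded moderation policy.
# The thresholds and action names are invented and do not reflect
# Facebook's real systems or parameters.
REMOVE_THRESHOLD = 0.95    # assumed high bar of certainty required for removal
DEMOTE_THRESHOLD = 0.60    # assumed lower bar for reducing distribution

def moderate(hate_speech_score: float) -> str:
    """Map a classifier's confidence score (0.0 to 1.0) to a moderation action."""
    if hate_speech_score >= REMOVE_THRESHOLD:
        return "remove"                 # confident enough to take the content down
    if hate_speech_score >= DEMOTE_THRESHOLD:
        return "reduce_distribution"    # likely violating, but below the removal bar
    return "no_action"

print(moderate(0.98))  # remove
print(moderate(0.75))  # reduce_distribution
print(moderate(0.20))  # no_action
```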
Facebook said that when it began reporting its hate speech metrics, only 23.6 percent of the content it removed was detected proactively by its systems; the majority of what it removed was found by people.
“Now that number is more than 97 percent. But our proactive rate doesn’t tell us what we are missing and doesn’t account for the sum of our efforts, including what we do to reduce the distribution of problematic content,” the Facebook executive said.
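The proactive rate Rosen cites is the share of removed content that Facebook's systems flagged before any user reported it. A brief sketch of that ratio, again using invented counts purely for illustration:

```python
# Illustrative only: the removal counts are invented to show how a
# proactive rate is computed as a share of total removals.
proactive_removals = 970       # hypothetical: items flagged by automated systems first
user_reported_removals = 30    # hypothetical: items removed after user reports

total_removals = proactive_removals + user_reported_removals
proactive_rate = proactive_removals / total_removals
print(f"Proactive rate: {proactive_rate:.1%}")  # 97.0%
```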