Facebook has announced a new policy that allows it to take down networks of accounts engaging in “coordinated social harm”. The company said the change could help the platform fight harmful behaviour it would not otherwise be able to fully address under its existing rules, reports Engadget.
Unlike “coordinated inauthentic behaviour”, Facebook’s existing policy for dealing with harm that comes from networks of fake accounts, the coordinated social harm policy gives the company a framework to address harmful actions from legitimate accounts, the report said.
During a call with reporters, the company’s head of security policy Nathaniel Gleicher said the policy is necessary because bad actors are increasingly trying to “blur the lines” between authentic and inauthentic behaviour, the report said.
“We are seeing groups that pose a risk of significant social harm, that also engage in violations on our platform, but don’t necessarily rise to the level for either of those where we’d enforce against for inauthenticity under CIB (coordinated inauthentic behaviour) or under our dangerous organisations policy,” Gleicher said.
“So this protocol is designed to capture these groups that are sort of in-between spaces.”
Gleicher said the new protocol could help Facebook address networks of accounts spreading anti-vaccine misinformation or groups trying to organise political violence.
Facebook said it could take “a range of actions” in enforcing its new rules around coordinated social harm. That could include banning accounts — as it did with the “Querdenken” movement — or throttling their reach to prevent content from spreading as widely.
The question of how to handle groups that break Facebook’s rules in a coordinated way has been a difficult one for the company, which until now has focused primarily on taking down networks that rely on fake accounts to manipulate its platform. The issue came up earlier this year, following the January 6 insurrection, when Facebook investigated the “Stop the Steal” movement.