Facing the uphill task of tackling election-related interference as India gets ready for polls next year, Facebook on Saturday said it is setting up a task force comprising "hundreds of people" in the country to prevent bad actors from abusing its platform.
“With the 2019 elections coming, we are pulling together a group of specialists to work together with political parties,” Richard Allan, Facebook’s Vice President for Global Policy Solutions, told the media here.
Facebook has also set a goal of bringing a transparency feature for political ads — now available in the US and Brazil — to India by March next year, Allan said.
With the new ad architecture in place, people would be able to see who paid for a particular political ad.
In May this year, Facebook announced that all election-related ads on Facebook and Instagram in the US must be clearly labelled — including a “Paid for by” disclosure from the advertiser at the top of the ad.
When users click on the label, they would be taken to an archive with more information such as the campaign budget associated with an individual ad and how many people saw it – including their age, location and gender, Facebook had said.
The social media giant later introduced the transparency feature in Brazil.
The introduction of the same feature in India would help users identify political propaganda more easily.
“The task force for India will have security specialists and content specialists, among others, who will try to understand all the possible forms of election-related abuse in India,” added Allan during a workshop on Facebook’s “community standards” in the capital.
Allan explained that while disinformation linked to real-world violence is handled by the team mandated to enforce Facebook's community standards, other forms of disinformation are handled by a separate team of fact-checkers.
“The challenge for the task force in India would be to distinguish between real political news and political propaganda,” Allan noted, adding that the team would be very much based in the country and would consist of both existing human resources working on these issues within the company and new recruits.
Facebook came under intense scrutiny from US policymakers after allegations surfaced that Russia-linked accounts had used the social networking platform to spread divisive messages during the 2016 presidential election.
Since then, it has stepped up efforts to check abuse of its platform by bringing more transparency to the conduct of its business, including its advertising policies.
Echoing Facebook CEO Mark Zuckerberg’s earlier comments on elections across the world, Allan said the social media platform “wants to help countries around the world, including India, to conduct free and fair elections”.
In April, Zuckerberg said Facebook will ensure that its platform is not misused to influence elections in India and elsewhere.
“Our goals are to understand Facebook’s impact on upcoming elections — like Brazil, India, Mexico and the US midterms — and to inform our future product and policy decisions,” he told US lawmakers during a hearing.
Facebook uses a combination of technology, including Machine Learning (ML) and Artificial Intelligence (AI), and reports from its community to identify violating content on the platform.
These reports are reviewed by members of its "Community Operations" team, which covers content in over 50 languages, including 12 Indian languages.
"By the end of 2018, we will have 20,000 people working on these issues, double the number we had at the same time last year," Allan said.
“We are also working to enhance the work we do to proactively detect violating content,” Allan said.
Speaking at the 16th Hindustan Times Leadership Summit here later in the day, Allan said Facebook was cooperating fully with India's investigating agency, the Central Bureau of Investigation, with regard to the Cambridge Analytica data leak scandal.