Robots may tackle extremist videos better

Google has said that its artificial intelligence (AI)-driven robots are more accurate than humans in identifying and blocking extremist videos online.

According to a report in The Telegraph on Tuesday, Google claimed that its robots flag three in four offensive videos from YouTube even before they are reported by users. 


"With over 400 hours of content uploaded to YouTube every minute, finding and taking action on violent extremist content poses a significant challenge," the report quoted Google as saying. 


"But over the past month, our initial use of machine learning has more than doubled both the number of videos we've removed for violent extremism, as well as the rate at which we've taken this kind of content down," the company added. 

The statement from Google comes after major companies, including Marks and Spencer and McDonald's, pulled their advertising from YouTube earlier in 2017 when their ads appeared alongside extremist videos.

Google's AI-driven bots, which are meant to identify offensive content, flagged more than 75 per cent of the videos removed from YouTube in July. 

The company describes the bots' performance as "dramatic" and better than that of humans.

"While these tools aren't perfect, and aren't right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed," said Google. 

Facebook is also using AI to remove terrorist propaganda and related content, using image recognition software and video technology.

Social media and other Internet companies have been under the scanner for doing little to curb the spread of violence and terror-related content on their platforms. 

To placate critics who accuse Google of letting YouTube become a breeding ground for terrorist groups, the company is reportedly hiring more people to review content and enforce its policies on the video-sharing site.
