YouTube using AI to spot extremist videos even before they are flagged

CIOL Writers

Google-owned YouTube has revealed that more than 80 percent of the extremist videos removed from the site are now flagged by its new artificial intelligence tools before any user reports them. The company began applying machine learning algorithms to uploaded videos in June so that it could quickly spot hateful content and escalate it to human reviewers.

In a blog post, the company wrote that it has "always used a mix of human flagging and human review together with technology" to help it spot violent content. The program introduced in June added machine learning to flag violent extremist content, which is then reviewed by humans. YouTube said that over the past month, 83 percent of the violent extremist videos it removed were spotted without a human flag, up eight percentage points from August.

The algorithms work by crawling YouTube for various signals, including tags, titles, images and color schemes, pulling in content they deem potentially problematic. Flagged videos are then escalated to human reviewers, who weigh nuance and apply judgment to determine whether the content intends to glorify violence or is merely documenting it.
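To make that workflow concrete, here is a minimal sketch of a signal-based pre-screening pipeline of the kind the article describes. Everything in it, including the term list, signal weights, threshold and function names, is a hypothetical illustration of the general technique, not YouTube's actual system.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical keyword list; a real system would rely on learned models
# over many signals, not a hand-written set of terms.
SUSPICIOUS_TERMS = {"execution", "beheading", "martyrdom"}

@dataclass
class VideoMetadata:
    video_id: str
    title: str
    tags: List[str] = field(default_factory=list)
    thumbnail_score: float = 0.0  # e.g. output of an image classifier, 0.0-1.0

def extremism_signal_score(video: VideoMetadata) -> float:
    """Combine weak metadata signals (title, tags, thumbnail) into one score."""
    score = 0.0
    title_words = set(video.title.lower().split())
    if title_words & SUSPICIOUS_TERMS:
        score += 0.4
    if {t.lower() for t in video.tags} & SUSPICIOUS_TERMS:
        score += 0.3
    score += 0.3 * video.thumbnail_score
    return score

def screen_for_review(videos: List[VideoMetadata],
                      threshold: float = 0.5) -> List[VideoMetadata]:
    """Return the videos that should be escalated to human reviewers."""
    return [v for v in videos if extremism_signal_score(v) >= threshold]

if __name__ == "__main__":
    uploads = [
        VideoMetadata("abc123", "News report on an execution", ["war", "news"], 0.5),
        VideoMetadata("def456", "Cute cat compilation", ["cats"], 0.05),
    ]
    for video in screen_for_review(uploads):
        # The machine only escalates; the glorify-versus-document
        # judgment call stays with the human reviewer.
        print(f"Escalate {video.video_id} for human review")
```

Note that the final decision is never automated in this scheme: the score only decides which videos reach a human queue, mirroring the division of labour the company describes.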

The extremist-video identification technology is part of a broader push by major US technology companies to address widespread criticism from US and European governments that they should be doing more to tackle violent propaganda. Last month, Twitter took down nearly 300,000 accounts promoting terrorism on its platform.

The blog post notes, “We will continue to heavily invest to fight the spread of this content, provide updates to governments, and collaborate with other companies through the Global Internet Forum to Counter Terrorism.”
