Facebook and YouTube are among the few websites that have quietly begun to use automation to remove extremist content from their sites, Reuters reported on Saturday.
The move is a major step forward for internet companies that are eager to eradicate violent propaganda from their sites and are under pressure to do so from governments around the world as attacks by extremists proliferate, from Syria to Belgium and the United States.
The technology, originally developed to identify and remove copyright-protected material, looks for unique hashes, or digital fingerprints, to remove Islamic State videos and similar material, two sources familiar with the process told the news agency. Such technology can prevent reposts of content already deemed unacceptable, but it cannot identify new extremist content.
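The idea behind fingerprint matching can be illustrated with a minimal sketch. Production systems reportedly use robust or perceptual hashes that survive re-encoding and cropping; the simplified example below instead uses exact cryptographic hashing (SHA-256), and all names and the sample bytes are hypothetical, not drawn from any company's actual system.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a digital fingerprint (here, a SHA-256 hash) of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Blocklist of fingerprints for content already reviewed and removed.
blocked_hashes: set[str] = set()

def record_removal(data: bytes) -> None:
    """Remember the fingerprint of content that has been taken down."""
    blocked_hashes.add(fingerprint(data))

def is_known_repost(data: bytes) -> bool:
    """Check whether an upload matches previously removed content."""
    return fingerprint(data) in blocked_hashes

# Example: an upload identical to removed content is caught;
# new, never-seen content is not.
removed_video = b"previously-removed-video-bytes"
record_removal(removed_video)

print(is_known_repost(removed_video))        # exact repost: True
print(is_known_repost(b"brand-new-upload"))  # unseen content: False
```

This is why the approach catches only reposts: a byte-for-byte (or perceptually similar) copy matches a stored fingerprint, while genuinely new material produces a fingerprint that appears in no blocklist.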
Facebook and Google have not yet commented on the reports. In December, President Barack Obama asked the web's social-media giants to help prevent terrorist attacks by monitoring hateful content and removing extremist speech and terrorist activity that appears on their networks.
Facebook, Twitter, Microsoft and YouTube agreed in May to a new European Union code of conduct against illegal hate speech and terrorist propaganda posted online. Under the new rules, they have committed to reviewing the majority of notifications about social media posts that may contain hate speech within 24 hours of receipt, and to removing the posts if necessary.