In a bid to erase extremist content from YouTube, Google is planning to put 10,000 staff to the task of monitoring and removing offending material.
The new cohort of content monitors is set to see action early next year, with the CEO of YouTube saying she is aware of how some are using the site to mislead, manipulate, harass and harm.
With the UK government calling for a clampdown on the availability of violent extremist content, pressure has been mounting on YouTube and, by extension, Google to take action. YouTube has been engaged in work to achieve this since the summer, with new technology also being drafted in.
YouTube has also faced pressure recently over persistent abuses of its content rules, fuelling arguments that the site cannot control the influx of material.
In a blog post on the announcement, Susan Wojcicki, CEO of YouTube, said: “In the last year, we took actions to protect our community against violent or extremist content, testing new systems to combat emerging and evolving threats. We tightened our policies on what content can appear on our platform, or earn revenue for creators. We increased our enforcement teams. And we invested in powerful new machine learning technology to scale the efforts of our human moderators to take down videos and comments that violate our policies.”
Machine learning technologies are playing a growing role in policing online content, but the YouTube CEO says the process still relies on human involvement.
“Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content. Since June, our trust and safety teams have manually reviewed nearly 2 million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future,” said Wojcicki.