December 5, 2017

Google raises online army to fight YouTube extremism

Pressure is mounting on YouTube to prevent users breaching its content rules, with violent extremist content on the site causing concern.

By Tom Ball

In a bid to erase extremist content from YouTube, Google is planning to put 10,000 staff to the task of monitoring and removing offending material.

The new cohort of content monitors is set to see action early next year, with YouTube's CEO saying she is aware that some are using the site to mislead, manipulate, harass and harm.

With the UK government calling for a clampdown on the availability of violent extremist content, pressure has been mounting on YouTube, and by extension Google, to take action. YouTube has been working towards this since the summer, with new technology also being drafted in.

YouTube has faced further pressure recently over persistent abuses of its content rules, fuelling arguments that the site cannot control the influx of material.

In a blog post on the announcement, Susan Wojcicki, CEO, YouTube, said: “In the last year, we took actions to protect our community against violent or extremist content, testing new systems to combat emerging and evolving threats. We tightened our policies on what content can appear on our platform, or earn revenue for creators. We increased our enforcement teams. And we invested in powerful new machine learning technology to scale the efforts of our human moderators to take down videos and comments that violate our policies.”

Machine learning technologies are increasingly being used to police online content, but the YouTube CEO says the process still relies on human involvement.

“Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content. Since June, our trust and safety teams have manually reviewed nearly 2 million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future,” said Wojcicki.
