The UK’s Office of Communications (Ofcom) has issued an open letter to online service providers operating in the United Kingdom, urging them to prevent their platforms from being used to incite hatred and violence.

The letter follows a series of recent violent incidents in the UK.

The letter states that existing regulations, which predate the Online Safety Act, already require UK-based video-sharing platforms to protect users from content likely to incite violence or hatred. The regulator expects these platforms to improve their systems and processes to prevent and respond to the spread of harmful video content linked to recent events.

Additionally, the Online Safety Act introduces new responsibilities for online services, including the assessment and mitigation of risks related to illegal content, such as content involving hatred, disorder, incitement to violence, and certain types of disinformation.

Once Ofcom publishes its final codes of practice and guidance later this year, regulated services will have three months to assess and address the risk of illegal content. They must take steps to prevent such content from appearing and remove it swiftly when detected.

The regulatory body notes that major websites and applications will need to apply their terms of service more rigorously; these terms often prohibit hate speech, incitement to violence, and harmful disinformation.

Ofcom has consulted on risk assessment guidance and codes of practice regarding illegal harms. These outline expected measures in governance, content moderation, user reporting, and account removal.

Draft guidance has also been issued to help companies judge the legality of content or activities. All proposals consider the importance of protecting freedom of expression.

Ofcom’s Group Director for Online Safety, Gill Whitehead, said the regulator expects to continue engaging with companies to address specific challenges. She praised the proactive steps some services have taken in response to recent violence in the UK.

Whitehead emphasised that with new safety duties under the Online Safety Act forthcoming, there is no reason to delay making sites and applications safer for users.

The UK Online Safety Act, which became law in October 2023, establishes a comprehensive framework aimed at improving online safety. This legislation targets a range of online harms by imposing stringent requirements on digital and social media companies to ensure user protection against illegal and harmful content.

Key elements include requirements for platforms to operate systems that manage and swiftly remove harmful and illegal content, including cyberflashing, threatening communications, and other illegal activity.

Ofcom is the regulator for the Act and now possesses enhanced powers to enforce its framework. Responsibilities under the Act extend to various types of online platforms and include proactive measures to protect children from harmful content, as well as offering adults more control over the content they encounter online.

Companies are also obligated to implement age verification technologies and uphold consistent enforcement of their own terms of service.

In practice, the Online Safety Act aims not only to reduce the risks posed by illegal content but also to ensure that content harmful to children is rigorously controlled.

Platforms must categorise content appropriately and provide tools that allow users to filter out unwanted interactions and content, enhancing the overall safety and integrity of online environments.