The UK needs stronger artificial intelligence regulation if it wants to set the global agenda. That is the view of a new report from the Ada Lovelace Institute, the independent AI research organisation, which argues that the plans set out in the government’s AI white paper fall short. The report calls for an AI ombudsman, stricter legislation to protect users from harm and mandatory reporting requirements for foundation model developers.


Demand for AI services and technology has grown from a trickle to a flood in recent months, spurred by the success of OpenAI’s chatbot ChatGPT. As tech vendors have rushed to infuse their products with AI, the heightened interest in the technology has prompted a parallel rush to regulate it, with approaches varying from strict to almost non-existent.

The EU’s new AI Act places transparency and safety requirements on developers, as well as limits on high-risk use cases. The UK’s approach was outlined in the AI white paper, published earlier this year, which sets out a “light-touch, pro-innovation” approach to regulation and leaves the burden of oversight on existing regulators.

The Ada Lovelace Institute report, entitled Regulating AI in the UK, comes on the same day that UK Foreign Secretary James Cleverly is set to call for a coordinated international response to AI. Chairing a session of the UN Security Council, he is expected to say that no country will be untouched by AI, “so we must involve and engage the widest coalition of international actors from all sectors”.

The UK’s approach has been widely criticised, with some experts labelling it “no regulation at all”. The Labour Party has called for a national body to oversee AI regulation as well as stricter reporting requirements, while the SNP in Scotland wants a national discussion. Prime Minister Rishi Sunak, meanwhile, is pushing for global cooperation and standards, and has announced a global AI safety summit to be held in the UK later this year.

The problem with this approach, warns the Ada Lovelace Institute report, is that without effective and robust national standards and regulations it will be impossible to get international agreement on how AI should be regulated. It says: “The UK must strengthen its AI regulation proposals to improve legal protections, empower regulators and address urgent risks of cutting-edge models.”

Coverage, capability and urgency

The report analyses different facets of the UK’s AI regulation plans, including the white paper, the Data Protection and Digital Information Bill, the summit proposal and the £100m Foundation Model Taskforce headed by entrepreneur Ian Hogarth.

It outlines a trio of tests that can be used to monitor the UK’s approach to AI regulation and provides recommendations to ensure they can be met. These recommendations were drawn up after workshops with experts from industry, civil society and academia, and put through independent legal analysis before publication.

The first test is coverage: the extent to which legislation covers the development, use and risks of AI. The report’s authors found that current regulations and legislation leave many areas without regulatory coverage, including recruitment, policing and government itself.

The government’s approach, as set out in the white paper, is to rely on existing legislation rather than create bespoke AI laws. However, legal analysis by data rights law firm AWO found that the protections offered by UK GDPR, the Equality Act and other laws often fail to protect people from harm or to provide a viable route to redress.

The institute recommends the creation of a new AI ombudsman to directly support people and organisations affected by AI. The report’s authors also recommend reviewing existing protections, legislating to introduce stronger ones, and rethinking the data protection bill in light of its implications for AI regulation.

The second test concerns resourcing and capability. The institute expressed concern over whether existing regulators, and others involved in setting standards and guidelines, have the resources needed to do the job they have been tasked with. This includes ensuring regulators have the necessary powers to take action where needed.

AI ombudsman and statutory principles

The authors recommend creating a new statutory duty requiring regulators to consider the AI principles. This could be accompanied by a common set of powers for all regulators and a dramatic increase in funding for AI safety research. The report also proposes funding to allow civil society, not just industry and government, to be involved in AI regulation.

Echoing comments from Labour on the need to speed up AI regulation, the institute says the current government timeline of a year to evaluate and iterate is too slow. There are significant harms associated with AI use today, and these are being felt disproportionately by the most marginalised in society. “The pace at which foundation models are being utilised risks scaling and exacerbating these harms,” the report warns.

The report argues for robust governance of foundation models, underpinned by legislation and a review of how existing laws apply to those models. The authors also call for mandatory reporting requirements for foundation model developers such as OpenAI, Anthropic and Google DeepMind, for pilot projects within government to build expertise and monitoring capacity, and for assurances that a diverse range of voices, not just industry and government, will be represented at the AI Safety Summit.

Michael Birtwistle, associate director at the Ada Lovelace Institute, said the government recognises that the UK has a unique opportunity to be a world leader in AI regulation, but that its credibility rests on its ability to deliver a world-leading regulatory regime at home before pushing for global agreement. “Efforts towards international coordination are very welcome, but they are not sufficient,” he said. “The government must strengthen its domestic proposals for regulation if it wants to be taken seriously on AI and achieve its global ambitions.”

Alex Lawrence-Archer, a solicitor at AWO, the law firm used by the institute to review existing legislation, said that for ordinary people to be effectively protected “we need regulation, strong regulators, rights to redress and realistic avenues for those rights to be enforced”. He added: “Our legal analysis shows that there are significant gaps which mean that AI harms may not be sufficiently prevented or addressed, even as the technology that threatens to cause them becomes increasingly ubiquitous.”

Read more: UK AI taskforce gets £100m to take on ChatGPT