Regulators need more money to regulate AI in the UK effectively, says Commons committee

In its latest report, the House of Commons Science, Innovation and Technology Committee endorsed the current government’s sectoral strategy for governing AI in the UK – but said that whoever wins the general election must be ready to legislate.

By Greg Noone

Funding for regulators overseeing AI in the UK is “clearly insufficient,” a parliamentary committee has warned. In its latest report, the House of Commons Science, Innovation and Technology Committee said that the £10m allocated to regulators by the current Conservative government falls far short of the scale of the challenge the technology poses to society and the wider economy. Even so, the committee endorsed the broad thrust of the government’s approach to governing AI in the UK, which has been to let individual regulators arbitrate challenges the technology raises within their own sectors.

“The next Government must announce further financial support, agreed in consultation with regulators, that is commensurate to the scale of the task,” said the report. “It should also consider the benefits of a one-off or recurring industry levy that would allow regulators to supplement or replace support from the Exchequer for their AI-related activities.”

The House of Commons’ Science, Innovation and Technology Committee has published a critical new report on government policy toward AI in the UK. (Photo by Shutterstock)

Legislation for AI in the UK possible, says committee

In a wide-ranging report, the committee also said that the UK government should be prepared to introduce legislation if gaps emerge in the capacity of regulators and industry to overcome challenges themselves – gaps that could be partly identified, it added, in quarterly reviews of the efficacy of its overall strategy laid before parliament. Moreover, it said, the government should “provide further consideration of the criteria on which a decision to legislate will be triggered, including which model performance indicators, training requirements such as compute power or other factors will be considered.”

The committee welcomed other aspects of the government’s strategy for regulating AI in the UK, including its establishment of the Incubator for Artificial Intelligence (i.AI) and the AI Safety Institute. “It is a credit to the commitment of those involved that the AI Safety Institute has been swiftly established, with an impressive and growing team of researchers and technical experts recruited from leading developers and academic institutions,” it said. 

In a note of disquiet, however, the committee said that it was concerned by suggestions that the institute had not been given access to beta versions of AI models to perform pre-deployment safety testing. “The Bletchley Park Summit resulted in an agreement that developers would submit new models to the AI Safety Institute,” said its chair, Rt. Hon. Greg Clark. “We are calling for the next government to publicly name any AI developers who do not submit their models for pre-deployment safety testing.”

Deepfake concern for the general election

The committee also repeated many concerns about how to regulate AI in the UK that industry experts have articulated over the past decade: namely, that model developers and deployers should be rigorously held to account over how they have mitigated bias and inaccuracy in the outputs of their AI products. “Nobody who uses AI to inflict harm should be exempted from the consequences, whether they are a developer, deployer or intermediary,” the report said. “The next Government, together with sectoral regulators, [should] publish guidance on where liability for harmful uses of AI falls under existing law.”

The government should also pay careful attention to the proliferation of deepfakes, said the report, as campaigning for the general election heats up. If online platforms are found to be slow to remove malicious deepfakes targeting politicians (inclusive of committee members, presumably), it added, “regulators must take stringent enforcement action – including holding senior leadership personally liable and imposing financial sanctions.”


