
Future UK laws on AI to be informed by key ‘tests’

Keen to maintain its light-touch regulatory posture, the UK government intends that certain thresholds must be met before it passes new AI laws.

By Greg Noone

The UK government will soon publish thresholds for the passage of new laws governing AI. According to the Financial Times, which broke the story, these new “tests” will define when, and to what extent, the government will pass new legislation on the appropriate use of the technology. Scenarios being contemplated include major AI developers failing to abide by commitments to develop safe systems, or the UK’s AI Safety Institute failing to identify risks in a new application that subsequently proliferate after its release.

Silhouettes of web developers at a conference in London. The UK government has so far maintained a ‘light touch’ approach to AI regulation, and its expected publication of defined thresholds for passing new legislation governing the technology is in keeping with that stance. (Photo by kovop/Shutterstock)

UK government’s ‘pro-innovation’ approach to AI

The publication of these tests will be in keeping with the UK’s light-touch approach to regulating AI. This was confirmed in a statement published by the government in November. “We will take action to mitigate risks and support safe and responsible AI innovation as required,” it said, but it would maintain a “pro-innovation approach” in close consultation with civil society and industry. This philosophy will also run through the “tests” for passing AI legislation to be proposed by the UK government, according to the FT, with the proviso that any new laws would not impair innovation without just cause.

It is understood that the tests are set to be published as part of the consultation process for the government’s AI white paper, published in March 2023. The paper was not without its critics. According to the University of Birmingham’s Professor Karen Yeung and PhD candidate Emma Ahmed-Rengers, the document was an “inadequate basis for sound policy, let alone the foundations of an effective and legitimate regulatory framework that will serve the public interest”. Others, meanwhile, have pointed out that the task of regulating AI in the UK has begun to be taken up by sectoral regulators, with watchdogs including Ofcom and the Information Commissioner’s Office beginning to conduct algorithmic audits in their respective areas of jurisdiction.

UK active on AI norms internationally

Another prospective threshold for legislative action by the UK government on AI would be if major industry players were not abiding by previous commitments to create safe and transparent systems. Extracting these pledges from the likes of OpenAI, Microsoft and Google has been a policy priority for the UK, which has proved especially active on the global stage in attempting to define international norms on AI. Its organisation of an international AI safety summit last year resulted in the Bletchley Declaration, where nations including the UK, US, India and China agreed to collaborate on researching the long-term risks and rewards of the technology. 

When approached for comment, the UK government’s Department for Science, Innovation and Technology declined to confirm whether the criteria for passing future AI legislation would be published imminently. “We set out our pro-innovation approach to regulating AI in our white paper last year and are working closely with regulators to make sure we have the necessary guardrails in place – many of whom have started to proactively take action in line with our proposed framework,” a spokesperson told Tech Monitor. “As the Technology Secretary said in December, we want to make sure we get this right and we will publish our response to the consultation shortly – in the meantime, we will not speculate on what may or may not be included.”

Read more: CMA outlines principles for selling AI models
