The UK will legislate on AI risks next year, the secretary of state at the Department for Science, Innovation and Technology (DSIT) has confirmed. Addressing the Financial Times' Future of AI summit, Peter Kyle said that the government aims to implement a legal framework for AI and strengthen the infrastructure required to promote development in the sector.

Kyle explained that Britain's current voluntary AI testing agreements are functioning but require a legally binding element for leading developers. The upcoming AI bill, to be presented in the current parliamentary session, will formalise these voluntary accords. It will also grant the AI Safety Institute independence from DSIT by converting it into an arm's-length government body.

At last year's AI safety summit, hosted by the UK, major companies including OpenAI, Google DeepMind, and Anthropic signed a non-binding agreement allowing partner governments to evaluate their large language models for risks before release. Despite his optimism about AI, Kyle stressed the importance of reassuring the public that risk management measures are in place.

AI risk legislation to focus on frontier models and investment in computing power

The proposed legislation will focus on advanced “frontier” models that generate text, images, and videos. Additionally, Kyle committed to investing in computing power to enable the UK to develop its own sovereign AI models. This announcement follows the government’s decision to cancel an £800m exascale supercomputer project at Edinburgh University greenlit by the previous Conservative administration.

Defending the cancellation of the Edinburgh exascale project, the DSIT secretary described it as a consequence of the financial situation inherited from the Conservatives. “I didn’t cut anything because you can’t cut something that doesn’t exist,” Kyle said, referring to the previous administration’s failure to allocate funding for the initiative.

The technology secretary also acknowledged that the government cannot single-handedly provide the estimated £100bn required for computing infrastructure, necessitating collaboration with private companies and investors.

To bridge the gap in infrastructure development, Kyle outlined upcoming plans for both sovereign and general computing capacities. These initiatives aim to support researchers and businesses across the UK, ensuring the necessary infrastructure is in place for continued AI growth. The DSIT secretary further pledged that these initiatives would be “funded, costed and delivered,” providing concrete support for the UK’s AI ambitions.

On Wednesday, DSIT announced the launch of a new AI assurance platform to help businesses mitigate risks associated with implementing AI tools. The as-yet-unnamed platform will provide guidance and practical resources, outlining "clear steps" for conducting impact assessments and evaluating data to identify bias. Additionally, a new self-assessment tool will be introduced to assist small and medium-sized enterprises (SMEs) in implementing "responsible AI management practices."

Read more: UK government unveils targeted AI assurance support for businesses