The UK and US have forged a new partnership on artificial intelligence (AI) safety. Effective immediately, it marks one of the world’s first collaborative efforts between jurisdictions to establish greater AI safety and understanding.
The Memorandum of Understanding was signed on 1 April 2024 by UK technology secretary Michelle Donelan and US commerce secretary Gina Raimondo. It aims to accelerate robust evaluations of AI models, systems and agents amid AI’s rapid adoption across the globe. The partnership will facilitate joint efforts to test the safety of existing and emerging AI models through information sharing and personnel exchanges.
The move underlines that the US and UK both see AI regulation as a “shared global issue”, Donelan said.
Testing the safety of AI models
In November 2023, the AI Safety Summit hosted at Bletchley Park saw 28 countries, including the UK, US and China, sign the Bletchley Declaration, an agreement to work together on AI safety. At the event, UK Prime Minister Rishi Sunak unveiled the world’s first state-backed AI safety body, the AI Safety Institute (AISI), as part of his plan to improve AI regulation.
This collaboration will enable the AISI and its US equivalent to analyse emerging AI models created by the likes of Google, Microsoft and OpenAI.
The AISI is chaired by tech investor and entrepreneur Ian Hogarth and has recruited researchers to embark on this analysis.
The partnership models the existing dynamic between the UK’s Government Communications Headquarters (GCHQ) and the US National Security Agency, both of which focus on intelligence and security.
“The work of our two nations in driving forward AI safety will strengthen the foundations we laid at Bletchley Park in November, and I have no doubt that our shared expertise will continue to pave the way for countries tapping into AI’s enormous benefits safely and responsibly,” said Donelan.
Tackling risks of AI globally
This landmark partnership reflects the demand to research and implement clearer AI guardrails across borders to minimise cyber risks and misuse of data and AI amid growing global concerns about personal and national security.
On 13 March, the European Union (EU) passed the EU AI Act, the world’s first comprehensive set of rules placing guardrails on AI. Just last week, the Biden Administration also announced its new AI policy, which requires all US federal agencies to appoint chief AI officers to ensure greater safety in government use of AI for the public.
“AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” said Raimondo. “Our partnership makes clear that we aren’t running away from these concerns – we’re running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”