Some of the largest AI labs, including Anthropic, OpenAI and Google DeepMind, will offer UK researchers early access to new foundation models. The move, revealed by Prime Minister Rishi Sunak at London Tech Week, is intended to aid safety and risk-mitigation research.

Rishi Sunak described AI as transformative during a fireside chat with Google DeepMind founder Demis Hassabis (Photo by Ian Vogler – WPA Pool / Getty Images)

How to manage the risks and regulation of large general-purpose AI models is widely contested. All of the major AI labs are pushing for some degree of regulation, but how far it should extend and how the burden should be distributed is not yet clear.

AI safety training and regulation

Sunak told delegates during his keynote opening the 10th anniversary London Tech Week that cutting-edge safety research would be a key component of the UK's AI policy. This would see the government partner with academia and the AI companies to ensure the technology is being safely developed, deployed and used throughout the economy.

Some of the funding for this research will come from the £100m foundation AI taskforce announced earlier this year. The group will also lead on proposing standards and guardrails that would then be put forward for international discussion.

Sunak said the labs were “committed to give early or priority access to models for research and safety purposes to help us understand risks of these systems.”

To date, all of the major labs have conducted their own risk analysis research, including a recent proposal from Google DeepMind for a framework to measure severe risks from future advanced systems. Under the new arrangement, some of that research would be handed to British academic institutions.

Sunak said it was vital that there was international cooperation on the standards and regulations introduced to govern AI, declaring that "AI doesn't respect borders." He pointed to the international AI summit due to be held later this year in the UK, comparing it to the COP climate change summits, which have in the past produced global agreements to limit temperature rises through reductions in carbon-emitting activities.

He said we need to “lead at home, lead overseas and lead change in our public services,” explaining the need for more AI use in health and education. “This idea that every job having AI as a copilot, making everyone’s job a little easier and a little more productive, replicated across an entire economy has the potential to be transformative,” the prime minister said.

Speaking during a fireside chat with Demis Hassabis, founder of Google DeepMind, Sunak said regulation needs to strike a balance between supporting innovation and providing appropriate protections, something he feels the UK has a good track record of achieving.

“We approach things with a principles first logic,” he said. “The government is focused on engaging with industry to make sure we understand how innovation is happening, creating a safe space to make it happen.” This could include liberal use of regulatory sandboxes and creating incentives to make the technology accessible to the entire population.

Benefits of AI highlighted at London Tech Week

Highlighting the number of AI-related companies opening offices in the UK, Sunak said this puts the country in a good position to create dialogue and drive the direction of standards. He explained that while there are real risks, not all risks should be dealt with in the same way.

“Government has to ensure everyone has access to education and skills at every point in their lives,” he said, noting that AI is impacting jobs and changing the types of roles that are available. He said this includes “changing how we fund education, so it is something you want to come back to over your entire career.”

Hassabis told delegates that there are “risks” with any new transformative technology. He said: “Different categories of risk need to be mitigated in different ways. We need to understand and research those systems to have a better handle on boundaries of those systems.”

He believes this will then let governments “put the guardrails in place”, adding: “The right way to proceed is with exceptional care, be optimistic about opportunities and use the scientific method to study and analyse these systems as they get more powerful.”

Tech Monitor has approached OpenAI, Anthropic and Google DeepMind for comment on how the model access will work. Anthropic was the only one to respond, saying: “We are in support of Prime Minister Sunak’s work on research and safety and look forward to collaborating with this effort and others like it.” 

Read more: Is UK government ready to abandon its approach to AI regulation?