The UK government has been urged to expand the scope of its upcoming AI safety summit. The international event in November will bring together political and business leaders from around the world and is currently set to focus on next-generation, highly advanced frontier AI models. However, the Ada Lovelace Institute, an AI research body, has warned that other forms of artificial intelligence also pose risks and should be considered.
The Department for Science, Innovation and Technology (DSIT) says it will engage with civil society groups, academics, and charities to examine different aspects of risk associated with AI. This will include a series of fringe events and talks in the run-up to the Bletchley Summit. However, given the severity of the risks associated with frontier models, the department says these will remain the summit’s focus.
Frontier models are defined as AI models larger or more powerful than those currently available. These will likely include multimodal models such as the upcoming GPT-5 from Microsoft-backed OpenAI, Google’s Gemini, and Claude 3 from Amazon-backed Anthropic.
DSIT says it is focusing the summit on frontier models because of the significant risk of harm they pose and the rapid pace of their development. The summit will address two key areas: misuse risk, particularly the ways criminals could use AI in biological or cyberattacks, and loss-of-control risk, which could arise if AI does not align with human values.
Current AI systems can cause significant harms
Michael Birtwistle, associate director of law and policy at the Ada Lovelace Institute, said there is considerable evidence that current AI systems are causing significant harm, ranging from “deep fakes and disinformation to discrimination in recruitment and public services”. Birtwistle said that “tackling these challenges will require investment, leadership and collaboration”.
He said: “We’ve welcomed the government’s commitment to international efforts on AI safety but are concerned that the remit of the summit has been narrowed to focus solely on the risks of so-called ‘frontier AI’.
“Pragmatic measures, such as pre-release testing, can help address hypothetical AI risks while also keeping people safe in the here-and-now.”
While international cooperation is an important part of the AI safety puzzle, Birtwistle said any action will need to be grounded in evidence and backed up by robust domestic legislation covering issues such as bias, misinformation, and data privacy.
“Artificial intelligence will undoubtedly transform our lives for the better if we grip the risks,” Technology Secretary Michelle Donelan said last week when unveiling the summit’s introductory documentation.
“We want organisations to consider how AI will shape their work in the future, and ensure that the UK is leading in the safe development of those tools.
“I am determined to keep the public informed and invested in shaping our direction, and these engagements will be an important part of that process.”