September 28, 2023 (updated 29 Sep 2023, 10:10am)

UK government urged to widen scope of AI safety summit beyond frontier models

The government's international AI safety summit will focus on frontier models, but current systems can also be dangerous.

By Ryan Morrison

The UK government has been urged to expand the scope of its upcoming AI safety summit. The international event in November will bring together political and business leaders from around the world and is currently set to focus on next-generation, highly advanced frontier AI models, but AI research body the Ada Lovelace Institute has warned that other forms of artificial intelligence also pose risks and should be considered.

The Ada Lovelace Institute says there are models in use today that are capable of causing significant harm. (Photo by Blue Planet Studio/Shutterstock)

The Department for Science, Innovation and Technology (DSIT) says it will engage with civil society groups, academics, and charities to examine different aspects of risk associated with AI, including through a series of fringe events and talks in the run-up to the Bletchley Summit. However, due to the severity of the risks associated with frontier models, the department says these will remain the focus.

Frontier models are defined as AI models larger or more powerful than those currently available. The category will likely include multimodal models such as the upcoming GPT-5 from Microsoft-backed OpenAI, Google’s Gemini, and Claude 3 from Amazon-backed Anthropic.

DSIT says it is focusing the summit on frontier models because of the significant risk of harm they pose and the rapid pace of their development. The summit will cover two key areas: misuse risks, particularly the ways criminals could use AI in biological or cyberattacks, and the loss of control that could occur if AI does not align with human values.

Current AI systems can cause significant harms

Michael Birtwistle, associate director of law and policy at the Ada Lovelace Institute, said there is considerable evidence current AI systems are causing significant harm. This ranges from “deep fakes and disinformation to discrimination in recruitment and public services”. Birtwistle said that “tackling these challenges will require investment, leadership and collaboration”. 

He said: “We’ve welcomed the government’s commitment to international efforts on AI safety but are concerned that the remit of the summit has been narrowed to focus solely on the risks of so-called ‘frontier AI’.

“Pragmatic measures, such as pre-release testing, can help address hypothetical AI risks while also keeping people safe in the here-and-now.”


While international cooperation is an important part of the AI safety puzzle, Birtwistle said any action will need to be grounded in evidence and backed up by robust domestic legislation covering issues such as bias, misinformation, and data privacy.

“Artificial intelligence will undoubtedly transform our lives for the better if we grip the risks,” Technology Secretary Michelle Donelan said last week when unveiling the summit’s introductory documentation.

“We want organisations to consider how AI will shape their work in the future, and ensure that the UK is leading in the safe development of those tools.

“I am determined to keep the public informed and invested in shaping our direction, and these engagements will be an important part of that process.”

Read more: UK government plans charm offensive ahead of AI Safety Summit
