September 25, 2023

UK government plans charm offensive ahead of AI Safety Summit

The summit in November will focus on next-generation frontier models, but fears had been raised that Big Tech would set the agenda.

By Ryan Morrison

Ahead of the UK’s hotly anticipated AI safety summit, the government has announced it will engage with civil society groups, academics and charities to examine different aspects of risk associated with AI.

Technology Secretary Michelle Donelan will meet civil society groups ahead of November’s AI summit. (Photo by Fred Duval/Shutterstock)

The AI Safety Summit is set to be held on 1 and 2 November at Bletchley Park, the home of British code-breakers during the Second World War and one of the birthplaces of modern computing. Three of the world’s largest AI labs – OpenAI, Google’s DeepMind and Anthropic – are expected to attend, as are Microsoft and other Big Tech companies. As reported by Tech Monitor, this led to concerns that the agenda would be shaped by the interests of Big Tech.

To address these concerns, the Department for Science, Innovation and Technology (DSIT) has partnered with groups such as techUK, The Alan Turing Institute and the Royal Society to host a series of talks, debates and events in the run-up to the summit. These will include an exploration of AI in different industries, such as healthcare and education.

While the main focus of the Bletchley Park summit will be on frontier models – next-generation AI tools such as OpenAI’s GPT-5, Google’s Gemini and Anthropic’s Claude 3 – the third-sector events will take a broader view of the impact of AI on society. This also builds on a key objective set out by the government to utilise “AI for good”, including in public services and the NHS.

DSIT says the summit is focused on frontier models because of the significant risk of harm they pose and the rapid pace of their development. The summit will cover two key areas: misuse risk, particularly the ways criminals could use AI in biological or cyberattacks, and the loss of control that could occur if AI does not align with human values.

AI Safety Summit to focus on frontier AI models

A spokesperson said the focus on this at the summit will allow for conversations on the ways nations can work together to meet the challenges, as well as combat the misuse of models and use AI for “real, tangible public good across the world”. It comes a week after the UK launched an AI for international development campaign at the UN, which aims to use artificial intelligence to boost growth in developing nations and identify potential crises in advance.

Technology Secretary Michelle Donelan said in a statement that AI will transform our lives so it is vital to manage risks. “Artificial intelligence will undoubtedly transform our lives for the better if we grip the risks,” Donelan said. “We want organisations to consider how AI will shape their work in the future, and ensure that the UK is leading in the safe development of those tools. I am determined to keep the public informed and invested in shaping our direction, and these engagements will be an important part of that process.”


Part of that process will also see public engagement events in the run-up to and after the summit in November. These will include a Q&A on X, the platform formerly known as Twitter, with Matt Clifford, the prime minister’s representative for the AI Safety Summit, on 2 October, and a Q&A on LinkedIn with Donelan on 18 October. The keynotes from the summit will also be live-streamed on social media.

Speaking to Tech Monitor last month, Ryan Carrier, CEO of AI standards organisation ForHumanity, said the global summit starts from what he considers the correct perspective: safety and protection from harm. “This is the ‘right’ perspective because corporations will advance benefits and innovation, but often at the expense of safety until they are otherwise held accountable,” Carrier said.

He added that the UK can lead in two ways: by establishing that innovation should be pursued alongside safety testing, and by ensuring a diverse range of voices is involved in developing standards and regulations.

Read more: Five priorities set for UK’s AI Safety Summit
