Ahead of the UK’s hotly anticipated AI safety summit, the government has announced it will engage with civil society groups, academics and charities to examine different aspects of risk associated with AI.
The AI Safety Summit is set to be held on 1 and 2 November at Bletchley Park, the home of British code-breakers during the Second World War and one of the birthplaces of modern computing. Three of the world’s largest AI labs – OpenAI, Google’s DeepMind and Anthropic – are expected to attend, as are Microsoft and other Big Tech companies. As Tech Monitor has reported, this has led to concerns that the agenda will be shaped by the interests of Big Tech.
To address these concerns, the Department for Science, Innovation and Technology (DSIT) has partnered with groups including techUK, The Alan Turing Institute and the Royal Society to host a series of talks, debates and events in the run-up to the summit. These will include explorations of AI in industries such as healthcare and education.
While the main focus of the Bletchley Summit will be on frontier models – the next-generation AI tools like GPT-5 from OpenAI, Google’s Gemini and Claude 3 – the third-sector events will take a broader view of the impact of AI on society. This also builds on a key objective set out by the government to utilise “AI for good”, including in public services and the NHS.
DSIT says the summit is focused on frontier models because of the significant risk of harm they pose and the rapid pace of their development. The summit will cover two key areas of risk: misuse, particularly the ways criminals could use AI in biological or cyberattacks, and loss of control, which could occur if AI does not align with human values.
AI Safety Summit to focus on frontier AI models
A spokesperson said this focus at the summit will allow for conversations on the ways nations can work together to meet these challenges, combat the misuse of models and use AI for “real, tangible public good across the world”. It comes a week after the UK launched an AI for international development campaign at the UN, which aims to use artificial intelligence to boost growth in developing nations and identify potential crises in advance.
Technology Secretary Michelle Donelan said in a statement that AI will transform our lives, so it is vital to manage the risks. “Artificial intelligence will undoubtedly transform our lives for the better if we grip the risks,” Donelan said. “We want organisations to consider how AI will shape their work in the future, and ensure that the UK is leading in the safe development of those tools. I am determined to keep the public informed and invested in shaping our direction, and these engagements will be an important part of that process.”
Part of that process will also see public engagement events held in the run-up to, and after, the summit in November. These will include a Q&A on X, the platform formerly known as Twitter, with Matt Clifford, the prime minister’s representative for the AI Safety Summit, on 2 October, and a Q&A on LinkedIn with Donelan on 18 October. The keynotes from the summit will also be live-streamed on social media.
Speaking to Tech Monitor last month, Ryan Carrier, CEO of AI standards organisation ForHumanity, said the global summit starts from what he considers the correct perspective: safety and protection from harm. “This is the ‘right’ perspective because corporations will advance benefits and innovation, but often at the expense of safety until they are otherwise held accountable,” Carrier said.
He added that the UK can lead in two ways: by establishing that innovation should be pursued hand in hand with safety testing, and by ensuring that a diverse range of voices is involved in the development of standards and regulations.