The first day of the UK’s AI Safety Summit will see delegates focus on the risks posed by next-generation frontier models. In two roundtable discussions, the attendees will talk about how developers should safely scale models, what the international community should do, and how policymakers can mitigate risk. The summit has been billed as the UK’s opportunity to put itself at the centre of the global debate on AI safety, but many in the industry have criticised the way it has been organised, saying that it feeds the agenda of Big Tech companies.
The summit is designed to bring countries, academics and the biggest AI labs together to discuss how to safely utilise the most capable AI models. Microsoft, OpenAI, Google, and Anthropic are expected to be at the conference, which is being held at Bletchley Park on 1 and 2 November.
Details have been drip-fed for the past few months, and the latest update, released today by the Department of Science, Innovation and Technology (DSIT), includes the agenda for the first day, confirming the focus on models currently in development rather than AI tools already in use. This will include GPT-5 from OpenAI, Claude 3 from Anthropic, and Google’s Gemini.
This approach has been criticised by civil society groups, AI start-ups and privacy campaigners. They argue that current AI technologies present a real danger, including around the use of facial recognition and other forms of biometric analysis. The focus on future risk ties into the government’s approach to AI regulation, with a look at ways to safely allow innovation using AI rather than direct regulation.
DSIT argues that next-generation models present the biggest risk and are therefore the most important place to start. “These are the most advanced generation of highly capable AI models, most often foundation models, that could exhibit dangerous capabilities,” a spokesperson said. “It is at the frontier where the risks are most urgent given how fast it is evolving, but also where the vast promise of the future economy lies.”
Digital ministers from around the world, civil society groups, and the largest AI companies will begin with a discussion of the risks emerging from the rapid advances of AI, before moving on to examine how to capitalise on its benefits safely.
“AI holds enormous potential to power economic growth, drive scientific progress and deliver wider public benefits, but there are potential safety risks from frontier AI if not developed responsibly,” summit organisers warned.
The day will begin with sessions on understanding the national security risks frontier AI presents as well as the dangers a loss of control over the model could bring. There will also be a discussion on issues surrounding misinformation, election disruption and an erosion of social trust as a result of the ability of AI to create fake material.
The second half of the day will focus on how to use the models safely, with delegates considering how risk thresholds, effective safety assessments, and robust governance and accountability mechanisms can be defined. The delegates will then look at how national policymakers can better manage the risks and harness the opportunities of AI to deliver economic and social benefits.
The final session of the first day will be a panel discussion on the transformative opportunities of AI for the “public good” in the short and long term. This will include a look at how teachers and students can use AI in education.
New £400k AI risk challenge
The agenda comes as DSIT also unveiled a £400,000 investment fund called the Fairness Innovation Challenge, designed to support schemes offering solutions to AI bias and discrimination. Winners, who will receive investments of up to £130,000, will be those offering a new approach to the bias problem by considering the wider social context of model development.
Fairness in AI systems is one of the government’s key principles for AI, as set out in the AI Regulation White Paper and part of the agenda for the upcoming summit. DSIT said AI is a powerful tool for good, presenting near-limitless opportunities to grow the global economy and deliver better public services.
In the UK, the NHS is already trialling AI to help medical professionals identify cases of breast cancer, develop new drugs and improve patient outcomes. The government is also using it to tackle climate change and other challenges, but the risks have to be identified and solutions found for the technology to be viable at scale.
Participants in the challenge will have access to a generative AI model from King’s College London, built on anonymised records of ten million British NHS patients to predict possible health outcomes. Part of the challenge will see them work on potential bias in that model. The second part involves presenting solutions to tackle discrimination in their own models and focus areas.
Secretary of State for Science, Innovation and Technology, Michelle Donelan, said it’s important to face up to the risks of frontier AI in order to “reap the enormous benefits this transformative technology has to offer”. She added: “AI presents an immense opportunity to drive economic growth and transformative breakthroughs in medicine, clean energy and education. Tackling the risk of AI misuse, so we can adopt this technology safely, needs global collaboration.”