The UK’s much-hyped AI safety summit, announced by Prime Minister Rishi Sunak earlier this year, will be held in early November, it has been revealed. The aim is to bring together government, industry and academia from around the world to set common safety standards for AI. But it is vital that a wide variety of voices is heard, not just those of the Big Tech companies developing the most popular AI models, Tech Monitor has been told.
It will be the first global summit of its kind focused on the safety of AI, particularly foundation and large language models such as those behind tools like OpenAI’s ChatGPT. It is set to be held at Bletchley Park in Buckinghamshire, which was home to Allied code-breakers during the Second World War and is the purported birthplace of modern computing.
Three of the world’s largest AI labs – OpenAI, Google’s DeepMind and Anthropic – are expected to attend, as are Microsoft and other Big Tech companies with a focus on artificial intelligence. Earlier this month the government confirmed that Matt Clifford, CEO of Entrepreneur First, and Jonathan Black, Heywood Fellow at the Blavatnik School of Government at the University of Oxford, would lead event planning.
Specifics of the event have not been released, but the government has promised that details will be set out “in due course”. The biggest challenge appears to be finding a date that works for the major AI powers and the biggest companies without clashing with other major events.
While the concept of the summit came out of a meeting between Sunak and US President Joe Biden, it is believed the UK is keen to get China involved, as the country is a leading AI power. The aim is to find common ground on regulation, safety and guardrails.
Outside of the EU, most AI regulation has been on a voluntary, industry-led basis. The European Union is in the final stages of approving its comprehensive AI Act, which covers all aspects and uses of the technology. It includes prescriptive rules on training data and reporting, as well as prohibitions on “harmful uses” that infringe on privacy.
The UK and the US have so far focused on ensuring the safety of future models while taking a pro-innovation approach. The US recently signed a voluntary agreement with the big labs under which they will give AI safety researchers early access to their large future models before releasing them to the public.
AI safety and the search for common ground
The aim of the UK AI summit is to find a degree of common ground, ensuring a consistent global approach to AI regulation and safety that doesn’t leave one country disadvantaged over another. Sunak has said he wants to see the establishment of global oversight bodies similar to those used in the regulation of nuclear energy.
Ryan Carrier, CEO of AI standards organisation ForHumanity, said the global summit starts from what he considers the correct perspective: safety and protection from harm. “This is ‘the right’ perspective because corporations will advance benefits and innovation, but often at the expense of safety until they are otherwise held accountable,” Carrier says. “The UK can lead in two ways; first by establishing that innovation ought to be achieved simultaneously with safety testing, similar to successful industries such as pharmaceuticals and transportation, instead of backfilling risk mitigations and treatments.”
Carrier says the UK can also lead by ensuring that a diverse pool of inputs from a range of stakeholders is in place throughout the safety assessment process. “The UK has an integrated approach that is inclusive of developers, academia, government, and civil society that can encourage other countries to mimic the inclusion of a wide and comprehensive pool of stakeholders that increases the chances to improve safety,” he explains.
“By holding this summit, the UK is equipped to change the narrative of advancement at any cost to one that recognizes that safe advancement leads to a more sustainable future, founded in trust and benefits that are accrued to a broader spectrum of the citizenry and not held exclusively by corporations.”
This is “a reasonable and achievable goal for a global summit on AI Safety,” he adds.
Jaeger Glucina, chief of staff and managing director at AI lawtech vendor Luminance, said the government inviting executives from the leading AI labs means Big Tech will cement its place at the forefront of the global AI conversation. She says the reality is that this cohort of companies “represents a collection of industry leaders with a significant resource advantage that will all be approaching the summit with a good deal of self-interest.”
Glucina argues that “in order to be truly successful, Sunak’s summit should – and must – focus on advancing AI itself. By only inviting the ‘big players’ in the space, it will risk monopolisation, especially in areas such as regulation.”
She adds it is “vital for the government to ensure that the voices of all those in AI are captured at the summit,” and concludes: “Indeed, the AI conversation should be a global and inclusive effort. It is only then that Sunak will achieve his ambition for the UK to become an AI powerhouse.”