Limiting the UK’s AI Safety Summit to just 100 delegates from government, academia and the largest technology companies is a mistake, experts have warned. The warning comes after one of the event chairs revealed that attendance at the landmark summit would be restricted to the businesses building the most advanced AI models. But this risks turning it into a “closed door, undemocratic meeting with industry”, Tech Monitor has been told.
Writing on X, Matt Clifford, who is co-organising the Bletchley Park summit on behalf of the government, said the decision had been taken to narrow the scope of the event to those companies building future models that pose the highest risk. Clifford added that discussions will continue after the two-day event.
Prime Minister Rishi Sunak announced the summit to much fanfare earlier this year. It is a bid to put the UK at the centre of the conversation around AI safety, with Sunak and his government keen to talk up the nation as a destination for AI investment.
But fears have been raised that the agenda will be set by Big Tech companies. The Department for Science, Innovation and Technology (DSIT) tried to quell these worries last week, announcing it will engage with civil society groups, academics, and charities to examine different aspects of risk associated with AI. This will include a series of fringe events and talks. However, due to the severity of risk associated with frontier models, the department says the event itself will retain a narrow focus.
Frontier models are defined as AI models larger or more powerful than any currently available. Clifford said in his X thread that the summit would primarily concern models due for release in 2024. These will likely include multimodal models such as the upcoming GPT-5 from Microsoft-backed OpenAI, Google’s Gemini, and Claude 3 from Anthropic.
“There’ll be about 100 attendees, roughly split between Cabinet ministers from around the world, CEOs of companies building AI at the frontier, academics, and representatives of international civil society,” Clifford wrote. “There is no more vocal champion for startups than me! This summit, though, is narrowly focused on frontier risk, so it’s appropriate that the company attendees are those building the most powerful models.”
AI safety summit must ‘focus on future risk’
He said it isn’t a case of letting the Big Tech giants “pull up the drawbridge”, adding that he is fully aware of that risk and that the summit is about achieving the opposite. “Companies building systems with potentially dangerous capabilities should be subject to greater scrutiny; companies with a narrower focus should be free to innovate,” he said.
The AI Safety Summit is set to be held on 1 and 2 November in Bletchley Park, the home of British code-breakers during the Second World War and one of the birthplaces of modern computing.
Ryan Carrier, CEO of AI standards group ForHumanity, says limiting attendance at the summit will do nothing to quell concerns in the wider tech community that the interests of the major AI labs will dominate. He argues the event appears designed to foster the government’s “pro-innovation” agenda at the expense of impacted groups, and says it risks becoming a “closed door, undemocratic meeting with industry”.
Carrier says: “Failure to assess risk from AI from the perspective of all impacted stakeholders is to ensure that harms go unaddressed and it is an important opportunity lost.” He believes that focusing on frontier models and major developers “assumes that government and corporations have sufficient perspective to know and understand the harms and it wastes resources readily at the government’s disposal”. He adds: “It becomes lip service, not genuine care for impacted persons who are now voiceless.”
Ekaterina Almasque, general partner at deep tech venture capital firm OpenOcean, says key players in the industry should have a seat at the table, but that excluding prominent startups risks decisions being made without critical input from those on the front lines.
“To properly regulate AI in a way that fosters innovation, we need to make every effort to connect investors, startups, and policymakers,” she says. “This involves increasing R&D budgets, creating sovereign funds to support strategic initiatives, and attracting top talent into startups. However, we cannot adequately take these steps without the presence and perspective of the startups themselves.”
Speaking late last month, Michael Birtwistle, associate director of law and policy at AI research organisation the Ada Lovelace Institute, said: “We’ve welcomed the government’s commitment to international efforts on AI safety but are concerned that the remit of the summit has been narrowed to focus solely on the risks of so-called ‘frontier AI’.”
OpenOcean’s Almasque adds: “If the UK government wishes to turn the UK into a new Silicon Valley, the industry needs grassroots support.
“It is vital that industry voices are included when shaping regulations that will directly impact technological development. Many countries around the world are making these efforts to back their AI ecosystem. The UK must do the same if it is to avoid falling behind its peers.”
Tech Monitor has approached DSIT for comment.