
Rishi Sunak meets with AI industry leaders to discuss technology safeguards

The government says it wants developers to put guardrails in place so AI is deployed ethically.

By Ryan Morrison

Prime Minister Rishi Sunak met with leaders from three of the largest AI labs to discuss the need for guardrails and limitations on the technology. He also extolled the benefits of the UK taking an “agile” approach to AI regulation. In practice, this is likely to mean the UK waiting to see what the US does and following that approach, one expert told Tech Monitor.

Rishi Sunak met with leaders from OpenAI, Google DeepMind and Anthropic on the role of guardrails in AI. (Photo courtesy of UK Government)

Representatives from OpenAI, Google DeepMind and Anthropic were at the meeting, held in Downing Street last night. Between them, the companies create some of the most high-profile foundation models, which power chatbots such as ChatGPT, Bard and Claude. They are also leaders in the development of artificial general intelligence (AGI), regarded as the point at which AI can effectively think for itself.

The group discussed what actions are needed to ensure AI is developed in a safe and responsible way, a Number 10 statement said. Sunak told the group that the success of the technology will rely on having the right guardrails in place to give the public confidence in its safety. “Done safely and securely, AI has the potential to be transformational and grow the economy,” Sunak said. “It was an important discussion and between us we covered a lot of ground.” 

The Prime Minister has repeatedly stated his ambition for the UK to become a science and technology superpower by 2030. “Harnessing the potential of AI provides huge opportunities to grow our economy, create better-paid jobs and drive advances in healthcare and security,” Sunak said. But he warned, “we must ensure this new and exciting technology is developed safely and responsibly.”

Sunak will also meet with Google CEO Sundar Pichai on Friday to continue talks on the impact of generative AI and other foundation model technologies. While the meeting will focus on AI, it is also expected to examine how to make the UK an attractive place to do business.

According to an official statement on the meeting, the prime minister also extolled the UK’s ambitions to advance its capability in AI and use the technology to deliver better outcomes for the public and improve public services. This includes work through the Foundation Model Taskforce, which was given a £100m start-up budget earlier this year.

“The CEOs agreed to work closely with the Foundation Model Taskforce,” a government spokesperson said. “AI will improve life dramatically, from transforming industries to delivering scientific breakthroughs. The PM and CEOs committed to work together to ensure society benefits from such transformation.”

How guardrails for AI could be implemented

The impact of foundation AI models, such as those that power tools like ChatGPT and Midjourney, is already being felt throughout the economy. Companies are laying off thousands of staff as AI makes their roles redundant, some of the largest tech giants have integrated the technology throughout their business models, and governments are tasking regulators with examining how to contain its spread.

Adam Leon Smith, senior technologist and CTO of Neuro and Dragonfly, and an expert in AI standards and regulation, told Tech Monitor that the guardrails mentioned by Rishi Sunak “are not some kind of mystical future thing we need”. Instead, Leon Smith argues, they are well understood and relatively straightforward to implement.

“Regulators need to mandate that transparency about training and testing of systems is provided throughout the supply chain,” Leon Smith says. “Risk assessments need to be conducted and we need human oversight and real-time monitoring of AI systems where risks are present.” 

He added: “None of these present technical challenges. We wouldn’t put a household electrical device on the market without a regulatory ecosystem saying it was safe, why are we OK to do that with AI systems?”

Emma Wright, head of technology at Harbottle & Lewis and counsel and director of the Institute of AI, said the need for guardrails is a focus of the G7 and marks a shift from AI being seen simply as “a significant opportunity within a country’s industrial policies”.

So far, 193 countries have signed up to the UNESCO Recommendation on the Ethics of AI, which focuses on putting human rights at the heart of AI development. This, she says, will still need to be implemented at a national level but is a welcome sign. “Ultimately the reference to ‘agile’ approach to regulation is a likely signal that the UK will wait and then closely follow the US lead on how it chooses to regulate AI rather than the EU legislative approach,” Wright says.

Jamie Moles, senior technical manager at ExtraHop, which produces software to alert companies if data is put into tools like ChatGPT, says barriers and limitations are not the right approach. He believes it is too early for regulation of the technology as the industry is still in its infancy and people can only speculate on the impact it will have on productivity and jobs. “It’s better to let the tech develop a bit more and see how things pan out,” Moles says.

AI rules are a global challenge

Ryan Carrier, founder and CEO of ethical AI certification group ForHumanity, says democratisation of the guardrails is important. “Given that these tools are already proven to be amplifiers of misinformation and disinformation, I think the lessons from Brexit and recent US elections tell us that Big Tech and democracy are like oil and water – they don’t mix well together,” Carrier says. He adds: “All stakeholders, especially the citizens of the UK, should have an important role in establishing the rules and guardrails for the use of powerful tools that impact them.”

Heather Dawe, head of data at digital transformation company UST, says guardrails need to protect against the risk of bias and discrimination from AI models trained on biased data. “The guardrails for such AI can be based on existing anti-discrimination and equality and diversity laws,” she says. “Combining these laws with methods that explain the machine learning models that underpin AI, we can ensure that the models adhere to these laws.”

Kriti Sharma, chief product officer for legal tech at Thomson Reuters, says “fundamental guardrails” should be put in place quickly in the form of regulations. This, she says, will “help address key areas of concern such as transparency, bias, and accuracy – and help the industry first build trust, drive adoption, and in time, enable users to feel the productivity benefits that we know exist.”

Sharma feels that “one company cannot achieve this alone, it will require an industry-wide approach”. However, she says: “We need to get comfortable with the concept of regulating for what we know now – and being prepared to course correct as we go. Putting the right guardrails in place will ensure we reach communities with the benefits of AI, such as facilitating access to justice and driving financial inclusion.”

