November 2, 2023 (updated 3 November 2023, 11:19am)

Details of UK AI Safety Institute revealed at Bletchley Park summit

Prime Minister Rishi Sunak hopes the institute will help the UK lead the conversation on AI safety.

By Matthew Gooding

Tech companies and governments from around the world have backed the UK’s plan for an AI Safety Institute after more details of the organisation were revealed at the AI Safety Summit at Bletchley Park.

Prime Minister Rishi Sunak speaks with US VP Kamala Harris at the end of the AI Summit at Bletchley Park. (Picture by Simon Dawson/No 10 Downing Street)

Prime Minister Rishi Sunak announced plans to create the safety institute, which will test new AI models to pinpoint potential safety issues, in a speech last week. Today it was revealed that the new body will build on the work of the UK’s Frontier AI Task Force, and will be chaired by Ian Hogarth, the tech investor who has been running the task force since it was created earlier this year.

Partners buy in to Sunak’s UK AI Safety Institute

According to a brochure released today by the government, the institute will carefully test new types of frontier AI before and after their release to address the potentially harmful capabilities of AI models. Its remit spans the full range of risks, from social harms such as bias and misinformation to more extreme scenarios such as humanity losing control of AI systems.

Hogarth will chair the organisation, with the Frontier AI Task Force’s advisory board, made up of leading industry figures, moving across to the institute, too. A CEO will be recruited to run the new organisation, which will work closely with the Alan Turing Institute for data science.

At the Bletchley Park summit, which concludes today, the new AI Safety Institute was backed by governments including the US, Japan and Canada, tech heavyweights such as AWS and Microsoft, and AI labs including OpenAI and Anthropic.

Sunak said: “Our AI Safety Institute will act as a global hub on AI safety, leading on vital research into the capabilities and risks of this fast-moving technology.

“It is fantastic to see such support from global partners and the AI companies themselves to work together so we can ensure AI develops safely for the benefit of all our people. This is the right approach for the long-term interests of the UK.”

AI Safety Summit draws to a close

Whether the UK institute will become the global standard bearer for AI safety research is questionable, given that the US government launched its own safety institute earlier this week. The UK says it has agreed a partnership with the US institute, as well as with the government of Singapore, to collaborate on AI safety testing.

The first task for the institute will be to put in place the processes and systems to test new AI models before they launch, including open-source models, the government said.

Governments and tech companies attending the summit agreed to work together on safety testing for AI models. Meanwhile, Yoshua Bengio, a computer scientist who played a key role in the development of deep neural networks, the technology that underpins many AI models, is to produce a report on the state of the science behind artificial intelligence. It is hoped this will help build a shared understanding of the capabilities and risks posed by frontier AI.

Sam Altman, OpenAI CEO, said: “The UK AI Safety Institute is poised to make important contributions in progressing the science of the measurement and evaluation of frontier system risks. Such work is integral to our mission – ensuring that artificial general intelligence is safe and benefits all of humanity – and we look forward to working with the institute in this effort.”

The AI Safety Summit programme ended this afternoon, with Sunak holding a series of meetings with political leaders, including European Commission president Ursula von der Leyen. Later this evening he will take part in a question-and-answer session with Tesla CEO Elon Musk, who has not endorsed the new AI Safety Institute.

As reported by Tech Monitor, yesterday 28 countries, including the UK, US and China, signed the Bletchley Declaration, an agreement to work together on AI safety. The government also announced it is funding a £225m supercomputer, Isambard-AI, at the University of Bristol.

Read more: The UK is building a £225m AI supercomputer
