
UK government puts up £50m in fresh funding for AI safety research

The cash will be spent building tools and programmes to help organisations roll out artificial intelligence more safely.

By Ryan Morrison

The government has announced more than £50m funding for a range of artificial intelligence-related projects including £30m for the creation of a new responsible AI ecosystem. There will also be money for 13 projects aimed at using AI to reach net zero, and £2m for work on helping businesses adopt the technology.

Millions will be spent on projects to secure the use and deployment of AI. (Photo by aslysun/Shutterstock)

The funding was announced this morning by technology secretary Chloe Smith during her speech at London Tech Week. The cash comes from UK Research and Innovation (UKRI), and will fund programmes including two new AI research fellowships in the name of Alan Turing, worth £8m. They were awarded to Professor Michael Bronstein and Professor Alison Noble, both based at the University of Oxford.

Artificial intelligence, which is rapidly evolving across science, technology and wider society, has been listed as a “critical technology” by Rishi Sunak’s government. The government sees it as holding “massive potential benefits to the economy and society”.

The biggest funding block as part of this AI announcement is worth £31m and was awarded to a consortium dubbed “Responsible AI”. It is led by the University of Southampton with the aim of developing an innovation ecosystem for responsible and trustworthy AI. The goal is to use the controversial technology in a way that is “responsive to the needs of society”.

News of the fresh funding comes as the government seeks to understand the best way to regulate AI, including opening access to massive foundation models for safety research. During his London Tech Week keynote on Monday Prime Minister Sunak confirmed OpenAI, Anthropic and Google DeepMind had all agreed to provide early access to new models to researchers.

The Responsible AI consortium is being led by Professor Gopal Ramchurn and will work across universities and businesses, and with the public and third sectors, to fund and carry out research that leads to a greater understanding of what responsible and trustworthy AI looks like.

They will also explore how to develop responsible AI, how to build it into existing systems, and the impacts it will have on society. The goal is to create a new UK-wide ecosystem through conversations on what AI should look like, working with policymakers on future policy and regulation, and providing guidance for businesses on responsible AI deployment.


The University of Cambridge’s Minderoo Centre for Technology and Democracy is part of the consortium, with its executive director, Gina Neff, directing the strategy group. She said the work will “link Britain’s world-leading responsible AI ecosystem and lead a national conversation around AI, to ensure that responsible and trustworthy AI can power benefits for everyone”.

Large-scale AI research in the UK

The work will include large-scale research programmes, collaborations with academics and businesses, skills programmes on AI for the public and industry, and white papers exploring the various possible approaches to AI policy and usage.

The second largest funding block is for 13 projects designed to help the UK hit net zero targets with the help of AI. The £13m will include work exploring sustainable land management, ways to accelerate energy-efficient CO2 capture, resilience improvements against natural hazards and extreme weather events, and ways to accelerate the selection of biofuel crops for higher yields.

The smallest grant is £2m for 42 small-scale feasibility studies in businesses as part of the BridgeAI programme. This includes accelerating the adoption of trusted and responsible AI technologies. They will range from tools to assess AI technologies for governance, fairness and transparency purposes, as well as an examination of privacy and security across sectors.

The projects selected from the feasibility studies will share an additional £19m to create full solutions based on the initial ideas.

The first Turing AI World Leading Research Fellowships were announced in 2021 and this year’s awards will help Bronstein and Noble examine new AI approaches and tackle existing and known challenges facing the technology. 

Bronstein will create new mathematical frameworks for graph machine learning and use them to address drug and food design challenges. Noble is working on shared human-machine decision-making in healthcare imaging, including building new models of clinical tasks.

Kedar Pandya, executive director of cross-council programmes at the Engineering and Physical Sciences Research Council, said: “The UK’s expertise in the field of AI is a major asset to the country and will help develop the science and technology that will shape the fabric of many areas of our lives. That is why UKRI is continuing to invest in the people and organisations that will have wide-ranging benefit.

“For this to be successful we must invest in research and systems in which we can have trust and confidence, and ensure these considerations are integrated in all aspects of the work as it progresses. The projects and grants announced today will help us achieve this goal.”

