March 15, 2023 (updated 17 Mar 2023 8:56am)

UK government creates AI taskforce to look at foundation models

The new UK foundation model AI taskforce has been set up after calls were made for a sovereign UK large language model.

By Ryan Morrison

Artificial intelligence has the potential to change the world, and in the space of a few months foundation models such as those behind ChatGPT have come to dominate the landscape. But the technology is evolving so quickly that governments, regulators and some companies are struggling to keep up. In response, the UK government has announced a new “foundation model” taskforce, though some analysts say the UK is already behind much of the world, particularly on regulating this potentially game-changing technology.

The taskforce will explore the benefits and impact of large language models. (Photo: Zapp2Photo/Shutterstock)

Reporting to the prime minister and the secretary of state for science, innovation and technology, the taskforce will be chaired by Matt Clifford of the UK's Advanced Research and Invention Agency (ARIA), joined by experts in the technology from across industry and academia. The group has been challenged to report on ways foundation models, including large language models and chat tools, can be used to grow the economy, create jobs and benefit society.

Foundation models, including those used for generative AI, drug discovery and chat tools such as ChatGPT and Bing, came to the forefront late last year when OpenAI released ChatGPT, but most of the largest models are created and held by a small number of large companies. Recent examples include OpenAI's release of GPT-4, which can also process image inputs, and Google Cloud confirming it would open its 540-billion-parameter PaLM model to developers.
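For readers unfamiliar with what "opening a model to developers" means in practice, the sketch below shows roughly how such hosted models are consumed: a single HTTPS request to the provider's API. It uses OpenAI's public chat completions endpoint as the example; the model name and the `OPENAI_API_KEY` environment variable are assumptions for illustration, and other providers (such as Google's PaLM API) expose different endpoints and schemas.

```python
import os
import requests

# Minimal sketch, assuming OpenAI's public chat completions API; the exact
# fields and model name may change, and other vendors use different schemas.
API_URL = "https://api.openai.com/v1/chat/completions"

def ask_model(prompt: str, model: str = "gpt-4") -> str:
    """Send a single prompt to a hosted foundation model and return its reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_model("Summarise the UK's foundation model taskforce in one sentence."))
```

The point for policymakers is that access to frontier models currently runs through a handful of such proprietary endpoints, which is precisely the dependency a sovereign model would reduce.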

There have been growing calls for the UK to develop its own “national level” large language model to take on the likes of OpenAI and ensure that the country’s start-ups, scale-ups and enterprise companies can compete with US and Chinese rivals on AI and data.

Speaking to a group of MPs last month, BT’s chief data and AI officer Adrian Joseph said the UK was in an “AI arms race” and that without investment and government direction, the country would be left behind. Joseph, who also sits on the UK AI Council, said: “I strongly suggest the UK should have its own national investment in large language models. There is a risk that we in the UK will lose out to the large tech companies and possibly China. There is a real risk that unless we leverage and invest, and encourage our start-up companies, we in the UK will be left behind with risks in cybersecurity, healthcare and everywhere else.”

A need for UK large language models

The new taskforce is part of the UK's wider Integrated Review, bringing together leading experts to boost the country's foundation model expertise, seen as an essential component of AI. The first priority for the taskforce will be to present a “clear mission focused on advancing the UK’s AI capability”.

It isn’t clear what format this will take or what it is expected to produce, but analysts hope it will include calls for a sovereign large language model. This could also add to calls for more targeted investment in compute power: the government recently published a report recommending improvements to the UK's compute infrastructure, including exascale computing and AI capabilities.


Clifford and his team will explore ways large language models can be used in healthcare, government services and economic security, among other areas, including how they might support the government's recently published technology framework.

Science, Innovation and Technology Secretary Michelle Donelan said in a statement that foundation models are “the key to unlocking the full potential of data and revolutionising our use of AI”. Citing the success of OpenAI’s ChatGPT, she said it would provide “unparalleled insights into complex data sets, enabling smarter, data-driven decision making”.

She continued: “With opportunity comes responsibility, so establishing a taskforce that brings together the very best in the sector will allow us to create a gold-standard global framework for the use of AI and drive the adoption of foundation models in a way that benefits our society and economy.”

Mike Wooldridge, director of foundation AI research at the Alan Turing Institute, welcomed the move and said it was the first step towards the UK creating its own sovereign AI capability, something the institute has been advocating for over the past year.

“There has been a rapid growth in demand for AI and data science resources over the past decade. The technology is evolving at such a rapid rate that the UK hasn’t been able to keep up,” he explained. “This taskforce will be crucial to ensuring emerging technologies are developed for public good. The Alan Turing Institute leads the UK on this issue, and we look forward to working with the taskforce to help make a sovereign AI a reality.”

A need for regulation

What isn’t clear from the announcement is how this will fit into the regulation of AI. The UK has previously said it would take a sector-by-sector, risk-driven approach to regulating AI but, like the EU with its forthcoming AI Act, it has yet to spell out how it will handle foundation models, which are general purpose rather than tied to a single sector's use.

Natalie Cramp, CEO of data consultancy Profusion, welcomed the launch of a new taskforce to look at the impact of foundation models but said the “government is very much playing catch-up with other countries,” adding that the EU will soon finalise wide-ranging rules governing developments in AI via the EU AI Act.

“Without clear direction and clarity on how the government will legislate AI, businesses face a lot of uncertainty which, at best, curtails innovation and, at worst, can leave undesirable applications of AI unchecked,” Cramp said, arguing that regulation and guidelines need to form part of the investigation into foundation AI.

AI, particularly generative AI, has developed rapidly over the past 12 months, quickly going from a novelty or fringe use to become part of future business plans and pitch documents. Salesforce has announced Einstein GPT, Microsoft is embedding the technology across its product range and Google recently announced plans to bring generative AI to its Workspace platform, including Docs and Gmail.

This rapid development has left regulators and governments scrambling to catch up, warned Cramp. “We need to, as a society, think very carefully about how we want AI to shape how we all live. There is a huge capacity for misuse – both intentional and accidental,” she added.

This includes ensuring data isn’t biased, inaccurate or incomplete, as AI can amplify any existing problems in the underlying data. “We need to look very carefully at how LLMs are created and the results are applied,” Cramp said. “We will soon be at a stage where generative AI will be able to perfectly mimic a human via audio and visuals. You do not have to think too hard to think how this could be misused.

“Ultimately, I believe that a new rulebook for AI is not going to completely solve these problems. AI is developing too quickly for legislation to anticipate every innovation and application. What we need is an ethical framework that organisations can abide by which provides guardrails that shape how we use data and AI. If the taskforce can focus on the ethical implications of AI and how standards can be created that govern its development, it will be a very worthwhile endeavour.”
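Cramp's point about biased or incomplete data is the kind of check an ethical framework might codify. As a hedged illustration (the records, column name and threshold below are invented for the example, not drawn from any real framework), the following Python sketch flags groups that are under-represented in a training set before a model is fine-tuned on it:

```python
from collections import Counter

# Invented example records; in practice these would come from the training set.
training_records = [
    {"text": "loan approved", "region": "London"},
    {"text": "loan approved", "region": "London"},
    {"text": "loan denied", "region": "North East"},
    # ... thousands more rows ...
]

def flag_underrepresented(records, attribute, min_share=0.05):
    """Return groups whose share of the data falls below min_share.

    A skewed training set is one way existing bias gets amplified:
    the model simply sees too few examples from some groups.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        group: count / total
        for group, count in counts.items()
        if count / total < min_share
    }

print(flag_underrepresented(training_records, "region", min_share=0.4))
# e.g. {'North East': 0.333...} -- a prompt to collect more balanced data
```

Such a representation check is only one narrow audit; it says nothing about label accuracy or how the model's outputs are ultimately applied, which is why Cramp argues for guardrails across the whole pipeline.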

James Gill, partner and co-head of Lewis Silkin’s Digital, Commerce and Creative team, said: “With the launch of the even more powerful GPT-4 this week, all eyes remain on AI, so the announcement is timely. When the UK government called for evidence about the regulation of AI last year, its plan suggested it might be more laissez-faire than the rather more strict EU AI Regulation, and follow the OECD six principles. So, the reference to the EU legislation is interesting and may indicate a possible change of approach.”

Gill believes the government “may have recognised that a divergent UK approach may not be feasible, as with much of Brexit, in relation to organisations developing or deploying AI either across borders, or with users in both geographical areas, as those organisations will, in any event, need to comply with the AI Act in respect of EU operations”.

And he warned: “The government will also need to tread carefully to ensure it protects individuals’ rights, assuming it wishes to maintain a UK data ‘adequacy’ decision from the EU. The development also comes against the backdrop of the House of Commons’ Science and Technology Select Committee inquiry on governance of AI in the UK, which is yet to report.”

Read more: This is how GPT-4 will be regulated
