March 28, 2023 (updated 29 March 2023, 11:50pm)

New UK AI regulation white paper leaves ‘unanswered questions’ on ChatGPT

The document sets out broad principles for AI use, but makes minimal reference to general-purpose models like GPT-4 or LaMDA.

By Ryan Morrison

A new white paper outlining how the UK government plans to regulate artificial intelligence has been published. It takes a “pro-innovation” approach that aims to build public trust while also making it easier for businesses to innovate around the technology. However, experts warn it still leaves the question of how to regulate tools like ChatGPT unanswered.

Experts warn that the new AI regulation white paper ignores tools like ChatGPT and has no legislative backbone. (Photo: pathdoc/Shutterstock)

The “light touch” approach will put the emphasis on existing regulators rather than creating a new overarching body. Each regulator, from health to energy, will be tasked with developing “tailored, context-specific approaches that suit the way AI is actually being used in their sectors”.

The white paper from the Department for Science, Innovation and Technology (DSIT) introduces five key principles: transparency, robustness, explainability, fairness and accountability. There will also need to be a pathway to redress if someone is the victim of a harmful AI decision, the government said.

The AI industry employs some 50,000 people and contributed £3.7bn to the economy last year, DSIT said, with twice as many companies providing AI products as any other European country.

The argument for a “pro-innovation” approach, beyond “growing the economy”, is the potential benefit AI can bring to so many parts of society, from helping doctors identify disease to aiding farmers in making more sustainable and efficient use of their land. The government hopes to see the technology put into more widespread use.

It says it needs to balance this potential against the real risks posed by AI, particularly around privacy, bias and safety. For example, a system trained on a mismatched dataset could make an unfair decision over a loan, or, in education, incorrectly mark a child as failing if its training data is misleading.

The government will invest hundreds of millions of pounds directly to create an environment in which AI can flourish safely in the UK, but organisations remain reluctant to go “all in” because the current patchwork of legal regimes increases the risks associated with failures or mistakes. To combat this, the government says it will avoid heavy-handed legislation that could stifle innovation, focusing instead on core safety principles that will apply across the board.

UK AI regulation adapts to changing technology

This, it says, will allow UK rules to adapt more quickly to a fast-changing technology and protect the public without placing an undue burden on companies. Existing regulators – such as the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority – will be “empowered” to devise these context-specific approaches.

AI could make the UK a “smarter, healthier and happier place”, said Science, Innovation and Technology Secretary Michelle Donelan. But with development moving at such a staggering pace, rules are needed to make sure that happens safely. “Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow,” Donelan said.

Over the next 12 months, regulators will issue practical guidance to organisations developing or deploying artificial intelligence, along with risk assessment templates. There are currently no plans for legislation, but DSIT says it could be introduced later to “ensure regulators consider the principles consistently”.

The government has already revealed plans for a taskforce to explore and build up the UK’s capabilities in foundation models, such as the large language and image generation models behind apps like ChatGPT and Stable Diffusion. It has also announced a new £2m regulatory sandbox to test the boundaries of these systems.

Michael Birtwistle, associate director for data and AI law and policy at research body the Ada Lovelace Institute, said effective regulation is essential to realising the UK’s AI ambitions, including by providing legal clarity and certainty. It is also important, he said, to ensure the public has confidence in AI and that the rules safeguard fundamental rights. “Regulation isn’t a barrier to responsible AI innovation, it’s a prerequisite,” he declared.

Questions left unanswered about generative AI

While broadly welcoming the approach, Birtwistle expressed concern over obvious gaps that could leave certain harms unaddressed, warning that overall the proposals are “underpowered relative to the urgency and scale of the challenge”.

“The UK approach raises more questions than it answers on cutting-edge, general-purpose AI systems like GPT-4 and Bard, and how AI will be applied in contexts like recruitment, education and employment, which are not comprehensively regulated,” he said. “The government’s timeline of a year or more for implementation will leave risks unaddressed just as AI systems are being integrated at pace into our daily lives, from search engines to office suite software. We’d like to see more urgent action on these gaps.”

Microsoft, Google, Salesforce and others have all recently announced plans to fully integrate large language model-based AI tools into high-profile software such as Microsoft 365 and web browsers. Apps are also increasingly using AI to deliver content or provide support.

“Initially, the proposals in the White Paper will lack any statutory footing,” said Birtwistle. “This means no new legal obligations on regulators, developers or users of AI systems, with the prospect of only a minimal duty on regulators in future.”

There are also issues around funding, particularly in higher-risk areas such as health and law: without substantial investment in existing regulators, the Ada Lovelace Institute says, AI use cannot be regulated effectively. “The problems we have identified are serious but they are not insurmountable,” said Birtwistle. “Our previous research sets out a range of evidence and recommendations to ensure the UK’s regulatory framework for AI works for people and society. We will continue to use this to inform and work in dialogue with the Government and other groups to achieve this.”

Read more: OpenAI fixes ChatGPT bug that may have breached GDPR
