March 29, 2023 (updated 30 Mar 2023, 9:51am)

The UK wants more AI innovation like ChatGPT. Experts say experiments should stop

As the UK promotes a 'light touch' approach to AI regulation, leading industry figures call for a pause on the development of advanced systems.

By Ryan Morrison

The UK is set to adopt a “pro-innovation” and light-touch approach to regulating artificial intelligence, according to a new government white paper released on Wednesday. But the white paper was launched on the same day that leading industry figures signed an open letter calling for the development of advanced AI systems to be paused while the ethical implications of the technology are considered, setting out a contrasting vision for the future of models like OpenAI’s GPT-4, the technology behind ChatGPT.

The new AI white paper aims to find a balance between consumer safety and the benefits to the economy. (Photo by LeoWolfert/Shutterstock)

The UK approach outlined in the white paper differs from the EU’s AI Act, under which firm legislation is being used to regulate and control the use of the technology in the most “at risk” areas, such as healthcare and law. It also runs counter to the arguments put forward in the open letter, co-ordinated by the Future of Life Institute think tank and signed by the likes of Elon Musk and Apple co-founder Steve Wozniak.

The group is calling on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4”, the recently released multi-modal foundation model from OpenAI that Microsoft Research says shows early “sparks” of AGI, or artificial general intelligence, a representation of generalised human cognitive abilities in software.

“This pause should be public and verifiable, and include all key actors,” warns the group, which also includes signatures from AI researchers across universities including Harvard and companies like Google-owned AI lab DeepMind. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” they go on to declare.

How to regulate AI

So what is the best way to approach AI regulation? The problem, says Benedict Macon-Cooney, chief policy strategist at the Tony Blair Institute, is that the kind of pause proposed in the Future of Life Institute letter is unlikely to happen.

With this in mind, governments need to engage deeply with AI developers as their work increases in complexity, he says. “The importance of this technology means that government needs to engage deeply with those at the frontier of this development,” Macon-Cooney argues. “OpenAI’s Sam Altman, for example, has suggested that government sits in the firm’s office, something which a forward-thinking government should take up to aid understanding and shape thinking.”

Macon-Cooney believes governments need to build up expertise and technical capabilities around foundation and large language model AI, something the UK announced as part of the Spring Budget, including a new taskforce designed to enable and understand those technologies in the UK market. “We are at the beginning of a new era, which will have an impact on health, education and jobs,” he says. “This will result in displacement, but it will also shape new opportunities. Government needs to help guide this future.”


Sector-by-sector regulation of AI

Under the AI regulation white paper, all of the UK’s existing regulators would be responsible for regulating the use of AI in their respective sectors. There would be “multi-regulator” tools, including a sandbox, guidelines and a framework, but from health to energy, each regulator would be responsible for establishing standards and guidance for operators in its area of the economy.

The white paper also regulates the “use not the development” of artificial intelligence tools, an approach suited to general-purpose AI, where the eventual use is unclear at the time of development. Adam Leon Smith, CTO of AI consultancy Dragonfly and part of the UK delegation to the EU on AI standards, welcomed the UK approach and said the government “should wait a few months before deciding what, if anything, it should do about generative AI” such as ChatGPT, as it is not yet clear which technologies will gain traction.

“The UK is already providing regulators guidance in the form of technical standards,” Leon Smith says. “Although this is also the intent in the EU, they are moving much more slowly.” This, he says, creates “regulatory uncertainty and stifles innovation.”

The lack of any significant mention of generative AI, foundation models and other forms of general purpose AI in the UK government white paper has been criticised by groups like the Ada Lovelace Institute.

The white paper does suggest individual regulators will be able to decide how to regulate LLMs, including issuing specific requirements for developers and deployers to “address risks and implement the cross-cutting principle” which could include transparency requirements on data used to train the model. “At this point, it would be premature to take specific regulatory action in response to foundation models including LLMs. To do so would risk stifling innovation, preventing AI adoption, and distorting the UK’s thriving AI ecosystem,” it says.

A need for ethical AI by design

Dr Andrew Rogoyski of the University of Surrey’s Institute for People-Centred AI told Tech Monitor the pro-innovation approach was laudable, but that the lack of an overarching regulator and of strong controls on the use of AI leaves the country out of step with the US, Europe and China.

“We need a central regulator for AI technology, partially because the individual regulators don’t currently have the individual skills but mainly because AI regulation needs to be joined up across sectors, especially since many AI providers operate across different sectors and don’t want to find themselves operating the same technology under different regimes in different sectors,” Rogoyski says.

“The pace and scale of change in AI development is extraordinary, and everyone is struggling to keep up. I have real concerns that whatever is put forward will be made irrelevant within weeks or months,” he said.

Taking a different approach to those much larger markets could be costly for the AI sector, says Tom Hartwright, partner at law firm Travers Smith. “This flexible approach has seen real success previously, with a proportionate and innovative approach to the regulation of fintechs being hailed as one of the key reasons the UK is a market leader in the sector,” Hartwright says. “The UK’s approach to AI regulation will, however, have to consider the wider global context, where larger markets such as the US and EU have the power to set industry standards, as we have seen with privacy and the EU’s GDPR.”

Ryan Carrier, CEO of ethical AI campaign group ForHumanity, which has produced audit criteria for the use of AI, said caution was important. “It is time that corporations wake up and proactively embrace governance, oversight, and accountability for these tools,” he says. “Corporations exist at the choice of humans to benefit society, not to experiment on us, ignore the harms and respond with ‘thank you for participating in our test, we will make it better next time’.”

He cited the recent ChatGPT privacy breach as evidence of the need for better enforcement of regulation, particularly existing rules such as GDPR. “They are not enforcing the rules effectively enough,” said Carrier, adding that “ForHumanity insists on mandatory independent audits because they provide a proactive, independent review of compliance in advance of harms being committed, rather than relying upon reactive enforcement.”

Lack of enabling legislation for AI regulation

John Buyers, head of AI at law firm Osborne Clarke, said the new white paper does not actually add much to what was revealed last summer, when the government first set out its AI regulation plans. He says that, unlike the EU AI Act, no specific categories of AI will be banned and no new laws will be introduced to give the regulation teeth. “Instead, all the detail will be devolved to the existing body of UK regulators within their existing remits and using their existing powers,” Buyers says.

This, he says, suggests the government is still in the phase of defining the problem, and hints that the UK will lean towards becoming a “giant sandbox for AI”, testing whether light-touch regulation is the right approach and making the UK a place to “foster the development of AI”.

Over time, says Buyers, the government will monitor what falls through the gaps in the regulatory regime and whether the involvement of so many regulators makes it cumbersome and starts “to damage innovation”.

But on the subject of general AI and whether there should be tighter oversight, or even a “pause” on development, it is probably already too late, argues Michael Queenan, CEO of UK technology company Nephos Technologies. “Unfortunately I think the horse has already bolted, and you can’t ask commercial businesses to stop,” he says. “It has never happened in human history, so why would it happen now?”

Comparing it to trying to stop Henry Ford from building cars or George Stephenson from developing steam trains, Queenan says the impact will be comparable in scale to the Industrial Revolution. “People have been talking about the digital revolution for years now, but realistically not a lot actually changed,” he says. “The internet revolution fundamentally changed the way people interacted; the AI revolution will change how companies operate.”

Read more: UK AI regulation white paper dodges ChatGPT questions
