June 7, 2023

Is Rishi Sunak’s government ready to abandon its ‘light touch’ approach to AI regulation already?

As Labour calls for AI licensing, there are signs the government is wavering on its plans to police automated systems.

By Ryan Morrison

The government’s “light touch” approach to AI regulation is “not up to the task” according to Labour’s shadow digital secretary Lucy Powell, and she isn’t alone in that view. The SNP has also called for an urgent meeting with the government on the issue, and there are signs that Whitehall may be changing its tune, with Prime Minister Rishi Sunak set to meet with US President Joe Biden tomorrow to discuss a co-ordinated approach to policing large language models and other forms of AI.

Rishi Sunak speaks to the media as part of his trip to Washington. AI is high on the agenda (Photo: Number 10 Press Office)

Opinions vary wildly on how to regulate AI and how to mitigate the risks it presents without curbing the positive impact the technology can have across a range of business sectors.

In its AI white paper published earlier this year, the UK government outlined a light-touch approach which would essentially leave the industry free to develop at will. It proposed that individual regulators would police the most “high risk” cases in their sector, rather than having an overarching AI regulator, as has been proposed elsewhere in the world.

But many see this as a risky approach. China already requires foundation AI model developers to ensure their output is “in keeping with Chinese values”, and Europe has similar proposals in development.

UK AI regulation plans at odds with other countries

The EU’s AI Act, the first comprehensive AI legislation in the world, takes a risk-based approach, regulating AI tools according to their functionality and potential for harm. The approach was designed before the rise of foundation AI models like GPT-4, and there are now calls to introduce more expansive measures, including reporting rules on training data and output risk assessments to ensure models are “in keeping with European values”.

Labour is calling for a similar approach in the UK. This could include licensing of the development of AI, not just regulation on how it is being used. Powell said there is “a global race going on to be the country of choice for the growth of AI, which the UK, with our leading AI sector, and strong reputation for regulation, is well placed to lead. But the government’s strategy is not up to this task, and already out of date after only two months.”

She added that AI isn’t just ChatGPT: it has been in development for a long time and is already widely implemented. “While we can’t yet see its full implications, it’s clear that they will be significant,” Powell said. “Many of these will be positive, improving productivity, public services, scientific discovery, but also have the potential to be seriously harmful if not properly regulated.”


There are signs the government may be wavering on its so-called “pro-innovation” stance. Sunak’s meeting with Biden in Washington is likely to see him pitch for a global approach to AI regulation. Such an approach, it is argued, would ensure a level playing field and let developers know that what they build will be compliant anywhere.

Sunak wants to position the UK at the heart of this worldwide AI regulatory movement, hoping to turn the setting of AI standards into a soft-power tool that would also help sell the UK as a place to do business and develop cutting-edge technology. The problem is that every other country in the world is trying to achieve the same thing. Japan, for example, recently announced it would remove copyright restrictions on content used to train models in a bid to boost its flagging AI sector.

Forming a new global AI organisation, akin to the International Atomic Energy Agency, may be the only way the UK can have a say on international standards. Since Brexit it has been excluded from key forums like the EU-US Trade and Technology Council, where countries like the US and Canada have started to discuss AI codes of conduct with the EU.

Global debate on AI standards

What form the final regulation takes, and whether it is set domestically or globally, will come down to a series of negotiations with companies, between governments, and with third-sector organisations. How this should work is as hotly debated as the legislation itself.

Mhairi Aitken of the Alan Turing Institute expressed concern that the language surrounding regulation is shifting to benefit Big Tech and the existing large AI labs. Sunak met with AI industry chiefs, including OpenAI CEO Sam Altman, last week, and Altman has been touring countries around the world meeting with policymakers.

Aitken believes any licensing introduced should be independent of Big Tech: “We’ve seen in the last month the level of influence Big Tech is having on regulation and policy,” she says, explaining that the narrative around AI has shifted towards hypothetical future systems, which only serves as a distraction.

“It troubles me that these discussions on regulation are not centred on the voices of impacted communities,” Aitken explains. “Instead they are centred on the voices of Big Tech, whose interest and motivation is to drive innovation and make money, rather than on the risks. They focus on the hypothetical rather than the real-world implications and risks experienced today.”

ForHumanity is an organisation set up to license and certify AI developers. This is similar to the model being considered by Labour, and the organisation already provides certification and training against the EU AI Act. CEO Ryan Carrier told Tech Monitor that any licensing regime should apply across the board and be administered by third-party, non-governmental organisations.

He said this should also apply to smaller companies, the “two people building an AI in their garage” types, as they too can create tools that do meaningful harm. “We encourage governments to create innovation hubs that can produce meaningful governance, oversight, and accountability for numerous SMEs in a leveraged fashion. Compliance, regardless of size, is important,” says Carrier.

Need to license AI that could be ‘harmful to human life’

For BCS, the Chartered Institute for IT in the UK, certification and licensing are a good idea but should be backed up by a robust code of conduct. “We have already called for a register of technologists working on AI technologies in ‘critical infrastructure’ or which ‘could potentially be harmful to human life’,” explains Adam Leon Smith, chair of the BCS F-TAG advisory group.

“It is important to understand that we can’t just focus on the training for developers, or the certification of technology,” Leon Smith says. “We actually need to carefully control how we use AI. This is because risk mitigations like testing, informed consent, human-in-the-loop oversight and monitoring can only be implemented with a full understanding of the context of use. There’s no point regulating LLMs beyond transparency obligations, instead regulate people implementing them in particular contexts.”

He adds people want to regulate the technology but that isn’t the right approach. “Firstly, it changes too fast, and secondly it is the holistic system that matters,” he explains. “Looking at safety critical domains – it is not usually an individual ‘part’ or ‘component’ that is regulated – but the overall system.”

One of the key reasons for regulating such a technology is risk mitigation. Companies are keen to see this happen, as it will make deploying the technology safer, and the idea has the backing of the insurance industry. Rory Yates, global strategic lead at insurance platform provider EIS, says this should include a register of those working on AI. “I believe a sensible approach to license or certification is extremely beneficial for highly regulated industries like insurance, especially as they now require a clear shift in how they take responsibility for their customers.”

“Ensuring there are only accredited professionals developing and utilising these technologies will be one way to control who has access to this technology and how they are using it, whilst also creating a positive labour market, one that has clear, determinable standards for what ‘good’ looks like,” he added.

Tech Monitor asked Labour how licensing might work in practice and was told that a number of options are being considered, not just licensing. The bigger issue is that legislation needs to be in place, rather than the light-touch approach proposed by the Conservatives, a party spokesperson said.

Read more: AI safety: industry leaders warn of ‘extinction risk’ for humanity
