March 16, 2023 (updated 17 Mar 2023, 9:12am)

Government backs UK AI regulatory sandbox

The new AI sandbox will be multi-regulatory and designed to help developers ensure products are safe, transparent and ethical.

By Ryan Morrison

A new regulatory sandbox for artificial intelligence models, tools and systems is to be introduced in the UK after the government backed proposals in yesterday’s Budget. The multi-regulator sandbox will “allow innovators and entrepreneurs to experiment with new products or services under enhanced regulatory supervision without the risk of fines or liability”.

Former UK chief scientific advisor Sir Patrick Vallance recommended setting up an AI sandbox in a review of innovation-related policy (Photo by Adrian Dennis-WPA Pool/Getty Images)

This isn’t a new concept: sandboxes are widely used in other areas of the economy, including finance, energy systems and data security, as a way to test the boundaries of both regulation and technology. Several regulators already run some form of sandbox, including the Information Commissioner’s Office (ICO), whose sandbox covers data security and innovation.

As part of the 2023 Spring Budget announcement, Chancellor Jeremy Hunt confirmed the government would support recommendations on AI regulation made by Sir Patrick Vallance in his “pro-innovation regulation for digital technologies” review.

It is hoped a sandbox will be developed and operational across different regulators within six months, mirroring the UK’s approach to regulating AI, which puts the emphasis on individual regulators and specific use cases rather than taking a broad, cross-sector approach.

“Innovators can often face regulatory challenges in getting new, cutting-edge products to market,” the government wrote in its response to the report by Vallance. “This is particularly true when a technology’s path to market requires interaction with multiple regulators, or when regulatory guidance is emergent. Regulatory sandboxes provide a space for innovators to test their new ideas and work with regulators on the application of regulatory frameworks.”

AI regulatory sandbox: a multi-agency approach

The engagement will be through the Digital Regulation Cooperation Forum. Vallance wrote that effective AI regulation “requires a new approach from government and regulators” that is agile, expert-led and able to provide clear guidance quickly to industry. The argument for it being multi-regulator is to reduce inconsistencies in regulatory responses and create a more coherent approach.

He set out three core principles to guide the development of the sandbox: a “time-limited opportunity” for companies to test propositions on real consumers; a focus on areas where the “underpinning science or technology” is at a stage where a major breakthrough is feasible; and a way to solve a “societal challenge” where the UK could be a world leader.


While the exact structure and design of the sandbox won’t be known until the white paper is published, Vallance set out his vision: targeted signposting at both national and international level, clear eligibility criteria, clear application deadlines, and accountability and transparency, with consideration given to ethics, privacy and consumer protection. “A sandbox could initially focus on areas where regulatory uncertainty exists, such as generative AI, medical devices based on AI, and could link closely with the ICO sandbox on personal data applications,” he wrote.

The ICO wrote in response to the government’s support that it expects to have a critical role in helping innovators develop safe and trustworthy products. “But in a fast-moving area like AI, there is always more that can be done, and we welcome the focus this report will bring. We’ll continue prioritising our work in this area – including guidance we’re working on covering personal data processing relating to AI as a service – and look forward to discussing the recommendations within the report with our DRCF partners and Government.”

Ryan Carrier, founder and CEO of AI ethics campaign group ForHumanity, told Tech Monitor: “AI regulatory sandboxes are critical tools for bridging a trust gap between regulators and the providers of AI tools to the marketplace.” He added that most AI providers are multiple steps away from being compliant with even basic forms of regulation.

“These… will allow groups like ForHumanity, which have already built and submitted certification schemes to the UK government, to prove the methods, procedures and documentation that compliance requires without risk of noncompliance at the outset. It is a great way to build towards robust compliance.”

Anita Schjøll Abildgaard, CEO and co-founder of AI platform Iris.ai, welcomed the news and said it could lead to a frenzy of innovation. But she warned against leading with the technology and then finding a use case for it. “It should be the other way around – using AI, mindfully applied, to solve real-world, well-understood problems,” she said. “Instead of getting caught up in the generative AI craze that will dominate 2023, businesses and large tech corporations should consider the AI technologies that will drive real value, rather than driving headlines. Bigger does not always equal better.”

Clear policy on intellectual property

Vallance also called for a “clear policy position” on the relationship between intellectual property law and generative AI to ensure innovators and investors can have confidence in the technology. This includes enabling the mining of available data, text and images, and utilising copyright and IP law to protect IP output. “In parallel, technological solutions for ensuring attribution and recognition, such as watermarking, should be encouraged, and could be linked to the development of new international standards in due course.”

Intellectual property lawyer Cerys Wyn Davies of Pinsent Masons said there was a fine line between making development of AI easier and recognising intellectual property rights. “It is clear from the government’s endorsement of Sir Patrick Vallance’s recommendation that it is seeking to deliver certainty around the relationship between intellectual property law and generative AI,” she said.

“This is key both to encourage the development of AI and to encourage the other creative industries. Certainty that pleases everyone, however, is going to be difficult to achieve as has been highlighted by the backlash against proposals by the UK Intellectual Property Office to expand the scope of the text and data mining exception that exists in copyright law to help AI developers train their systems.”  

Ekaterina Almasque, general partner at deep tech venture capital company OpenOcean, told Tech Monitor that being able to access a high volume of high-quality data without transgressing IP law or individual privacy is essential when training AI models. “If the new AI sandbox results in changes that make it easier for AI start-ups to train their models and bring solutions to the enterprises that need them, then that will have a positive effect on the UK start-up scene in the long run,” she said.

“However, what we require are clear commitments. Start-ups need to be able to deliver their products to the market with speed in their early stages, and while steps to clear up regulatory uncertainty are welcome, they’re not concrete yet.”

Bola Rotibi, chief of enterprise research at CCS Insight, described the sandbox as a “welcome and, in and of itself, smart move” given how fast AI is advancing. She added: “The EU’s AI Act sees their use as enabling a more agile approach to innovation and regulation in the fast-moving tech sector. That the UK has made explicit provisions for supporting AI regulatory sandboxes in the Budget is recognition of the internationally competitive battleground that AI presents. But it is also an acknowledgement of the constraints to innovation that a highly regulated UK market presents.”

This allows the UK to play on its regulatory maturity, offering “opportunities that could see the sandbox delivering more appropriately innovative and supportive regulations for AI systems and applications quicker. That said, the UK is not the first off the AI regulatory sandbox starting block, with Spain and the European Commission having been the first to put their pilot AI regulatory sandbox in the field in June 2022, reporting on its findings in the second half of 2023. An ironic outcome for those who believe the EU poses a drag on a nation’s ability to progress innovatively and take advantage of opportunities.”

Read more: This is how GPT-4 will be regulated
