The UK’s AI regulatory framework will become clearer by spring, it has emerged, after the government asked regulators to set out how they will handle the risks and rewards afforded by AI in their respective sectors. Watchdogs have been asked to publish their approaches to the technology by 30 April. The announcement is one of the UK government’s responses to the consultation on its AI white paper, published last year, which set out a “hands-off” approach to regulating the technology.
“The UK’s innovative approach to AI regulation has made us a world leader in both AI safety and AI development,” said the Secretary of State for Science, Innovation, and Technology, Michelle Donelan. “By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.”
UK AI regulation accompanied by funding infusion
The UK government’s order to regulators was accompanied by an additional funding infusion of £10m to help them prepare to address AI-related harms in the long term. This will be dwarfed, however, by the £90m earmarked for the creation of nine new UK AI research hubs designed to “support British AI expertise in harnessing the technology across areas including healthcare, chemistry and mathematics.”
This is accompanied by £2m for the Arts & Humanities Research Council (AHRC) to support schemes aiming to define what responsible AI looks like in practice for policing, the arts and education, and £19m for the UKRI Technology Missions Fund to disburse across 21 projects devising new AI and machine learning applications to drive broader economic productivity. The government added that it will also launch a steering committee later this year to “support and guide the activities of a formal regulator coordination structure” within Whitehall.
Today’s announcement was the latest in a wider push by the government to define the UK as a world leader in AI. Though it previously stated in its white paper and in subsequent leaks that it intended to adopt a relatively hands-off approach to legislating on the technology, the Conservative administration has been aggressive in seeking to define international norms around advanced AI models. This campaign eventually culminated in the world’s first global summit on AI safety at Bletchley Park last November.
AI white paper process continues apace
This morning’s news also contrasts strikingly with the assessment of the UK’s AI posture by the House of Lords’ Communications and Digital Committee, whose report last week decried the government’s approach as lacking in ambition and too focused on safety. “We need to address risks in order to be able to take advantage of the opportunities – but we need to be proportionate and practical,” said Baroness Stowell of Beeston, the committee’s chairperson. “We must avoid the UK missing out on a potential AI gold rush.”
For its part, tech industry association techUK broadly welcomed this morning’s announcement, urging the government to proceed apace with implementing its AI agenda. “We now need to move forward at speed, delivering the additional funding for regulators and getting the Central Function up and running,” said the association’s chief executive, Julian David. “Our next steps must also include bringing a range of expertise into government, identifying the gaps in our regulatory system and assessing the immediate risks.”
The Ada Lovelace Institute, meanwhile, argued for greater legislative clarity from the Sunak administration. “The government should be given credit for evolving and strengthening its initially light-touch approach to AI regulation in response to the emergence of general-purpose AI systems,” said its associate director, Michael Birtwistle. However, he added, the “framework being proposed cannot function effectively without legislative support. Only hard rules can incentivise developers and deployers of AI to comply and empower regulators to act.”