Funding for regulators overseeing AI in the UK is “clearly insufficient,” a parliamentary committee has warned. In its latest report, the House of Commons Science, Innovation and Technology Committee said the £10m allocated to regulators by the current Conservative government fell far short of the scale of the challenge the technology poses to society and the wider economy. Even so, the committee endorsed the broad thrust of the government’s approach to permissible uses of AI in the UK, which has been to let individual regulators arbitrate the challenges the technology raises in their own sectors.
“The next Government must announce further financial support, agreed in consultation with regulators, that is commensurate to the scale of the task,” said the report. “It should also consider the benefits of a one-off or recurring industry levy that would allow regulators to supplement or replace support from the Exchequer for their AI-related activities.”
Legislation for AI in the UK possible, says committee
In a wide-ranging report, the committee also said the UK government should be prepared to introduce legislation governing AI if gaps emerge in the capacity of regulators and industry to overcome challenges themselves. Such gaps, it added, could partly be identified through quarterly reviews of the efficacy of the government’s overall strategy, laid before parliament. The government, it said, should also “provide further consideration of the criteria on which a decision to legislate will be triggered, including which model performance indicators, training requirements such as compute power or other factors will be considered.”
The committee welcomed other aspects of the government’s strategy for regulating AI in the UK, including its establishment of the Incubator for Artificial Intelligence (i.AI) and the AI Safety Institute. “It is a credit to the commitment of those involved that the AI Safety Institute has been swiftly established, with an impressive and growing team of researchers and technical experts recruited from leading developers and academic institutions,” it said.
In a note of disquiet, however, the committee said that it was concerned by suggestions that the institute had not been given access to beta versions of AI models to perform pre-deployment safety testing. “The Bletchley Park Summit resulted in an agreement that developers would submit new models to the AI Safety Institute,” said its chair, Rt. Hon. Greg Clark. “We are calling for the next government to publicly name any AI developers who do not submit their models for pre-deployment safety testing.”
Deepfake concern for the general election
The committee also repeated concerns about regulating AI in the UK that industry experts have articulated over the past decade: namely, that model developers and deployers should be held rigorously to account for how they have mitigated bias and inaccuracy in the outputs of their AI products. “Nobody who uses AI to inflict harm should be exempted from the consequences, whether they are a developer, deployer or intermediary,” the report added. “The next Government, together with sectoral regulators, [should] publish guidance on where liability for harmful uses of AI falls under existing law.”
The government should also pay careful attention to the proliferation of deepfakes, said the report, as campaigning for the general election heats up. If online platforms prove slow to remove malicious deepfakes targeting politicians (including, presumably, committee members), it added, “regulators must take stringent enforcement action – including holding senior leadership personally liable and imposing financial sanctions.”