The UK government will soon publish thresholds for the passage of new laws governing AI. According to the Financial Times, which broke the story, these new “tests” will define when, and to what extent, the government will legislate on the appropriate use of the technology. Scenarios under consideration include major AI developers failing to abide by their commitments to develop safe systems, or the UK’s AI Safety Institute failing to identify risks in a new application before its release, only for those risks to proliferate once it is on the market.
UK government’s ‘pro-innovation’ approach to AI
The publication of these tests will be in keeping with the UK’s light-touch approach to regulating AI. This was confirmed in a statement published by the government in November. “We will take action to mitigate risks and support safe and responsible AI innovation as required,” it said, adding that it would maintain a “pro-innovation approach” in close consultation with civil society and industry. According to the FT, the same philosophy will run through the government’s proposed “tests” for passing AI legislation, with the proviso that any new laws should not impair innovation without just cause.
It is understood that the tests will be published as part of the consultation process for the government’s AI white paper, released in March 2023. The paper was not without its critics. According to the University of Birmingham’s Professor Karen Yeung and PhD candidate Emma Ahmed-Rengers, the document was an “inadequate basis for sound policy, let alone the foundations of an effective and legitimate regulatory framework that will serve the public interest”. Others, meanwhile, have pointed out that the task of regulating AI in the UK has already been taken up by sectoral regulators, with watchdogs including Ofcom and the Information Commissioner’s Office beginning to conduct algorithmic audits in their respective areas of jurisdiction.
UK active on AI norms internationally
Another prospective trigger for legislative action by the UK government on AI would be major industry players failing to abide by previous commitments to create safe and transparent systems. Extracting these pledges from the likes of OpenAI, Microsoft and Google has been a policy priority for the UK, which has proved especially active on the global stage in attempting to define international norms on AI. Its organisation of an international AI safety summit last year resulted in the Bletchley Declaration, under which nations including the UK, US, India and China agreed to collaborate on researching the long-term risks and rewards of the technology.
When approached for comment, the UK government’s Department for Science, Innovation and Technology declined to confirm whether the criteria for passing future AI legislation would be published imminently. “We set out our pro-innovation approach to regulating AI in our white paper last year and are working closely with regulators to make sure we have the necessary guardrails in place – many of whom have started to proactively take action in line with our proposed framework,” a spokesperson told Tech Monitor. “As the Technology Secretary said in December, we want to make sure we get this right and we will publish our response to the consultation shortly – in the meantime, we will not speculate on what may or may not be included.”