The UK government has announced new support for businesses seeking to develop and deploy safe and trustworthy AI products and services. In a statement published earlier today, the Department for Science, Innovation and Technology (DSIT) announced the launch of a new AI assurance platform that will give firms a “one-stop-shop” for information on how to mitigate the potential harms arising from their rollout of AI tools.

“AI has incredible potential to improve our public services, boost productivity and rebuild our economy but, in order to take full advantage, we need to build trust in these systems which are increasingly part of our day-to-day lives,” said the DSIT secretary, Peter Kyle. “The steps I’m announcing today will help to deliver exactly that – giving businesses the support and clarity they need to use AI safely and responsibly while also making the UK a true hub of AI assurance expertise.”

AI assurance a vital market for UK, says government

The new platform – as yet unnamed by DSIT – will not only provide guidance but also contain practical resources for businesses seeking to deploy new AI tools. To that end, it will set out “clear steps” for firms on how to carry out impact assessments and evaluations, as well as on “reviewing data used in AI systems to check for bias, ensuring trust in AI as it’s used in day-to-day operations.” The government added that a new self-assessment tool will soon be launched to guide SMEs on how to implement “responsible AI management practices” within their organisations.

“Drawing on key principles from existing AI-related standards and frameworks – including ISO/IEC 42001 (Artificial Intelligence – Management System), the EU AI Act, and the NIST AI Risk Management Framework – AI Management Essentials will provide a simple, free baseline of organisational good practice,” said the government in a report on the UK AI assurance market also published today. “In the medium term, we are looking to embed this in government procurement policy and frameworks to drive the adoption of assurance techniques and standards in the private sector.”

Measures broadly welcomed by AI safety experts

According to the government, the UK AI assurance sector currently includes 524 firms that employ 12,000 people and generate over £1bn for the wider economy. The government claims that market is likely to expand six-fold within ten years, as more and more firms rely on external organisations to evaluate their AI models before deployment.

The government’s announcements on AI assurance were welcomed by the Ada Lovelace Institute, especially as they pertained to reforms to public sector procurement. That area, said its associate director Michael Birtwhistle, has long tended to drive higher standards, as it has in cybersecurity. However, “[l]ocal government and the public sector more broadly will need support to ensure that the AI systems they buy and use are safe, effective and able to deliver widely shared benefits for people and society.”

DSIT would also be wise to remember that the provision of further state support for AI assurance initiatives does not in and of itself give firms an incentive to deploy safe and trustworthy systems, argued Birtwhistle. “To be successful,” he said, “the assurance market will need to be complemented by legislation that incentivises safe, equitable and trustworthy development and use of AI.”

Read more: EC officially brings EU AI Act into force