The Alan Turing Institute has launched a new AI Standards Hub with the aim of ensuring as many groups as possible have input in the way artificial intelligence develops in the UK and the guidelines it is held to in the future.
Run in partnership with the government’s Department for Digital, Culture, Media and Sport (DCMS), the British Standards Institution (BSI) and the National Physical Laboratory (NPL), the hub will include documentation of the multiple standards being developed for all aspects and use cases of AI in the coming decades, and will run regular events with industry, academia and civil society.
The aim is to make it easier for experts in healthcare, transport, finance and government to work together and share resources, after a survey found that many organisations and companies understood the need for standardisation when it comes to AI but lacked the knowledge to implement standards.
It forms part of the government’s wider National AI Strategy, and according to Damian Collins, parliamentary under-secretary of state for tech and the digital economy, the development of standards will help ensure the UK remains a global AI superpower by driving innovation and adoption.
He told guests at the launch of the hub that these standards will eventually lead to a thriving assurance industry that could be worth billions of pounds per year to the UK economy, as companies look to protect themselves and their customers when using AI tools.
“AI is not an industry or single product, it is an enabler that drives every aspect of modern life,” Collins said. “The Hub will work to improve AI standards adoption and contribute to their development internationally, bringing an additional layer of confidence to those using and engaging with the technology.”
AI Standards Hub to drive development of regulations
Dr Florian Ostmann of the Alan Turing Institute, one of the architects of the AI Standards Hub, said it would “ensure that industry, regulators, civil society and academic researchers have the tools and knowledge they need to contribute to the development of standards”.
“Standards mean a wide variety of things,” he told Tech Monitor, “but the standards we are talking about are formal standards that have a long history in other areas of engineering. It is everything from paper and plugs to WiFi and GIFs.”
The hub will perform two key functions, says Ostmann. Firstly, it will act as a tool for organisations to understand the risks of specific AI systems and offer shared procedures, metrics and performance criteria for managing those risks. Secondly, it will help drive adoption of the technology.
Standards are set to play a significant role as the government takes a risk-driven approach to the regulation of AI, focusing on the output rather than the technology itself. This will require a set of best practices to be developed to ensure that AI-related products, processes and services perform as intended.
To achieve this level of consensus among different stakeholders, which will include industry, academia, government and civil society organisations, the Alan Turing Institute has developed an online platform that lets users track all standardisation efforts and policy documents.
There will also be a series of events, talks and community-building activities to discuss the challenges of ensuring AI is trustworthy. “This will include encouraging collaboration in the AI community, allowing for more coordinated contribution,” a spokesperson for the Turing Institute said.
International co-operation required on AI standards
“We don’t want to hold back AI or hold up the technology, we just want to take a responsible approach to it including its impact on work and good work,” said Anna Thomas, director of the Institute for the Future of Work, during a panel event at the launch of the hub. “You don’t have to wait for the standards, you just ensure you are aware of new standards being developed.”
More than 300 AI standards have already been published or are in development, the launch event was told, and international collaboration will be needed to ensure companies don’t have to create different versions of their technology and policies for different markets.
“Generally we’ve seen very strong support from industry and other stakeholder groups,” said Ostmann. “AI is used in a wide range of applications and areas of the economy with a wide range of ethical and financial concerns, so a variety of stakeholder representatives is vital to ensure wide adoption.”
The Hub has been built around four pillars: an observatory with multiple interactive libraries featuring AI standards under development; collaboration with the community; knowledge and training for those working in AI; and further research and analysis as standards evolve.
Scott Steedman, director-general for standards at BSI, said some standards around the use of AI are already under active development, including the AI management standard ISO/IEC 42001, which is intended to allow companies of all sizes to take advantage of AI technologies in a responsible way.
Peter Lee, business development and consulting manager at BSI, said at the hub launch event that a standard is a document defining best practice in a certain area, providing trust for those using technology built to it. “They are written by committees of stakeholders with rounded involvement to ensure they are adopted by the market,” he said. “Consensus is important to ensure adoption.”