The British Standards Institution (BSI) has published a new international standard containing guidance on the safe and responsible implementation of AI systems in corporate workflows. Known as BS ISO/IEC 42001, the standard has been published in response to research conducted by the BSI across nine countries indicating that three-fifths of adults support the creation of international guidelines for the safe use of AI.
“The publication of the first international AI management system standard is an important step in empowering organisations to responsibly manage the technology,” said the BSI’s chief executive, Susan Taylor Martin. This, Taylor Martin continued, “offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world”.
New BSI AI standard provides “impact-based framework”
The BSI is the UK’s national standards body and its representative at the ISO, the independent non-governmental organisation responsible for agreeing common standards for the design and use of technologies globally. BS ISO/IEC 42001 provides what the BSI characterises as an “impact-based framework” for the implementation of AI management systems. The standard, it continued, “provides requirements to facilitate context-based AI risk assessments… and controls for internal and external AI products and services”. The overall aim, said the BSI, was to encourage a culture of participation and trust in companies seeking to implement new AI systems.
The launch of the new standard, which was originally published in December 2023, follows a BSI poll revealing widespread support for international guidelines on the safe implementation and use of AI systems. The survey of 10,000 adults across nine countries found that 60% would support the creation of international guidelines on the safe and responsible use of AI. Two-fifths said they already use AI in their workplace, while two-thirds of respondents expect the technology to become widespread within their industry by the end of the decade.
In a separate statement reacting to the publication of the new standard, Databricks’ vice-president of generative AI, Naveen Rao, welcomed thoughtful new guardrails for the technology – but warned against overregulating LLMs and open-source AI. “Productivity enhancements from AI can be seen in many forms, like improving customer and employee experiences,” said Rao, citing applications that, in his view, do not appear to be negatively impacting jobs. “Any new regulation must not be at the cost of stifling smaller start-ups and academic researchers from being able to do their work and research. The more we understand these models, the more we can share ideas on how to safely shape a future with AI.”
The launch of BS ISO/IEC 42001 follows a flurry of AI regulatory developments in the UK. These include the Information Commissioner’s Office (ICO) announcing a new consultation into how AI developers are complying with data protection law, and news that the UK government may soon publish the benchmarks it will use to decide whether to pass new legislation regulating the technology.