Cybersecurity guidelines for developers working on new AI systems have been unveiled by the UK and 17 of its allies. It is the latest attempt by the UK government to take a leading role in the debate around AI safety, following the international AI Safety Summit held at Bletchley Park earlier this month.

The NCSC has launched new cybersecurity guidelines for AI developers. (Photo by T. Schneider/Shutterstock)

The guidelines aim to raise the cybersecurity standards of artificial intelligence systems and help ensure they are designed, developed, and deployed securely, the UK’s National Cyber Security Centre (NCSC) said.

They will be officially launched this afternoon at an event hosted by the NCSC and attended by 100 partners from industry and the public sector.

New cybersecurity guidelines for AI development launched

The Guidelines for Secure AI System Development have been developed by the NCSC and the US’s Cybersecurity and Infrastructure Security Agency (CISA) in cooperation with industry experts and 21 other agencies and ministries from across the world.

They will help developers of any system that uses AI to make informed cybersecurity decisions at every stage of the development process, the NCSC said. This includes both systems created from scratch and those built on top of tools and services provided by others.

It is hoped the guidelines will encourage developers to take a “secure by design” approach to building AI systems, with cybersecurity baked in from the outset.

NCSC CEO Lindy Cameron said: “We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up.

“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

The guidelines are broken down into four key areas – secure design, secure development, secure deployment, and secure operation and maintenance – each with suggested behaviours to help improve security.

CISA Director Jen Easterly said the guidelines are a “key milestone in our collective commitment – by governments across the world – to ensure the development and deployment of artificial intelligence capabilities that are secure by design.”

Easterly added: “The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology evolution.

“This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of cross-border collaboration in securing our digital future.”

UK attempts to lead conversation on AI safety

Alongside the UK and US, countries endorsing the guidelines include Germany, France and South Korea.

They build on the outcomes of the international AI Safety Summit, convened by the UK government at Bletchley Park and attended by government officials and the world’s leading technology vendors and AI labs.

The event saw the Bletchley Declaration agreed, with signatories pledging to work together closely on AI safety. Developers such as OpenAI and Anthropic also agreed to submit their next-generation, or frontier, AI models for inspection by the UK’s recently announced AI Safety Institute. Prime Minister Rishi Sunak said the institute would be the first of its kind in the world, though the US government is also setting up a similar body.

Technology Secretary Michelle Donelan said: “I believe the UK is an international standard bearer on the safe use of AI. The NCSC’s publication of these new guidelines will put cyber security at the heart of AI development at every stage so protecting against risk is considered throughout.”
