
UK government launches AI cybersecurity codes of practice

The guidelines aim to bolster AI cybersecurity and establish a global standard for protecting models against threat actors.

By Greg Noone

The UK government is calling for views on a new set of voluntary guidelines for AI cybersecurity. The ‘AI Cyber Security Code of Practice’ will include recommendations for developers on how to best protect their AI products and services against possible breaches, sabotage or tampering. During a speech at the CYBERUK conference, technology minister Saqib Bhatti said the new guidelines would help to establish a global standard for AI cybersecurity and afford British businesses greater protections against cyberattacks. 

“We have always been clear that to harness the enormous potential of the digital economy, we need to foster a safe environment for it to grow and develop,” said Bhatti. “This is precisely what we are doing with these new measures, which will help make AI models resilient from the design phase.”

A new code of practice, now put out for consultation among industry leaders, aims at establishing tighter AI cybersecurity for models developed and used in the UK. (Photo by Shutterstock)

AI cybersecurity guidelines partially based on NCSC guidance

Developed by the Department for Science, Innovation & Technology (DSIT), and building on the National Cyber Security Centre (NCSC) guidelines for secure AI system development published late last year, the draft AI Cyber Security Code of Practice arrives amid mixed news for the UK cybersecurity scene. Though the sector has grown by 13% over the past year, according to government figures, half of businesses and almost a third of charities reported falling victim to breaches in the same period.

The growing popularity of generative AI among businesses is likely to open new avenues of attack for cybercriminals. “GenAI systems are particularly vulnerable to data poisoning and model theft,” said Kevin Curran, professor of cybersecurity at Ulster University and a senior member of the Institute of Electrical and Electronics Engineers. “If companies cannot explain how their GenAI systems work or how they have reached their conclusions, it can raise concerns about accountability and make it difficult to identify and address other potential risks.”

The new AI cybersecurity guidelines will provide businesses with a list of best practices and recommendations on how to solve these challenges, said the NCSC’s chief executive Felicity Oswald. “The new codes of practice will help support our growing cyber security industry to develop AI models and software in a way which ensures they are resilient to malicious attacks,” said Oswald. “Setting standards for our security will help improve our collective resilience and I commend organisations to follow these requirements to help keep the UK safe online.”

Labour Party views on AI incoming

The call for views runs until 10 July 2024. In the meantime, companies experimenting with AI applications would be well-placed to take their own steps to shore up their security, said Curran.

“Organisations should consult with data protection experts and keep abreast of regulatory changes,” he explained, which “helps not only in avoiding legal pitfalls but also in maintaining consumer trust by upholding ethical AI practices and ensuring data integrity. Other best practices include minimising and anonymising data use, establishing robust data governance policies, conducting regular audits and impact assessments, securing data environments, and reminding staff of current security protocols.”
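For readers wondering what “minimising and anonymising data use” might look like in practice, the short Python sketch below shows one illustrative approach: dropping fields a model has no stated need for, then replacing a direct identifier with a salted one-way hash. The field names, salt handling and hash choice are assumptions for illustration only, not requirements drawn from the draft code of practice.

import hashlib

# Illustrative only: data minimisation plus pseudonymisation before records
# reach an AI training pipeline. Field names and the salt are hypothetical.
SALT = b"example-salt"  # in practice, store and rotate via a secrets manager

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def minimise(record: dict, allowed: set) -> dict:
    """Keep only the fields the downstream model has a stated need for."""
    return {key: value for key, value in record.items() if key in allowed}

raw = {
    "email": "jane@example.com",
    "age": 41,
    "postcode": "BT48",
    "query_text": "reset my password",
}
clean = minimise(raw, allowed={"email", "query_text"})
clean["email"] = pseudonymise(clean["email"])
print(clean)  # the email is now a 16-character hash; age and postcode are gone

A salted hash prevents trivial reversal of an identifier, though genuinely irreversible anonymisation may call for stronger techniques such as aggregation or differential privacy.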


Today’s call for views on both codes of practice forms part of the Conservative government’s wider work on AI safety, said its minister for AI and Intellectual Property, Viscount Camrose. Specific policies from the opposition Labour Party, meanwhile, remain scant despite the promise last year of a Green Paper on technology policy. However, shadow DSIT secretary Peter Kyle said today that the party will make its views on AI clear in the next few weeks as part of a policy push ahead of the general election later this year.

Read more: AI security urgently needs to be prioritised
