
‘Unintended harms’ of generative AI pose national security risk to UK, report warns

While public debate has focused on specific risks posed by automated systems, unknown consequences could be equally damaging.

By Matthew Gooding

Unintended consequences of generative AI use could cause significant harm to the UK’s national security, a new report has warned.

Generative AI could lead to increasingly sophisticated deepfake content being produced, a new report warns. (Photo by Tero Vesalainen/Shutterstock)

The paper from the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute highlights key areas of concern that need to be addressed to protect the nation from threats posed by these powerful technologies.

The unintended security risks of generative AI

In the report, titled Generative AI and National Security: Risk Accelerator or Innovation Enabler?, the authors point out that conversations about threats have focused primarily on understanding the risks from groups or individuals who set out to inflict harm using generative AI, such as through cyberattacks or by generating child sexual abuse material. Generative AI is expected to amplify the speed and scale of these activities, and Tech Monitor reported this week that security professionals have highlighted the increased risk posed by AI-powered phishing attacks, which enable cybercriminals to generate more authentic-looking communications to lure in victims.

But the report also urges policymakers to plan for the unintentional risks posed by improper use of, and experimentation with, generative AI tools, as well as excessive risk-taking resulting from over-trust in AI outputs. These risks could stem from the adoption of AI in critical national infrastructure or its supply chains, and from the use of AI in public services.

Private sector experimentation with AI could also lead to problems, with the fear of missing out on AI advances potentially clouding judgments about higher-risk use cases, the authors argue.

Generative AI might offer opportunities for the national security community, says Ardi Janjeva, research associate at CETaS at The Alan Turing Institute. But he believes it is “currently too unreliable and susceptible to errors to be trusted in the highest stakes contexts”.

Janjeva said: “Policymakers must change the way they think and operate to make sure that they are prepared for the full range of unintended harms that could arise from improper use of generative AI, as well as malicious uses.”


The research team consulted more than 50 experts across government, academia, civil society and leading private sector companies, most of whom felt that unintended harms are not receiving adequate attention compared with the adversarial threats national security agencies are accustomed to facing.

The report analyses political disinformation and electoral interference, raising particular concerns about the cumulative effect of different types of generative AI technology working together to spread misinformation at scale, for instance by creating realistic deepfake videos. Debunking a false AI-generated narrative in the hours or days preceding an election would be particularly challenging, it warns.

It cites the example of an AI-generated video of a politician delivering a speech at a venue they never attended, which may be seen as more plausible if presented with an accompanying selection of audio and imagery, such as the politician taking questions from reporters and text-based journalistic articles covering the content of the supposed speech.

How to combat AI’s unintended consequences

The Alan Turing Institute says the CETaS report has been released to build on the momentum created by the UK’s AI Safety Summit, which saw tech and political leaders come together to discuss how artificial intelligence can be implemented without causing societal harm.

It makes policy recommendations for the new AI Safety Institute, announced prior to the summit, and for other government departments and agencies, designed to help address both malicious and unintentional risks.

These include guidance on evaluating AI systems and on the appropriate use of generative AI for intelligence analysis. The report also highlights that autonomous AI agents, a popular early use case for the technology, could accelerate both opportunities and risks in the security environment, and offers recommendations to ensure their safe and responsible use.

Professor Mark Girolami, chief scientist at the Alan Turing Institute, said: “Generative AI is developing and improving rapidly and while we are excited about the many benefits associated with the technology, we must exercise sensible caution about the risks it could pose, particularly where national security is concerned.

“With elections in the US and the UK on the horizon, it is vital that every effort is made to ensure this technology is not misused, whether intentionally or not.”

Read more: The UK is building a £225m AI supercomputer
