The UK’s National Cyber Security Centre (NCSC) has warned that AI malware has likely been developed by nation-states and may soon be deployed by organised cybercriminal gangs. In a new report about how AI is set to transform the cybersecurity landscape, the watchdog argued that the technology is likely to increase both the scale and efficiency of ransomware attacks as it lowers the barrier of entry to cybercrime for hacktivists. 

“The emergent use of AI in cyberattacks is evolutionary, not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term,” said the NCSC’s chief executive, Lindy Cameron. “As the NCSC does all it can to ensure AI systems are secure-by-design, we urge organisations and individuals to follow our ransomware and cybersecurity hygiene advice to strengthen their defences and boost their resilience to cyberattacks.”

An abstract blue background, used to illustrate a story about an NCSC warning about the threat of AI malware.
In a new report, the NCSC has issued a warning about the potentiality of AI malware to increase the scale of the ransomware epidemic currently infecting the private sector. (Photo by Nay sayloms / Shutterstock)

AI malware already a potent threat

The NCSC report also noted the appearance of ‘GenAI-as-a-service’ offers by cybercriminal gangs. “Generative AI…can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing,” it read. It will also afford new opportunities for social engineering, aid in reconnoitring new targets, and make coding new malware and phishing attacks more efficient.

Multiple threat actors are already using these services, said the NCSC. The cybersecurity organisation also speculated that several nation-states are probably in possession of AI-generated malware capable of evading detection by sophisticated antivirus software. However, the NCSC was careful to point out that such programs will only ever be effective if they are trained on “quality exploit data,” which currently appears to be lacking in several known variants of large language models (LLM) marketed to criminals. 

Businesses can defend against AI malware threats using simple measures

This will undoubtedly change going into 2025, said the NCSC. As successful exfiltrations of data accumulate in the next two years, it explained, “the data feeding AI will almost certainly improve, enabling faster, more precise cyber operations.” Businesses should begin shoring up their cyber defences against such attacks now, it added, recommending that even taking the minimal precautions outlined in the NCSC guide on ransomware would reduce the likelihood of a data breach.

Today’s report follows similar warnings from the NCSC about the potential of AI to supercharge cyberattacks, or else be exploited by opportunistic cybercriminals. In November, the organisation collaborated with its international peers to develop new cybersecurity guidelines for AI developers to render them secure by design. This came after a warning from the NCSC that businesses considering deploying LLMs should treat like “beta” products given their vulnerability to prompt injection attacks designed to extract confidential corporate data. 

Monetisation survey

Read more: UK critical infrastructure needs better cybersecurity to withstand attacks, says NCSC