December 7, 2020 (updated 31 Mar 2023, 10:00am)

Malicious use of AI is rising – CIO voice clones among active techniques

"The threat landscape is evolving at a rate which is absolutely relentless."

By Claudia Glover

The malicious use of machine learning (ML) to enhance the capabilities of cybercriminals and their tools is rising rapidly. Now, a new report by security company Trend Micro, Europol and the United Nations has outlined the growing dangers going into 2021 of AI-enhanced business email compromise (BEC) attacks, deep fakes, voice cloning, and the ability to pre-emptively assess the detection features and detection rule chain of anti-virus products.

As one of the report’s authors tells Tech Monitor, AI-enhanced voice clones (including of one company’s CIO) have already been used in attacks.

The report on malicious uses of Artificial Intelligence (AI) and ML outlines risks present, near-term and further into the future as cybercriminals become more adept at creating, adapting and selling AI and ML-optimised tools using the rapidly growing dark web Crime-as-a-Service (CaaS) business model.

“We have seen the threat landscape evolving at a rate which is absolutely relentless,” explains Bharat Mistry, principal security strategist at Trend Micro, noting that many of these evolutions in criminal innovation are already being widely used. “The whole criminal underground has expanded in a way that we never thought would ever happen.”

Exfiltrating unstructured data

One AI-enhanced tool suspected of being a growing hit in cybercrime circles (as well as enterprises) is named entity recognition (NER), which lets users automatically mine exfiltrated, unstructured data for valuable details.

“A large amount of the world’s data in any business now lives in documents,” says Adam Etches, the technical director at Chorus Intelligence, a company that provides intelligence analysis software to law enforcement. “This is effectively the ability to exfiltrate documents, and then automatically parse that to find those interesting pieces of information, such as account names, email addresses, passwords and bank account details.”

According to Etches, such tools are widely available on GitHub and have a comparatively low barrier to entry; they are used across sectors: “We use NER within our tools to help the police to understand their unstructured data in just the same way that criminals could try to exploit them.”
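The kind of extraction Etches describes can be sketched in a few lines. The snippet below is a minimal stand-in for a trained NER model, using simple regular-expression patterns (the pattern names and the sample text are illustrative, not from the report) to show how entities such as email addresses and account numbers can be pulled from unstructured text at scale:

```python
import re

# Hypothetical stand-in for a trained NER model: pattern-based
# extraction of the kinds of entities the report describes
# (email addresses, bank account identifiers) from free text.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def extract_entities(text: str) -> dict[str, list[str]]:
    """Return every match for each entity pattern found in the text."""
    return {label: pat.findall(text) for label, pat in PATTERNS.items()}

doc = "Contact jane.doe@example.com; settle to GB29NWBK60161331926819."
print(extract_entities(doc))
```

A real attacker (or a legitimate intelligence tool) would swap the regexes for a statistical NER model, which also recognises names, organisations and addresses that have no fixed surface pattern.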


The business process compromise scam

So-called “business process compromise” scams, which could help a cybercriminal hold a production line to ransom, are also being turbo-charged by emerging technologies, the report emphasises. This could see attackers use compromised sensors (for example, “through the use of a botnet, SIM-jacking, or other means”) to feed ML models underpinning production with false sensor telemetry, opening the door to “large-scale persistent fraud”.

As Mistry notes: “In manufacturing it’s becoming common to see production lines and IoT networks coming together with the cloud.”

False telemetry or other spoofed readings could be serious enough to force the company to recall an entire production run, yet subtle enough to go unnoticed until the criminal chooses to reveal the tampering: “They’ve got the production line at ransom. They’re not crippling them, but they’re holding them to physical ransom,” notes Mistry of the technique.
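The attack works because tampered readings can stay inside a plant's anomaly thresholds. The toy example below (the threshold rule, baseline figures and bias size are illustrative assumptions, not taken from the report) shows a naive three-sigma check failing to flag a small, persistent bias injected into every sensor reading:

```python
import statistics

# Illustrative sketch: a naive anomaly check flags readings more
# than 3 standard deviations from a historical baseline, so an
# attacker who biases each reading by a fraction of one deviation
# slips past while still corrupting the downstream process model.
baseline = [20.0, 20.2, 19.9, 20.1, 20.0, 19.8, 20.1, 20.0]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(reading: float) -> bool:
    """Flag readings outside the 3-sigma band around the baseline."""
    return abs(reading - mean) > 3 * stdev

# A small, persistent bias on every reading: each one still passes.
tampered = [r + 0.5 * stdev for r in baseline]
print(any(is_anomalous(r) for r in tampered))  # → False
```

Because no single reading trips the detector, the fraud is only visible in aggregate, which is exactly the “large-scale persistent fraud” scenario the report warns about.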

Voice cloning and writer synthesis

Online scams such as BEC and phishing attacks are becoming craftier and more pervasive as AI advancements make them harder to catch.

For instance, robocalling has recently received an AI makeover. To add validity to an initial email-based scam, the victim is told they will receive a call to confirm information or follow up on the email. Highly natural-sounding, AI-enhanced voice clones can make such calls far more compelling than the robotic voices used in the past.

AI-enhanced voice cloning is already being used to try to scam customers of Trend Micro, Mistry notes, with some calls sounding convincingly authentic.

As he notes: “The primary use I’ve seen of voice cloning is business email compromise attacks where you see an email and they say in the email that you’re going to get a call from the CEO or CFO, and then a voice clone comes in. I was talking to a customer last year, who played me a recording of the CIO of his organisation. He knew it was a fake because the CIO was on holiday and would not have phoned, but listening to it you wouldn’t have known.”

With voice cloning and the misuse of a technique called writer synthesis, convincing scam calls and emails purporting to come from the CEO have already arrived.

Writer synthesis is a technique used by Trend Micro, among others, to judge the authenticity of a person’s writing. Mistry explains how easily it can be turned to a cybercriminal’s advantage: “We look at emails, we look at grammatical structure, time of day and language use, then you have this very simple system that says: if you see enough deviation from that, then that wasn’t the author. Assuming you can get hold of the digest, which is fully available, get it to learn how people have written messages and blogs, you could generate [seemingly authentic] emails at will.”
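The deviation check Mistry describes can be illustrated with a single stylometric feature. The sketch below uses average sentence length as that feature (the feature choice, sample messages and tolerance are illustrative assumptions, not Trend Micro's actual model) to flag a message that does not match an author's profile:

```python
def avg_sentence_length(text: str) -> float:
    """Average number of words per sentence, a basic stylometric feature."""
    sentences = [s for s in text.replace("!", ".").split(".") if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

def deviates(sample: str, profile: float, tolerance: float = 3.0) -> bool:
    """Flag a sample whose style strays too far from the author's profile."""
    return abs(avg_sentence_length(sample) - profile) > tolerance

# Build a profile from known-genuine messages, then test a suspect one.
known = "Please review the Q3 numbers today. Send the summary to finance."
suspect = ("wire the funds now to the account below and do not tell "
           "anyone because this is confidential and time sensitive")

profile = avg_sentence_length(known)
print(deviates(suspect, profile))  # → True: style mismatch flagged
```

The attacker's inversion of this, as Mistry notes, is to train a text generator on the same corpus so that generated emails score inside the tolerance band instead of outside it.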

The culmination of this sort of scam is the use of the much-discussed deep fakes phenomenon to feed people seemingly trustworthy information. Independent of the format (sound, image, or video), synthetic media can be generated using generative adversarial networks (GANs), an unsupervised machine learning technique invented in 2014 that has revolutionised the deep-learning field, the report notes; and while they have until now required large training sets, that is rapidly changing.

While manipulated audio has been seen in a range of attacks, deep fake video – despite the hype – does not appear to have widely taken off in the cybercrime realm, perhaps for the simple reason that less data-science-hungry attacks, such as exploiting unpatched software, remain effective, cheap and easy.

Yet crucially, it is nearly impossible to legislate against the technology itself without curbing the growth and progress of the entire AI and machine learning trend, says Mistry: “To put legislation in place would be difficult. How would you police it? How would you monitor it? How would you know whether it’s being done? If you start going down that path, I would say you’re all of a sudden in hot water, because you’re in this territory of Big Brother is watching everything that’s going on. And that’s the last thing you want, because then you stifle innovation, you stifle a lot of growth.”

As new tools become available to defenders, opportunists will always be alongside them, ready to turn those same tools to their advantage. The internet has always hosted a game of cat and mouse between criminals and law enforcement, and these tools are no different.

The best line of defence is being aware of new capabilities and how they can be useful for cybercrime, as and when they arise.
