July 20, 2022 (updated 31 Mar 2023)

How AI will extend the scale and sophistication of cybercrime

Cybercriminals are already using AI to make their attacks more effective and far-reaching. The practice will only become more widespread.

By Ryan Morrison

Artificial intelligence has been described as a ‘general purpose technology’. This means that, like electricity, computers and the internet before it, AI is expected to have applications in every corner of society. Unfortunately for organisations seeking to keep their IT secure, this includes cybercrime.

In 2020, a study by European police agency Europol and security provider Trend Micro identified how cybercriminals are already using AI to make their attacks more effective, and the many ways AI will power cybercrime in the future.

“Cybercriminals have always been early adopters of the latest technology and AI is no different,” said Martin Roesler, head of forward-looking threat research at Trend Micro, when the report was published. “It is already being used for password guessing, CAPTCHA-breaking and voice cloning, and there are many more malicious innovations in the works.”

Just as tech leaders need to understand how AI can help their organisations achieve their aims, they must also understand how it will bolster the sophistication and scale of criminal cyberattacks, so they can begin to prepare against them.

AI offers cybercriminals a number of ways to make their social engineering attacks more effective. (Image by Urupong / iStock)

How AI is used for cybercrime today

AI is already being used by cybercriminals to improve the effectiveness of traditional cyberattacks. Many applications focus on bypassing the automated defences that secure IT systems.

One example, identified in the Europol report, is the use of AI to craft malicious emails that can bypass spam filters. In 2015, researchers discovered a system that used ‘generative grammar’ to create a large dataset of email texts. “These texts are then used to fuzz the antispam system and adapt to different filters in order to identify content that would no longer be detected by spam filters,” the report warns.
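What such a fuzzing loop looks like is easy to sketch. In the minimal Python example below, the keyword filter, the rewrite rules and the email text are all invented stand-ins; a real attacker would be probing a deployed spam filter rather than this toy, but the adapt-and-retry structure is the same.

```python
import random

# Toy stand-in for a spam filter: flags mail containing known trigger words.
TRIGGER_WORDS = {"free", "winner", "urgent", "prize"}

def is_flagged(text: str) -> bool:
    return any(word in TRIGGER_WORDS for word in text.lower().split())

# Crude 'generative grammar': rewrite rules that keep the message readable
# to a human while changing the tokens the filter sees.
REWRITES = {
    "free": ["f-r-e-e", "complimentary"],
    "winner": ["selected recipient"],
    "urgent": ["time-sensitive"],
    "prize": ["reward"],
}

def mutate(text: str) -> str:
    """Rewrite one randomly chosen trigger word, if the chosen word has a rule."""
    words = text.split()
    i = random.randrange(len(words))
    if words[i].lower() in REWRITES:
        words[i] = random.choice(REWRITES[words[i].lower()])
    return " ".join(words)

def fuzz(seed: str, attempts: int = 1000) -> str | None:
    """Mutate the seed email until a variant slips past the filter."""
    candidate = seed
    for _ in range(attempts):
        if not is_flagged(candidate):
            return candidate
        candidate = mutate(candidate)
    return None

print(fuzz("URGENT you are a winner claim your free prize"))
```

The same feedback loop works against any automated classifier the attacker can query repeatedly, which is what makes the technique general.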

Researchers have also demonstrated malware that applies a similar technique against antivirus software, employing an AI agent to find weak spots in the software’s malware detection algorithm.

AI can be used to support other hacking techniques, such as guessing passwords. Some tools use AI to analyse a large dataset of passwords recovered from public leaks and hacks on major websites and services. This reveals how people modify their passwords over time – such as adding numbers on the end or replacing ‘a’ with ‘@’.
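A hedged sketch of this kind of pattern-based guessing: the substitution table and suffixes below are illustrative assumptions, standing in for transformation rules a real tool would learn from leaked password corpora.

```python
import itertools

# Illustrative rules of the kind learned from leaked password datasets:
# leetspeak substitutions plus commonly appended digits and symbols.
SUBSTITUTIONS = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}
COMMON_SUFFIXES = ["", "1", "123", "!", "2022"]

def leet_variants(word: str):
    """Yield the word with every combination of substitutions applied."""
    options = [(c, SUBSTITUTIONS[c]) if c in SUBSTITUTIONS else (c,)
               for c in word.lower()]
    for combo in itertools.product(*options):
        yield "".join(combo)

def candidates(base: str):
    """Expand one base word into a stream of likely password guesses."""
    for variant in leet_variants(base):
        for suffix in COMMON_SUFFIXES:
            yield variant + suffix

# 'password' alone expands to 80 guesses, including 'p@$$w0rd123'.
print(sum(1 for _ in candidates("password")))
```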

Work is also underway to use machine learning to break CAPTCHAs, the challenges found on most websites to verify that a user is human, with Europol discovering evidence of active development on criminal forums in 2020. It is not clear how advanced this development is but, given enough computing power, AI will eventually be able to break CAPTCHAs, Europol predicts.
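Published CAPTCHA-solving research typically trains a small convolutional network to recognise individual characters cut out of the image. The PyTorch sketch below shows the general shape of such a model; the architecture and input sizes are illustrative assumptions, not details of any tool Europol observed.

```python
import torch
from torch import nn

class CharClassifier(nn.Module):
    """Tiny CNN that classifies one segmented CAPTCHA character at a time."""

    def __init__(self, n_classes: int = 36):  # 26 letters + 10 digits
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):  # x: (batch, 1, 32, 32) greyscale character crops
        return self.head(self.features(x).flatten(1))

model = CharClassifier()
logits = model(torch.randn(4, 1, 32, 32))  # four dummy character images
print(logits.shape)  # torch.Size([4, 36])
```

Trained on enough labelled examples, even a model this small can read many text CAPTCHAs; the bottleneck is data and compute, not algorithmic novelty.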

AI and social engineering

Other uses of AI for cybercrime focus on social engineering, deceiving human users into clicking malicious links or sharing sensitive information.

First, cybercriminals are using AI to gather information on their targets. This includes identifying all the social media profiles of a given person, including by matching their user photos across platforms.

Once they have identified a target, cybercriminals are using AI to trick them more effectively. This includes creating fake images, audio and even video to make their targets think they are interacting with someone they trust.

One tool, identified by Europol, performs real-time voice cloning. With a five-second voice recording, hackers can clone anyone’s voice and use it to gain access to services or deceive other people. In 2019, the chief executive of a UK-based energy company was tricked into paying £200,000 by scammers using an audio deep fake.

Even more brazenly, cybercriminals are using video deep fakes – which superimpose another person’s face over their own – in remote IT job interviews in order to gain access to sensitive IT systems, the FBI warned last month.

In addition to these individual methods, cybercriminals are using AI to help automate and optimise their operations, says Bill Conner, CEO of cybersecurity provider SonicWall. Modern cybercriminal campaigns involve a cocktail of malware, ransomware-as-a-service delivered from the cloud, and AI-powered targeting.

These complex attacks require AI for testing, automation and quality assurance, Conner explains. “Without the AI it wouldn’t be possible at that scale.”

The future of AI-powered cybercrime

The use of AI by cybercriminals is expected to increase as the technology becomes more widely available. Experts predict that this will allow them to launch cyberattacks at far greater scale than is currently possible. For example, criminals will be able to use AI to analyse more information to identify targets and vulnerabilities, and attack more victims at once, Europol predicts.

They will also be able to generate more content with which to deceive people. Large language models, such as OpenAI’s GPT-3, which can be used to generate realistic text and other outputs, may have a number of cybercriminal applications. These could include mimicking an individual’s writing style or creating chatbots that victims mistake for real people.
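Part of the concern is how little code text generation now requires. The sketch below uses the freely downloadable GPT-2, via Hugging Face’s transformers library, as a stand-in for larger models such as GPT-3; the phishing-style prompt is an invented example.

```python
from transformers import pipeline

# GPT-2 as a freely available stand-in for larger commercial models.
generator = pipeline("text-generation", model="gpt2")

prompt = "Hi team, quick reminder about the invoice we discussed:"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

Scaled up and tuned on a target’s own emails, the same few lines could turn out plausible lures in bulk.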

AI-powered software development, which businesses are beginning to use, could also be employed by hackers. Europol warns that AI-based ‘no code’ tools, which convert natural language into code, could lead to a new generation of ‘script kiddies’ with low technical knowledge but the ideas and motivation for cybercrime.

Malware itself will become more intelligent as AI is embedded within it, Europol warns. Future malware could search documents on a machine and look for specific pieces of information, such as employee data or protected intellectual property.
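At its core, the capability Europol describes is a targeted search. A benign Python sketch, with hypothetical patterns and a placeholder directory, shows how little sophistication it requires once code is running on a victim’s machine.

```python
import re
from pathlib import Path

# Hypothetical patterns for the kind of data Europol expects malware to hunt:
# contact details and identifiers that mark employee records.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def scan(root: str):
    """Walk a directory tree and yield every match for each pattern."""
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            for match in pattern.findall(text):
                yield path, label, match

for path, label, match in scan("./documents"):  # placeholder directory
    print(f"{path}: {label}: {match}")
```

The ‘intelligence’ Europol anticipates lies in choosing what to search for on each machine, not in the search itself.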

Ransomware attacks, too, are predicted to be enhanced with AI. Not only will AI help ransomware groups find new vulnerabilities and victims, but it will also help them avoid detection for longer, by ‘listening’ for the measures companies use to detect intrusions into their IT systems.

As the ability of AI to mimic human behaviour evolves, so too will its ability to break certain biometric security systems, such as those which identify a user based on the way they type. It could also spoof realistic user behaviour – such as being active during specific hours – so that stolen accounts aren’t flagged by behavioural security systems.
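A toy example makes this cat-and-mouse concrete. Suppose a behavioural system profiles a user’s typing rhythm as the mean and spread of gaps between keystrokes, and flags sessions that deviate from it; the Gaussian model and threshold below are illustrative assumptions only.

```python
import random
import statistics

def profile(intervals):
    """Store a user's typing rhythm as mean and spread of inter-key gaps (s)."""
    return statistics.mean(intervals), statistics.stdev(intervals)

def looks_like_user(intervals, mean, stdev, tolerance=2.0):
    """Accept a session whose average gap sits within the user's normal range."""
    return abs(statistics.mean(intervals) - mean) <= tolerance * stdev

user_mean, user_std = profile([0.12, 0.18, 0.15, 0.22, 0.11, 0.19])

# A bot typing at a constant 5ms is trivially flagged...
print(looks_like_user([0.005] * 6, user_mean, user_std))  # False

# ...but intervals sampled from the user's own distribution usually pass,
# which is exactly the mimicry Europol anticipates.
fake = [random.gauss(user_mean, user_std) for _ in range(6)]
print(looks_like_user(fake, user_mean, user_std))  # True (usually)
```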

Lastly, AI will enable cybercriminals to make better use of compromised IoT devices, predicts Todd Wade, an interim CISO and author of BCS’ book on cybercrime. Already employed to power botnets, these devices will be all the more dangerous when coordinated by AI.

How to prepare for AI cybercrime

Protecting against AI-powered cybercrime will require responses at the individual, organisational and society-wide levels.

Employees will need to be trained to identify new threats such as deep fakes, says Wade. “People are used to attacks coming in a certain way,” he says. “They are not used to the one-off, maybe something that randomly appears on a Zoom call or WhatsApp message, and so are not prepared when it happens.”

In addition to the usual cybersecurity best practices, organisations will need to employ AI tools themselves to match the scale and sophistication of future threats. “You are going to need AI tools just to keep up with the attacks and if you don’t use these tools to combat this there is no way you’ll keep up,” says Wade.

But the way in which AI is developed and commercialised will also need to be managed to ensure it cannot be hijacked by cybercriminals. In its report, Europol called on governments to ensure that AI systems adhere to ‘security-by-design’ principles, and develop specific data protection frameworks for AI.

Today, many of the AI capabilities discussed above are too expensive or technically complex for the typical cybercriminal. But that will change as the technology develops. The time to prepare for widespread AI-powered cybercrime is now.

Read more: This is how GPT-4 will be regulated
