March 28, 2023 (updated 3 August 2023, 9:36am)

OpenAI’s ChatGPT safeguards ‘trivial to bypass’ for criminals, Europol says

The chatbot is dangerous when in the wrong hands, the European police force has warned.

By Claudia Glover

Safeguards put in place to prevent OpenAI’s ChatGPT writing malicious code can be circumvented, Europol has warned. This means the chatbot can be duped into writing malware, and criminals with little to no technical knowledge could take advantage.

Europol finds the ChatGPT large language model increasingly dangerous in the hands of cybercriminals. (Photo by My Eyes4u/Shutterstock)

In a new report, ‘The Impact of Large Language Models (LLMs) on Law Enforcement’, Europol also warns that chatbots like ChatGPT can be used to write flawless phishing and scam emails, sharpening the risk of these cybercrimes as well. Released in November 2022, ChatGPT has wowed the internet with its ability to produce accurate and detailed text on a wide range of subjects, so it is no surprise to see it being used by criminals.

ChatGPT’s blocks on malicious code are trivial to bypass, says Europol

Though OpenAI has put rules in place to ensure that ChatGPT does not deliver malicious code, these checks can be dodged easily, says the Europol report. “Critically, the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing,” it says. “If prompts are broken down into individual steps, it is trivial to bypass these safety measures.”

Unsurprisingly, the chatbot is excellent at creating natural, authentic-sounding phrases, making it a useful tool for phishing attacks and other scams that involve plausible-sounding communications. “The context of the phishing email can be adapted easily depending on the needs of the threat actor, ranging from fraudulent investment opportunities to business e-mail compromise and CEO fraud,” continues the report. The tool may therefore also offer criminals new opportunities for crimes involving social engineering, which are widely considered to be one of the most effective modes of entry into a system.

A powerful chatbot could also be more proficient than many humans at creating propaganda. “Not only would this type of application facilitate the perpetration of disinformation, hate speech and terrorist content online, it would also allow users to give it misplaced credibility, having been generated by a machine and thus appearing more objective,” Europol’s report says.

These risks will only grow as LLMs become more advanced. GPT-4, released earlier this month, has already made improvements that could provide even further assistance to cybercriminals. The newer model, for example, is better at understanding the context of code, as well as at interpreting error messages and fixing programming mistakes, lowering the bar still further for would-be attackers seeking to harm an organisation.

ChatGPT and cybercrime

Russian cyber gangs are already bypassing restrictions on existing chatbots. According to security researchers, there have been multiple instances of hackers attempting to circumvent OpenAI’s IP address, payment card and phone number restrictions.


“We believe these hackers are most likely trying to implement and test ChatGPT in their day-to-day criminal operations. Cybercriminals are growing more and more interested in ChatGPT, because the AI technology behind it can make a hacker more cost-efficient,” Sergey Shykevich, threat intelligence group manager at Check Point Software Technologies, told Tech Monitor in January.

The same company uncovered the risk that the chatbot could be used to write malicious code. “ChatGPT has the potential to significantly alter the cyber threat landscape. Now anyone with minimal resources and zero knowledge in code can easily exploit it to the detriment of his imagination,” Shykevich said.

Read more: OpenAI fixes ChatGPT bug that may have breached GDPR
