Safeguards put in place to prevent OpenAI’s ChatGPT from writing malicious code can be circumvented, Europol has warned. This means the chatbot can be duped into writing malware, and criminals with little to no technical knowledge could take advantage.
In a new report, ‘The Impact of Large Language Models (LLMs) on Law Enforcement’, Europol also warns that chatbots like ChatGPT can be used to write flawless phishing and scam emails, heightening the risk of these cybercrimes as well. Released in November 2022, ChatGPT has wowed the internet with its ability to produce accurate and detailed text on a wide range of subjects, so it is no surprise to see it being put to use by criminals.
ChatGPT’s blocks on malicious code are trivial to bypass, says Europol
Though OpenAI has put rules in place to ensure that ChatGPT does not deliver malicious code, these checks can be dodged easily, says the Europol report. “Critically, the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing,” it says. “If prompts are broken down into individual steps, it is trivial to bypass these safety measures.”
Unsurprisingly, the chatbot is excellent at creating natural, authentic-sounding phrases, making it a useful tool for phishing attacks and other scams that rely on plausible-sounding communications. “The context of the phishing email can be adapted easily depending on the needs of the threat actor, ranging from fraudulent investment opportunities to business e-mail compromise and CEO fraud,” the report continues. The tool may therefore also offer criminals new opportunities for social engineering attacks, widely considered one of the most effective ways to gain entry to a system.
A powerful chatbot could also be more proficient than many humans at creating propaganda. “Not only would this type of application facilitate the perpetration of disinformation, hate speech and terrorist content online, it would also allow users to give it misplaced credibility, having been generated by a machine and thus appearing more objective,” Europol’s report says.
These risks will only grow as LLMs become more advanced. GPT-4, released earlier this month, already includes improvements that could be of even greater assistance to would-be cybercriminals. The newer model, for example, is better at understanding the context of code, as well as at correcting error messages and fixing programming mistakes, lowering the bar even further for attackers with little technical skill to actively harm an organisation.
ChatGPT and cybercrime
Russian cybercrime gangs are already bypassing restrictions on existing chatbots. According to security researchers, there have been multiple instances of hackers attempting to circumvent OpenAI’s IP address, payment card and phone number restrictions.
“We believe these hackers are most likely trying to implement and test ChatGPT in their day-to-day criminal operations. Cybercriminals are growing more and more interested in ChatGPT, because the AI technology behind it can make a hacker more cost-efficient,” Sergey Shykevich, threat intelligence group manager at Check Point Software Technologies, told Tech Monitor in January.
The same company uncovered the risk that the chatbot could be used to write malicious code. “ChatGPT has the potential to significantly alter the cyber threat landscape,” Shykevich said. “Now anyone with minimal resources and zero knowledge in code can easily exploit it to the detriment of his imagination.”