OpenAI’s natural language chatbot ChatGPT is capable of writing code, producing a report on a niche topic and even crafting lyrics for a song. Its success at essay writing has prompted schools to ban its use, and Microsoft is said to be incorporating it into Bing. But security researchers warn it is being put to far more nefarious uses, and the problem is likely to get worse.
Experts from Check Point Research found multiple instances of cybercriminals celebrating their use of ChatGPT to develop malicious tools, warning that it is allowing established hackers to scale up existing projects and newcomers to acquire the necessary skills more quickly than was previously possible.
“I assume that with time, more sophisticated (and conservative) threat actors will also start trying and using ChatGPT to improve their tools and modus operandi, or even just to reduce the required monetary investment,” Sergey Shykevich, threat intelligence group manager at Check Point, told Tech Monitor.
ChatGPT was launched at the end of November 2022 and in less than two months has become an essential part of the workflow for software developers, researchers and other professionals. In its first week it went from zero to millions of regular users.
As with any new technology, given enough time and incentive someone will find a way to exploit it, and Check Point Research says that is exactly what is happening. In underground hacking forums, criminals are using the chatbot to create infostealers and encryption tools, and to facilitate fraud.
The researchers found three recent cases: one recreating known infostealer malware strains, another building a multi-layer encryption tool, and a third writing dark web marketplace scripts for trading illegal goods – all with code generated by ChatGPT.
Watermarking and moderation
Last month researchers from the security company tested whether ChatGPT would produce code that could be put to malicious use, finding it would write executable code and macros that run in Excel. This new report highlights “in the wild” uses of ChatGPT-derived malicious activity.
Tech Monitor asked OpenAI to comment on the findings and how it is working to address malicious use cases, but there was no response at the time of publication. On its page promoting ChatGPT, OpenAI writes: “While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behaviour. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now.”
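The Moderation API OpenAI refers to is a documented, standalone endpoint that classifies text against the company’s usage policies. As a rough illustration, a minimal sketch of how an application might screen prompts with it, using the Python SDK as it existed at the time of writing (the API key and prompt below are placeholders), could look like this:

```python
import openai  # pip install openai (the pre-1.0 SDK current in early 2023)

openai.api_key = "sk-..."  # placeholder; supply a real API key

def is_flagged(text: str) -> bool:
    """Return True if OpenAI's Moderation endpoint flags the text as unsafe."""
    response = openai.Moderation.create(input=text)
    return response["results"][0]["flagged"]

user_prompt = "..."  # whatever the user submitted
if is_flagged(user_prompt):
    print("Blocked: content violates usage policies")
else:
    print("Allowed")  # note OpenAI's own caveat about false negatives
```

As OpenAI’s own statement makes clear, a screen like this is a filter rather than a guarantee.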
Shykevich says OpenAI and other developers of large language model AI systems need to improve their engines to identify potentially malicious requests and implement authentication and authorisation tools for anyone wanting to use the OpenAI engine. “Even something similar to what online financial institutions and payment systems currently use,” he says.
OpenAI is already working on a watermarking tool that would make it easier for security professionals, authorities and professors to identify whether text was written by ChatGPT, although it isn’t clear whether that would work for code.
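OpenAI has not published how its watermark would work, but the statistical-watermarking idea researchers have discussed publicly involves pseudorandomly splitting the vocabulary into a “green list” seeded from the preceding token and nudging generation towards it; a detector then counts how often text lands on the green list. The toy sketch below illustrates only the detection arithmetic, with whole words standing in for real model tokens – it is an assumption-laden illustration, not OpenAI’s scheme:

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Derive a reproducible pseudorandom vocabulary split from the previous
    # token, so the generator and the detector agree without sharing state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def watermark_score(tokens: list[str], vocab: list[str]) -> float:
    # Fraction of tokens found in their predecessor's green list: unmarked
    # text hovers near `fraction`; watermarked text scores noticeably higher.
    if len(tokens) < 2:
        return 0.0
    hits = sum(tok in green_list(prev, vocab) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

Whether such statistical signals would survive in source code, where token choices are tightly constrained by syntax, is one reason it is unclear the approach would work beyond prose.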
ChatGPT: infostealer and ‘training’
Check Point says it analysed several major underground hacking communities for instances referencing ChatGPT or other forms of artificial intelligence-generated coding tools, finding multiple instances of cybercriminals using the OpenAI tool. “As we suspected,” the researchers wrote, “some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all.”
While the tools being built today are “pretty basic”, it is only a matter of time before more sophisticated hackers turn to AI-based tools to scale up their operations, including by creating niche, highly specific attack vectors that would be impractical to write by hand.
One example of these ‘simple tools’ is an infostealer that appeared in a thread titled “ChatGPT – Benefits of Malware” on a popular hacking forum. In the post, the author revealed they had used ChatGPT to recreate malware strains by feeding the AI tool descriptions and write-ups from other publications. They then shared Python-based stealer code that searches for common file types, copies them to a random folder and uploads them to a hardcoded FTP server.
“This is indeed a basic stealer which searches for 12 common file types (such as Microsoft Office documents, PDFs, and images) across the system. If any files of interest are found, the malware copies the files to a temporary directory, zips them, and sends them over the web. It is worth noting that the actor didn’t bother encrypting or sending the files securely, so the files might end up in the hands of 3rd parties as well,” the researchers wrote.
The same hacker shared other ChatGPT projects, including a Java snippet that downloads a common SSH client and runs it using PowerShell. Check Point experts say the individual is likely tech-orientated and was showing less technically capable cybercriminals how to use ChatGPT for their own immediate gain.
Hackers with limited technical skills flock to ChatGPT
Another post, found shortly before Christmas, included a Python script that its creator said was the first he had ever written, admitting he had made it with the help of OpenAI to boost the scope of the attack. The script performs cryptographic operations, made up of a “hodgepodge of different signing, encryption and decryption functions”.
Researchers say the script appears benign on the surface but implements a range of functions, including generating a cryptographic key and encrypting files on a system, and could be adapted to “encrypt someone’s machine completely without any user interaction” for the purpose of ransomware.
“While it seems that [the user] is not a developer and has limited technical skills, he is a very active and reputable member of the underground community. [The user] is engaged in a variety of illicit activities that include selling access to compromised companies and stolen databases. A notable stolen database [the user] shared recently was allegedly the leaked InfraGard database.”
The number of these types of posts appears to be growing, researchers discovered, with hackers also discussing other ways to use AI-based tools to make money quickly, including generating random art with DALL-E 2 and selling it on Etsy, or generating an e-book with ChatGPT and selling it online.
“Cybercriminals are finding ChatGPT attractive,” said Shykevich. “In recent weeks, we’re seeing evidence of hackers starting to use it writing malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point. Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes.”