Russian cybercriminals are repeatedly trying to find new ways to bypass the restrictions in place to prevent them from accessing OpenAI’s powerful chatbot ChatGPT. Security researchers have discovered multiple instances of hackers attempting to bypass IP, payment card and phone number limitations.
Since its launch in November 2022, ChatGPT has become an essential workflow tool for developers, writers and students alike. It has also proved a useful addition to the cybercriminal’s arsenal, with evidence of hackers using it to write malicious code and improve phishing emails. Its potential for misuse has led OpenAI to set limits on how the tool can be deployed, citing the “interest of hackers in ChatGPT to scale malicious activity” faster than would otherwise be possible. The service is also geo-blocked to stop users in Russia from accessing it.
But scouring underground hacking forums, researchers from Check Point Software discovered multiple instances of Russian hackers discussing ways to circumvent these protections. In one example, a group from Russia asks how to use a stolen payment card to pay for an OpenAI account and access the system through the API if they can’t get into ChatGPT the regular way. ChatGPT itself is currently a free “research preview”, whereas the API charges for the tokens consumed during text and code-generation sessions.
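For context, the API route the forum posters describe is a metered, pay-per-token service, distinct from the free ChatGPT web interface. Below is a minimal sketch of what a legitimate API call looked like at the time, using the openai Python library; the model name, prompt and key shown are illustrative assumptions, not details taken from the forum posts.

```python
# Illustrative sketch only: a minimal request to OpenAI's paid completions API
# (openai Python SDK v0.x, contemporary with this article). Unlike the free
# ChatGPT web preview, every request here is billed against tokens used.
import openai

openai.api_key = "sk-..."  # placeholder key; real keys require a paid account

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model exposed via the API
    prompt="Summarise the benefits of large language models in two sentences.",
    max_tokens=100,
)

# The response includes a usage breakdown, which is what OpenAI bills on.
print(response["choices"][0]["text"])
print(response["usage"])  # prompt_tokens, completion_tokens, total_tokens
```

The usage field in the response is the billing hook: each call reports exactly how many tokens the prompt and completion consumed, which is why API access, unlike the research preview, requires a valid payment method.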
Sergey Shykevich, threat intelligence group manager at Check Point Software Technologies, says it isn’t particularly difficult to bypass OpenAI’s restriction measures. “Right now, we are seeing Russian hackers already discussing and checking how to get past the geo-fencing to use ChatGPT for their malicious purposes,” he says.
“We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations. Cybercriminals are growing more and more interested in ChatGPT, because the AI technology behind it can make a hacker more cost-efficient.”
The forums also feature a number of Russian-language tutorials covering semi-legal online SMS services and how to use them to register for ChatGPT, making it appear the user is in a country where the service isn’t geo-blocked.
One section of the forum explains that hackers simply need to buy a virtual phone number, at a cost of around two cents, and use it to receive a verification code from OpenAI. These temporary phone numbers can come from anywhere in the world, and new numbers can be generated as needed.
Infostealers and malicious use of ChatGPT
The news comes off the back of earlier research that found cybercriminals were posting examples of how to make use of ChatGPT for illicit activities on these same hacker forums. This includes the creation of infostealers that are currently “pretty basic” but are likely to get more advanced as AI tools become more widely used.
One example of these ‘simple tools’ is an infostealer that appeared in a thread titled “ChatGPT – Benefits of Malware” on a popular hacking forum. In the post, the author revealed they had used ChatGPT to recreate malware strains described in other publications by feeding the tool those descriptions and write-ups. They then shared Python-based stealer code that searches for common file types, copies them to a random folder and uploads them to a hardcoded FTP server.
“Cybercriminals are finding ChatGPT attractive,” Shykevich says. “In recent weeks, we’re seeing evidence of hackers starting to use it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point. Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes.”
OpenAI says it takes action to moderate the use of ChatGPT, including restrictions on the types of requests that can be made, but some “fall through the cracks”. There was also evidence of prompts being used to “trick” ChatGPT into providing potentially harmful code examples under the guise of research or fiction.
ChatGPT and the potential for misinformation
The news comes as OpenAI confirmed it is working with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory to examine how large language models such as GPT-3, the model underpinning ChatGPT, could be used to spread disinformation.
The researchers found the model could drive down the cost of running influence operations, placing them within reach of actors who previously lacked the resources of the largest players. “Likewise, propagandists-for-hire that automate production of text may gain new competitive advantages,” the report notes.
Access to such models could also change attacker behaviour, increasing the scale of campaigns and enabling more personalised and targeted content than the cost of manual creation would otherwise allow. Finally, the researchers found these text-generation tools can produce more impactful and persuasive messaging than much human-generated propaganda, where a state-backed operator often lacks the requisite linguistic or cultural knowledge of their target.
“Our bottom-line judgement is that language models will be useful for propagandists and will likely transform online influence operations,” researchers from OpenAI declared. “Even if the most advanced models are kept private or controlled through application programming interface (API) access, propagandists will likely gravitate towards open-source alternatives and nation-states may invest in the technology themselves.”