OpenAI's new chatbot ChatGPT could be used to generate malware, some analysts have warned. Artificial intelligence-generated code could have a devastating effect on cybersecurity, as human-written defensive software may not be sufficient to protect against it.

As reported by Tech Monitor yesterday, OpenAI released the ChatGPT chatbot this week. Based on the company’s GPT-3 large language model, it has already proved adept at a wide variety of tasks, from answering customer queries to generating code and writing complex, accurate prose from simple prompts.

The AI’s potential uses could also extend to the sphere of cybercrime. One threat intelligence analyst and malware reverse-engineer, known on Twitter as @lordx64, claims to have used the chatbot to generate new malware.

https://twitter.com/lordx64/status/1598023663328014336

“You can generate post-exploitation payloads using openAI and you can be specific on how/what payloads you should do. This is the cyberwar I signed up for,” he announced in a tweet.
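ChatGPT itself launched without a public API, but the GPT-3 models it is built on can already be driven programmatically, which is what makes automated code generation at scale plausible. A minimal sketch, using OpenAI's completion endpoint as it existed at the time and a deliberately benign prompt in place of anything malicious (the API key is a placeholder):

```python
import openai  # pip install openai (the pre-v1 interface is shown)

openai.api_key = "sk-..."  # placeholder: supply your own key

# A harmless stand-in for the kind of code-generation request the
# tweet describes; the model returns source code as plain text.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Write a Python function that walks a directory tree and "
        "returns the paths of all files it finds."
    ),
    max_tokens=256,
    temperature=0.2,
)

print(response["choices"][0]["text"])
```

The point is less the specific model than the workflow: a short natural-language prompt in, working code out, and nothing stopping that loop from being scripted.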

Could ChatGPT generate its own malware?

Artificial intelligence is already widely used in cybersecurity to help spot and respond to attacks quickly. Could ChatGPT take this process further and be used to generate malicious code? Dr Raj Sharma, lead consultant in artificial intelligence and cybersecurity at the University of Oxford, believes it could.

“This is pretty basic code, but that can be exploited,” Dr Sharma told Tech Monitor. “One of the things that AI is good for is automation. If the hackers can train a chatbot to create [code] like this, it will keep learning, so it could be possible for the hackers to then own a learning-based tool for hacking.”

But though ChatGPT is easy to use at a basic level, manipulating it so that it is able to generate powerful malware may require technical skill beyond that of many hackers, argues Bharat Mistry, technical director at Trend Micro. “We’re seeing more and more use of these chatbots and there’s more intelligence behind them,” Mistry says. “[But] the sophistication of the cybercriminal now would have to step up again because they would need to know how to tweak the engine to get it to do what they need it to do. Then the actual campaign is a much more sophisticated campaign.”

This level of knowledge means it is likely to be the preserve of state-sponsored hackers running cyberespionage attacks, Mistry says. “It’s going to take a very sophisticated gang to do something like this,” he says. “Tools like these in the wrong hands can be dangerous.”

What could you do to stop AI-generated malware?

The potential for AI systems like ChatGPT to generate racist material and hate speech has been much discussed, and OpenAI has published a paper acknowledging that its systems could be used to lower the cost of disinformation campaigns.

Controls can be put in place to stop this kind of offensive material being generated, Sharma says. “If there are too many requests going on or some kind of message being played many times, we can put a check on that,” he says. “The machine learns it can’t behave like that.”
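As a rough illustration of the kind of check Sharma describes (the class and thresholds here are hypothetical, not OpenAI's actual controls), a service can track each client's recent requests in a sliding window and refuse to answer once a quota is exceeded:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Block a client once it exceeds max_requests in any rolling window."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.history: dict[str, deque] = {}

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        window = self.history.setdefault(client_id, deque())
        # Discard request timestamps that have aged out of the window.
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        if len(window) >= self.max_requests:
            return False  # too many requests: refuse and flag for review
        window.append(now)
        return True

limiter = SlidingWindowLimiter(max_requests=5, window_seconds=60)
print(limiter.allow("user-123"))  # True until the quota is used up
```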

For malware it would be a different story, he says, and the only way to combat malicious code created by an AI system like ChatGPT would be to use AI to create the protection. “If there is some kind of hacking tool that uses AI, then we have to use AI to understand its behaviour,” Sharma says. “It can’t be done with the normal traditional security controls.”
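One way to picture that (a sketch using made-up process features, not a production detector) is behaviour-based anomaly detection: train a model on what normal activity looks like, then flag anything that deviates, rather than matching known signatures:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-process feature vectors: [files written/min,
# outbound connections/min, registry edits/min, child processes].
rng = np.random.default_rng(0)
baseline = rng.poisson(lam=[3, 2, 1, 1], size=(500, 4))

# Fit on normal behaviour only; the forest isolates outliers.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# A process that suddenly does far more of everything stands out,
# whatever its code looks like on disk.
suspect = np.array([[40, 55, 20, 12]])
print(detector.predict(suspect))  # -1 means anomalous, 1 means normal
```

Because the detector scores behaviour rather than bytes, it does not care whether the malicious code was written by a human or by a model.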

Mistry agrees AI will be needed as a defensive measure in this scenario, as AI is proficient at creating code that is “polymorphic”, capable of changing shape to adapt to each interaction with a system. “An AI engine could do that rapidly at speed and scale, something that a human couldn’t do,” he adds.
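A toy example makes the defensive problem concrete: two encodings of the same payload hash to unrelated values, so a scanner keyed to one variant's fingerprint never recognises the other (the XOR encoding here is a deliberately simplistic stand-in for a real polymorphic engine):

```python
import hashlib

payload = b"print('hello')"  # stands in for any payload

def xor_encode(data: bytes, key: int) -> bytes:
    """Same content, different on-disk bytes for each key."""
    return bytes(b ^ key for b in data)

variant_a = xor_encode(payload, 0x41)
variant_b = xor_encode(payload, 0x7F)

# A hash- or signature-based scanner sees two unrelated files.
print(hashlib.sha256(variant_a).hexdigest()[:16])
print(hashlib.sha256(variant_b).hexdigest()[:16])
```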

Read more: AI will extend the scale and sophistication of cybercrime