December 2, 2022 (updated 9 March 2023)

Will OpenAI’s ChatGPT be used to write malware?

The advanced AI is already writing code based on very simple prompts. Could it be used to generate malicious programs to help hackers?

By Claudia Glover

New OpenAI chatbot ChatGPT could be used to generate malware, some analysts have warned. Artificial intelligence-generated code could have a devastating effect on cybersecurity, as human-written defensive software may not be sufficient to protect against it.

As reported by Tech Monitor yesterday, OpenAI released the ChatGPT chatbot this week. Based on the company’s GPT-3 large language AI model, it has already proved itself adept at completing a wide variety of tasks, from answering customer queries to generating code and writing complex and accurate prose based on simple prompts.
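Indeed, generating code in this way takes only a few lines of script. The snippet below is a hypothetical, benign sketch assuming the openai Python package and the GPT-3 model text-davinci-003 that OpenAI offered at the time; the prompt, placeholder API key and parameter values are illustrative assumptions rather than details from the article.

```python
import openai  # OpenAI's Python client as it existed in late 2022

openai.api_key = "sk-..."  # placeholder: a real API key is required

# A very simple prompt is enough to get working code back.
response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3 model available alongside ChatGPT
    prompt="Write a Python function that checks whether a number is prime.",
    max_tokens=256,
    temperature=0,
)

print(response["choices"][0]["text"])  # the generated code, returned as text
```

The same interface will attempt whatever prompt it is given, which is what makes the barrier to misuse so low.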

The AI’s potential uses could also extend to the sphere of cybercrime. One threat intelligence analyst and malware reverse-engineer, known on Twitter as @lordx64, claims to have used the chatbot to generate new malware.

“You can generate post-exploitation payloads using openAI and you can be specific on how/what payloads you should do. This is the cyberwar I signed up for,” he announced in a tweet.

Could ChatGPT generate its own malware?

Artificial intelligence is already widely used in cybersecurity to help spot and respond to attacks quickly. Could ChatGPT take this process further and be used to generate code? Dr Raj Sharma, lead consultant in artificial intelligence and cybersecurity at the University of Oxford, believes it could.

“This is pretty basic code, but that can be exploited,” Dr Sharma told Tech Monitor. “One of the things that AI is good for is automation. If the hackers can train a chatbot to create [code] like this, it will keep learning, so it could be possible for the hackers to then own a learning-based tool for hacking.”

But while ChatGPT is easy to use at a basic level, manipulating it to generate powerful malware may require technical skill beyond that of many hackers, argues Bharat Mistry, technical director at Trend Micro. “We’re seeing more and more use of these chatbots and there’s more intelligence behind them,” Mistry says. “[But] the sophistication of the cybercriminal now would have to step up again because they would need to know how to tweak the engine to get it to do what they need it to do. Then the actual campaign is a much more sophisticated campaign.”

This level of knowledge means AI-generated malware is likely to be the preserve of state-sponsored hackers running cyberespionage attacks, Mistry says. “It’s going to take a very sophisticated gang to do something like this,” he says. “Tools like these in the wrong hands can be dangerous.”

What could you do to stop AI-generated malware?

The potential for AI systems like ChatGPT to generate racist material and hate speech has been much discussed, and OpenAI has published a paper acknowledging that its systems could be used to lower the cost of disinformation campaigns.

Controls can be put in place to stop this kind of offensive material being generated, Sharma says. “If there are too many requests going on or some kind of message being played many times, we can put a check on that,” he says. “The machine learns it can’t behave like that.”
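The kind of check Sharma describes, flagging too many requests or the same message replayed repeatedly, resembles a simple request throttle. The Python sketch below is illustrative only; it is not OpenAI’s actual safety mechanism, and the class name and thresholds are assumptions.

```python
import time
from collections import deque

class RequestThrottle:
    """Block a client that sends too many requests inside a time window."""

    def __init__(self, max_requests=10, window_seconds=60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.timestamps = deque()  # times of recently accepted requests

    def allow(self) -> bool:
        now = time.monotonic()
        # Forget requests that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            return False  # too many recent requests: refuse this one
        self.timestamps.append(now)
        return True

throttle = RequestThrottle(max_requests=5, window_seconds=10.0)
for i in range(8):
    print(i, "allowed" if throttle.allow() else "blocked")  # last three blocked
```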

For malware it would be a different story, he says, and the only way to combat malicious code created by an AI system like ChatGPT would be to use AI to create the protection. “If there is some kind of hacking tool that uses AI, then we have to use AI to understand its behaviour,” Sharma says. “It can’t be done with the normal traditional security controls.”

Mistry agrees AI will be needed as a defensive measure in this scenario, as AI is proficient at creating things that are “polymorphic”, capable of changing shape to adapt to each interaction with a system. “An AI engine could do that rapidly at speed and scale, something that a human couldn’t do,” he adds.
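A toy example, not drawn from the article, shows why the traditional, signature-based controls Sharma mentions struggle against polymorphic code: two variants that behave identically can hash to entirely different signatures.

```python
import hashlib

# Two snippets with identical behaviour but different bytes: a trivial
# stand-in for a polymorphic program whose copies mutate their own text.
variant_a = "def run():\n    return sum(range(10))\n"
variant_b = (
    "def run():\n"
    "    total = 0\n"
    "    for i in range(10):\n"
    "        total += i\n"
    "    return total\n"
)

# A signature-based scanner compares file hashes, so every mutation evades it.
print(hashlib.sha256(variant_a.encode()).hexdigest())
print(hashlib.sha256(variant_b.encode()).hexdigest())  # a different signature

# Behavioural analysis, the approach Sharma and Mistry point to, instead
# observes what the code does: both variants produce the same result.
ns_a, ns_b = {}, {}
exec(variant_a, ns_a)
exec(variant_b, ns_b)
print(ns_a["run"]() == ns_b["run"]())  # True: same behaviour, new signature
```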

Read more: AI will extend the scale and sophistication of cybercrime
