May 1, 2023

Large language models will transform corporate cybersecurity – for good and ill

ChatGPT scares some cybersecurity professionals. Others say LLMs could be an asset for the industry.

By Stephanie Stacey

ChatGPT can write sonnets, screenplays and scams — that is, if you trick it into ignoring its own safety guardrails. This facility for fraud has become a major source of concern for some cybersecurity professionals, who fear the chatbot and others like it could trigger a wave of sophisticated cybercrime. It could also turn out to be an asset for those manning the corporate defences. Experts say large language models (LLMs) could help accelerate efforts to detect data breaches and pinpoint organisational vulnerabilities in advance of an attack — the very same vulnerabilities that cybercriminals might use these LLMs to exploit.

ChatGPT’s rise was meteoric, with the model amassing 100 million active users just two months after launch. Companies are scrambling to avoid being left behind, says Arvind Raman, CISO of phone manufacturer-turned-cybersecurity firm BlackBerry. But many security vendors are confronting an ethical bind in their experiments with turning LLMs into corporate network sentinels. “We don’t want to miss out on an opportunity to leverage this kind of technology but, at the same time, we also have to be balancing the risks that come with it,” says Raman. “That’s the hardest job.”

Firms also don’t want to be on the back foot when it comes to protecting their data. Attackers are improving their tactics with the help of LLMs, says Raman, so cybersecurity professionals will also need to figure out how to use these same tools to level the playing field. That is why BlackBerry, like many other cybersecurity firms, is moving forward only slowly with its experiments in generative AI. Everyone in the industry, says Raman, is in “cautious learning mode.”

LLMs could help shore up corporate cybersecurity — perhaps even helping to guard the metaphorical battlements against cybercriminal LLMs. (Image: Shutterstock)

Large Language Models in cybersecurity

LLMs’ greatest asset for cybersecurity lies in their efficiency. The ability of such foundation models to rapidly process large datasets can, in theory, be leveraged to help organisations swiftly pinpoint incoming threats and existing vulnerabilities. Security researcher Patrick Ventuzelo, founder of cybersecurity services platform Fuzzing Labs, recently used GPT-4 to find zero-day vulnerabilities (undiscovered and unpatched flaws) in snippets of code. “It’s mind-blowing,” says Ventuzelo, who shared his work on YouTube. He remains confident, however, that the LLM is not about to become fully self-sufficient. (“ChatGPT will not take my job,” he pronounced. “I still have some stuff to do.”)
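To make that workflow concrete, here is a minimal sketch of the kind of prompt-driven code audit Ventuzelo describes. It is not his tooling: the model name, the prompt and the deliberately vulnerable snippet are all assumptions for illustration, using the OpenAI Python client as it stood in 2023.

```python
# Minimal sketch: asking a chat LLM to review a code snippet for flaws.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
import openai

# A deliberately vulnerable snippet (classic SQL injection) to audit.
SNIPPET = """
def login(username, password):
    query = "SELECT * FROM users WHERE name = '%s' AND pw = '%s'" % (username, password)
    return db.execute(query)
"""

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a security auditor. List any vulnerabilities "
                    "in the code you are given, with line references."},
        {"role": "user", "content": SNIPPET},
    ],
    temperature=0,  # keep the audit output as repeatable as possible
)

print(response["choices"][0]["message"]["content"])
```

As Ventuzelo’s own caveat suggests, the output of a prompt like this still needs a human to verify before anything is patched or reported.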

The accelerative power of LLMs could also be a major boon for active defence. Raman, for his part, is sceptical when companies say that ransomware is something that can be prevented altogether, rather than mitigated. “I think it’s a matter of whether we can detect it early enough to stop the impact,” says Raman: a task tailor-made, one would hope, for a super-fast, custom-built generative AI program.

The Human Machine Lab at Ontario Tech University has been working on one such system to help organisations detect data breaches faster. The tool, called Chunk-GPT3, uses LLMs to generate fake user credentials, or ‘honeywords.’ These are then merged with real credentials in the hopes that an unknowing hacker might try to use the fake ones — inadvertently triggering an alarm and alerting administrators to the data breach. 
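The checking half of that scheme is simple enough to sketch. Below is a minimal, illustrative version of honeyword-based breach detection in the style of the original design by Juels and Rivest, which work like Chunk-GPT3 builds on; the usernames, passwords and function names are all invented. The position of each user’s genuine password is kept on a separate, hardened server, so an attacker who steals the credential store cannot tell real from fake.

```python
# Minimal sketch of honeyword-based breach detection (illustrative, not
# Chunk-GPT3 itself): decoy credentials sit alongside real ones, and a
# login attempt that matches a decoy signals that the credential store
# has probably been stolen and cracked.
import hashlib

def h(pw: str) -> str:
    return hashlib.sha256(pw.encode()).hexdigest()

# Per-user list of password hashes: one real, the rest generated decoys.
CREDENTIALS = {
    "alice": [h("Tr0ub4dor&3"), h("correct-horse-battery"), h("S3attle!2019")],
}

# Index of the genuine password, held on a separate "honeychecker" server
# so the stolen credential file alone reveals nothing.
HONEYCHECKER = {"alice": 0}

def alert_admins(msg: str) -> None:
    print("ALERT:", msg)  # stand-in for paging / SIEM integration

def attempt_login(user: str, password: str) -> bool:
    hashes = CREDENTIALS.get(user, [])
    if h(password) not in hashes:
        return False  # ordinary failed login, no alarm
    if hashes.index(h(password)) != HONEYCHECKER[user]:
        # A decoy matched: someone is replaying cracked credentials.
        alert_admins(f"Honeyword used for {user}: credential store breached")
        return False
    return True  # genuine credential
```

In Chunk-GPT3, the decoys in that list would come from an LLM, generated to look plausible enough that an attacker who cracks the stolen hashes cannot pick out the real entry.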

“This approach isn’t new,” says Miguel Vargas Martin, the system’s co-creator and a computer science professor at Ontario Tech, but it is one that could undoubtedly benefit from an injection of AI efficiency. The team’s experiment has left the professor encouraged that similar models could be used to augment cybersecurity operations in the future. “Automation is the direction AI is taking us,” says Martin. 

Generative AI tools like ChatGPT are rapidly transforming the cybersecurity industry, even if we don’t yet know where these tools are headed. (Image by Giulio Benzin/Shutterstock)

A double-edged sword 

The same technology also seems to be informing the strategies of cybercriminals. Generative AI can help attackers, even unskilled ones, assemble sophisticated malware and even more sophisticated phishing emails without all the horrendous spelling mistakes that give them away to standard junk filters, explains Raman. Moreover, LLMs can also make it harder to detect unexpected activity. “Generative AI has the potential to emulate real data such as user activity, network traffic, credentials, or profiles — thus circumventing any type of detection mechanism that looks for anomalies,” says Martin.
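To see what Martin means, consider a toy anomaly detector of the sort many defences rely on. The sketch below (features, numbers and thresholds all invented for illustration) flags activity that strays far from a learned statistical baseline; traffic generated to mimic that baseline would sail straight through.

```python
# Toy z-score anomaly detector over hourly login counts (illustrative).
# Generated traffic that reproduces the baseline distribution would pass
# a detector like this unnoticed -- Martin's point in a nutshell.
from statistics import mean, stdev

def train_baseline(history):
    """Learn the mean and spread of 'normal' activity."""
    return mean(history), stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hourly login counts from a normal week (fabricated numbers).
normal_week = [40, 42, 38, 45, 41, 39, 44, 43, 40, 37]
baseline = train_baseline(normal_week)

print(is_anomalous(41, baseline))   # False: blends into the baseline
print(is_anomalous(400, baseline))  # True: a crude spike, easily flagged
```

Crude attacks spike well outside the baseline; the danger Martin describes is generated activity tuned to sit quietly inside it.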

Are we on a path toward IT teams and cybercriminals watching from the sidelines as their respective large language models battle each other to exhaustion? Probably not, says Humayun Qureshi, co-founder of Digital Bucket Company, a data services provider. “AI can help people to do tasks a lot quicker and a lot more efficiently, but I don’t think it can do it single-handedly,” says Qureshi. Raman agrees, cautioning that traditional corporate defences, like employee training, still matter. “The last line of defence is the end user,” he says.

Qureshi and Raman believe that LLMs have left cybersecurity professionals, and anyone keen to protect their data, in a tricky situation. “As they stand today, I think the risks seem to outweigh the benefits,” says Raman. Qureshi warns that some cybersecurity providers might be pursuing artificial intelligence for the wrong reasons, keener to harness a popular buzzword in their branding than to make meaningful improvements to their systems.

Despite his present pessimism, Raman says it’s important not to hold a static view on these technologies. Cybersecurity professionals will ultimately need to figure out how to use LLMs to their advantage — or risk getting left behind. “Eventually there will come a point where Generative AI is not something that we can avoid,” he says. “You’ve got to keep adapting.”

Read more: Here’s how OpenAI’s ChatGPT can be used to launch cyberattacks
