Hackers are becoming more successful in their cyberattacks thanks to recent advances in AI and the newfound availability of automation tools like OpenAI’s ChatGPT, says NordVPN’s chief technology officer, Marijus Briedis. In a recent statement, Briedis observed that the ability of generative AI to create realistic forms, documents and emails that mirror a company’s style is making it harder to distinguish malicious content from the real thing.
Since the launch of ChatGPT by OpenAI in November last year, companies and individuals alike have looked to capitalise on the potential of general-purpose AI. Microsoft, Google and Salesforce have deployed it throughout their products, and the cybersecurity industry is using it to monitor network traffic and create early-warning systems.
A recent report by IBM found that AI-powered security tools significantly reduce the cost and impact of a data breach. Big Blue found that a breach at an organisation without security AI cost an average of £3.4m, while one at an organisation using it cost about £1.8m.
The problem, says NordVPN’s Briedis, is that hackers have access to the same tools, as well as other AI models. He said the number of cyberattacks being detected had doubled since ChatGPT launched in November last year, and that they had become more sophisticated through the use of AI.
“Hackers learned how to use AI to increase the capacity of their work and make their job easier, quicker and more effective,” Briedis said in a statement released earlier today. “The utilisation of AI tools has facilitated the automation of a significant portion of phishing attacks.” This, NordVPN’s CTO concluded, is likely only to escalate, increasing the frequency and severity of breaches.
Risk from employees
Cybercriminals are using AI in two main ways: to create higher-quality and more personalised phishing content, and to create malware code more closely adapted to the system they are trying to break into. Additionally, the newfound ability of hackers to use large language models (LLMs) to adapt documents or write original-sounding emails with real company data in real time has made it harder for cybersecurity experts to flag such materials as fakes. As such, these items are more likely to be trusted, with more users unwittingly following malicious links or downloading malware.
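Even when the prose of an AI-written phishing email is flawless, some mechanical signals survive. One classic check is whether a link’s visible text names a different domain than its underlying href. The sketch below is illustrative only; the email snippet and domain names are invented.

```python
# A minimal sketch of one heuristic defence against polished phishing emails:
# flag HTML links whose visible text shows one domain but whose href points
# somewhere else. The sample email below is invented for illustration.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkCollector(HTMLParser):
    """Collect (href, visible text) pairs for every <a> tag."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []  # list of (href, text) tuples

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def suspicious_links(html_body):
    """Return links whose visible text names a different domain than the href."""
    collector = LinkCollector()
    collector.feed(html_body)
    flagged = []
    for href, text in collector.links:
        href_domain = urlparse(href).netloc.lower()
        # Only compare when the anchor text itself looks like a URL or domain.
        if "." in text and href_domain and href_domain not in text.lower():
            flagged.append((href, text))
    return flagged


# Hypothetical phishing snippet: the text shows one domain, the href another.
email_html = '<p>Reset here: <a href="https://evil.example.net/login">https://portal.bigcorp.com</a></p>'
print(suspicious_links(email_html))
# [('https://evil.example.net/login', 'https://portal.bigcorp.com')]
```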
Another problem, however, may arise from employees freely inputting company data into systems like ChatGPT.
“As AI systems become more prevalent, there is an increased risk of mishandling or misusing sensitive data,” said Briedis. If an employee uses a public AI tool to write a report from confidential data, that data could theoretically be used to further train and fine-tune that AI model. This already seems to be happening.
According to a report from Cyberhaven published earlier this year, 11% of data pasted into ChatGPT by employees looking to save time was confidential corporate information. In theory, that data could later be accessible to anyone using the model, as it may be absorbed into the training data of future versions. Hackers could then use such a model to craft even more convincing cyberattacks.
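One mitigation companies have adopted is screening text before it leaves the building. A minimal sketch of such a pre-submission filter follows; the regex patterns and internal codenames are invented for illustration, and a real deployment would be considerably more thorough.

```python
# A minimal sketch of a pre-submission check a company might run before text
# is sent to a public AI tool: redact anything that looks like an email
# address, an API key, or an internal project codename. All patterns and
# codenames here are invented for illustration.
import re

# Hypothetical codenames; a real deployment would load these from config.
INTERNAL_CODENAMES = ["Project Nightjar", "Atlas-Q3"]

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED EMAIL]"),
    (re.compile(r"\b(?:sk|key|tok)[-_][A-Za-z0-9]{16,}\b"), "[REDACTED KEY]"),
]


def redact(text):
    """Replace confidential-looking spans before the text leaves the company."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    for name in INTERNAL_CODENAMES:
        text = text.replace(name, "[REDACTED CODENAME]")
    return text


draft = "Summarise Project Nightjar results; contact jane.doe@bigcorp.com, key sk-abcdef1234567890XYZ."
print(redact(draft))
# Summarise [REDACTED CODENAME] results; contact [REDACTED EMAIL], key [REDACTED KEY].
```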
“Once you get a phishing email with information that is supposed to be confidential, there is a big chance that you will fall into the trap,” argued Briedis.
OpenAI and other AI labs offer enterprise solutions in which they refrain from using customers’ input data to retrain their models. There are also a number of enterprise-friendly offerings from companies like Databricks and IBM that are trained on company data and accessible only to that company’s employees. This addresses the risk of a company’s confidential data surfacing on a public platform, but there are other ways hackers are utilising AI.
Text, images and reports are not the only items AI has improved, however, according to Briedis. Hackers are also using LLMs to hone the code they use to steal information in a ransomware attack or shut down systems.
In so doing, cybercriminals can adapt and personalise malware much faster, as well as automate tasks such as reconnaissance, monitoring a compromised system for changes or for attempts to remove the malware. Hackers can also use AI to scale up attacks, directing large automated botnets at brute-force attacks on corporate systems.
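The standard countermeasure to automated brute-force attempts is equally mechanical: count failed logins per source within a sliding window and lock out anything that exceeds a threshold. A minimal in-memory sketch follows; the thresholds are illustrative, and a production system would persist this state behind its authentication layer.

```python
# A minimal sketch of brute-force lockout: track failed logins per source IP
# within a sliding time window and lock out sources that exceed a threshold.
# Thresholds and the in-memory store are illustrative only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # consider only the last five minutes
MAX_FAILURES = 5       # allow at most five failures in that window

_failures = defaultdict(deque)  # source IP -> timestamps of recent failures


def record_failure(source_ip, now=None):
    now = now or time.time()
    q = _failures[source_ip]
    q.append(now)
    # Drop failures that have aged out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()


def is_locked_out(source_ip, now=None):
    now = now or time.time()
    q = _failures[source_ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) >= MAX_FAILURES


# Simulate a bot hammering one account from a single address.
for _ in range(6):
    record_failure("203.0.113.7")
print(is_locked_out("203.0.113.7"))   # True
print(is_locked_out("198.51.100.2"))  # False
```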
“With this kind of automation, hackers are seriously challenging traditional cybersecurity tools and exploiting their vulnerabilities,” Briedis said.
As such, NordVPN’s CTO recommends that companies ensure employees double-check URLs, verify senders and email content before opening files or clicking links, and keep software updates and security software current.
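The “verify the sender” advice can itself be partly automated. The sketch below checks a From-address domain against a known-good list and flags near misses, which often indicate a lookalike domain; the trusted domains and addresses are invented for illustration.

```python
# A minimal sketch of sender verification: check the From-address domain
# against a known-good list and flag near misses, which often indicate a
# lookalike domain. The trusted list and sample addresses are invented.
import difflib
from email.utils import parseaddr

TRUSTED_DOMAINS = {"bigcorp.com", "partner.example.org"}


def classify_sender(from_header):
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    # A domain that is almost, but not exactly, a trusted one is a red flag.
    if difflib.get_close_matches(domain, TRUSTED_DOMAINS, n=1, cutoff=0.8):
        return "lookalike - treat as phishing"
    return "unknown"


print(classify_sender("Jane Doe <jane@bigcorp.com>"))  # trusted
print(classify_sender("IT Desk <help@bigc0rp.com>"))   # lookalike - treat as phishing
print(classify_sender("News <digest@example.net>"))    # unknown
```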