
ChatGPT boom drives surge in AI-powered malware targeting Facebook business accounts

As the AI chatbot craze grows, cybercriminals are taking advantage. Should platforms do more to block their efforts?

By Claudia Glover

The use of artificial intelligence to spread malware is increasing month by month, with platforms like YouTube and Facebook being used to propagate malicious links via AI-generated content and a fake ChatGPT extension. While the rise of generative AI chatbots like ChatGPT was always likely to be accompanied by a spike in cybercrime, researchers warn that social media sites should be more proactive in policing their platforms for harmful content as hackers become more advanced.

ChatGPT and AI used to lure victims into infostealing scams. (Photo by Chrispictures/Shutterstock)

Both YouTube and Facebook have seen their platforms abused by cybercriminals to target their users. Increasingly these malware campaigns are designed using AI and ChatGPT, making them harder to detect.

“The threat actors are getting so sophisticated that it becomes hard for even well-aware users to distinguish between what’s good and what’s bad,” said Allan Liska, CSIRT at security vendor Recorded Future.

AI and ChatGPT used to propagate malware campaigns on YouTube and Facebook

A new report from security company CloudSEK states that since November 2022 there has been a 200%-300% month-on-month increase in videos containing infostealer malware being uploaded to YouTube.

The videos masquerade as step-by-step guides on how to download expensive software such as Photoshop, Premiere Pro and Autodesk 3DS Max for free. Links to the malware are concealed in the content’s description, and stealers found in the malicious videos include Vidar, RedLine and Raccoon.

AI-generated videos are often used in these campaigns because footage featuring human faces has been found to be more popular, as viewers find it more familiar and trustworthy.

“We have observed that every hour five to 10 ‘crack software’ download videos containing malicious links are uploaded to YouTube,” the report says. “At any given time, if a user searches for a tutorial on how to download a cracked software, these malicious videos will be available.”


In a similar style of attack, cybercriminals are luring in victims using a fake ChatGPT add-on for the Chrome browser. The malicious stealer extension, called “Quick access to Chat GPT”, is promoted in Facebook sponsored posts advertising a quick way to access the popular chatbot. Instead, it runs a malvertising campaign.

The extension does give users access to ChatGPT’s API, but it also harvests huge amounts of information from the browser, such as cookies and credentials.

How the bogus ChatGPT extension works

Once downloaded, the extension becomes an integral part of the browser, allowing it to send requests to any other service, as if the browser owner themselves were administering the commands. “This is crucial as the browser, in most cases, already has an active and authenticated session with almost all your day-to-day services, e.g. Facebook,” explains a report from security company Guardio.

If the victim has a Facebook business account, it will be taken over completely. “By hijacking high-profile Facebook business accounts, the threat actor creates an elite army of Facebook bots and a malicious paid media apparatus. This allows it to push Facebook paid ads at the expense of its victims in a self-propagating worm-like manner,” continues the report.

“Once the victim opens the extension windows and writes a question to ChatGPT, the query is sent to OpenAI’s servers to keep you busy – while in the background it immediately triggers the harvest.”

Tech Monitor has contacted YouTube and Facebook for comment.

The use of AI and ChatGPT by cybercriminals is to be expected, says Liska, but their scams are rapidly increasing in sophistication. “Our advice is always, ‘take a minute to think about what you’re doing. Is that really a ChatGPT application or is it a scam?’,” he says.

But it’s getting harder and harder to identify the fakes, Liska adds. “We’re in a sort of ‘Wild West’ ecosystem where it can be hard to distinguish between what’s illegitimate and what’s real,” he says.

“We need to start holding both software companies and platforms accountable for the bad things that happen on their network, when they allow this kind of malware to propagate on their platform without taking steps to address it.”

Read more: Malware infects more than 14,000 WordPress sites
