A report from Google Threat Intelligence Group (GTIG) has warned that cybercriminals and state-affiliated actors are increasingly leveraging AI for fraud, hacking, and propaganda campaigns. The findings are based on a comprehensive analysis of how threat actors interacted with Google’s AI-powered assistant, Gemini. The research reveals how advanced persistent threat (APT) groups, cybercriminals, and information operations (IO) actors are using AI to automate phishing scams, spread misinformation, and manipulate models to bypass security controls. While AI has not yet introduced breakthrough cyberattack capabilities, threat actors are using it to refine and scale existing tactics.

“Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume,” wrote the GTIG. “For skilled actors, generative AI tools provide a helpful framework, similar to the use of Metasploit or Cobalt Strike in cyber threat activity. For less skilled actors, they also provide a learning and productivity tool, enabling them to more quickly develop tools and incorporate existing techniques.”

Threat actors exploiting AI for cybercrime, espionage, and disinformation

GTIG’s research identifies a growing number of cybercriminals exploiting AI for business email compromise (BEC), phishing attacks, and malware development. Underground marketplaces are actively selling jailbroken AI models that bypass security restrictions, enabling automated cybercrime.

Illicit AI tools such as FraudGPT and WormGPT have been promoted in underground forums, offering capabilities that include automated phishing, AI-assisted malware creation, and cybersecurity evasion techniques. GTIG observed that cybercriminals are using AI to craft highly deceptive emails, manipulate digital content, and execute fraud schemes at scale.

The report details how state-backed APT groups are exploring AI to aid cyber espionage and reconnaissance. GTIG’s findings indicate that Iranian, Chinese, North Korean, and Russian APT actors have attempted to use AI to analyze vulnerabilities, assist in malware scripting, and conduct reconnaissance on targets.

However, GTIG found no evidence that AI has fundamentally improved the attack capabilities of these groups. APT actors are primarily using AI for automating research, translating materials, and generating basic code rather than developing novel cyberattack techniques. Attempts to override AI safety mechanisms and generate explicitly malicious content were largely unsuccessful.

The report also examines how IO actors are leveraging AI for propaganda and misinformation. GTIG observed that Iranian and Chinese IO groups used AI to refine messaging, generate politically motivated content, and enhance social media engagement strategies. Russian actors explored AI for automating content creation and increasing the reach of disinformation campaigns.

Some groups experimented with AI-generated videos and synthetic images to create more persuasive narratives. While AI has not yet transformed influence campaigns, threat actors are actively testing its potential to scale and refine disinformation tactics.

To counter the growing misuse of AI, Google has reinforced its AI security measures under the Secure AI Framework (SAIF). The company says it has expanded threat monitoring, adversarial testing, and real-time abuse detection to mitigate risks associated with AI-powered threats.
