US intelligence officials said that Russia, China, and Iran are stepping up efforts to produce artificial intelligence (AI) generated content aimed at influencing the 2024 US presidential election. However, these nations are currently struggling to create material that can evade existing detection tools.

During their fourth election-related briefing this year, representatives from the Office of the Director of National Intelligence (ODNI) and the Federal Bureau of Investigation (FBI) indicated ongoing observations of Russian and Iranian operatives using generative AI to mislead US voters and incite discord.

Although the advent of generative AI has enhanced some aspects of these operations, particularly in the translation of content into multiple languages, intelligence officials have characterised generative AI as a “malign influence accelerant” rather than a “revolutionary” instrument.

Despite the high volume of AI-generated propaganda produced by these countries, they have yet to surmount several challenges that inhibit their ability to fully exploit this emerging technology for voter deception.

“The risk to US elections from foreign AI-generated content depends on the ability of foreign actors to overcome restrictions built into many AI tools and remain undetected, develop their own sophisticated models or strategically target and disseminate such content,” said a senior official from the ODNI. “Foreign actors are behind in each of these three areas.”

Russia and Iran struggle to use AI-generated content effectively in cyber operations

While details regarding the specific reasons for these challenges were not disclosed, officials noted that tools designed to detect synthetically manipulated media have been effective at flagging such efforts so far this year. The same official added that AI-generated content often lacks believability, which makes it easier for these tools to identify.

Among the adversaries, Russia has been identified as the most active, generating the largest volume of content across text, audio, imagery, and video.

Iranian operatives have also employed generative AI to produce social media posts and mimic news organisations, targeting both English-speaking and Spanish-speaking voters to polarise opinions on presidential candidates and issues such as the Israel/Gaza conflict.

In addition, China conducted a significant AI influence operation during Taiwan’s elections earlier this year. China is now using AI to shape global perceptions of the country and amplify divisive political issues within the US. However, intelligence officials have indicated that they have not observed China-linked actors actively attempting to influence US elections.

In line with the growth in generative AI tools over the last two years, experts have been working to build software capable of accurately detecting and flagging fake or manipulated media. Given that many detection tools have also been created using AI, experts have cautioned that verifying authentic media may devolve into a continuous cycle, with malicious actors consistently adapting their methods to evade detection.

Thus far, this scenario has not fully unfolded. In countries such as Taiwan, India, and the US, attempts to mislead voters through deepfake media have often been rapidly identified as digital forgeries, raising substantial doubt about their authenticity.

Intelligence officials have refrained from providing specific details regarding the scale or impact of these efforts, indicating that such analysis would require monitoring social media activity protected under First Amendment rights. Nevertheless, US officials have stated they are closely observing indications that bad actors may be enhancing their efforts, whether by developing advanced models or improving content amplification strategies.

Another senior official from the ODNI confirmed that discussions with AI companies are part of these monitoring efforts, particularly regarding tools that could be utilised throughout the lifecycle of a foreign influence campaign. This official explained that dialogues with technology companies centre on the evolving tools, tactics, and procedures of foreign adversaries, as well as on authentication and attribution methods, which provide a useful platform for sharing insights.

Read more: UK, US, and Canada join forces on cybersecurity and AI research