
AI-powered disinformation campaigns could trigger widespread bank withdrawals in the UK, with 60.8% of individuals exposed to such content indicating they might move their funds, new research suggests. According to a study jointly authored by Say No to Disinfo and Fenimore Harper Communications, every £10 spent on targeted digital ads could drive up to £1m in deposit withdrawals, underlining how cheaply such operations could destabilise financial institutions.
Polling of 500 UK residents found that 33.6% were extremely likely and 27.2% somewhat likely to move their money after seeing AI-generated financial misinformation. Based on these findings, the researchers projected that just 1,000 such advertisements could prompt at least 405 individuals to withdraw their funds. Given the average UK bank account balance of £8,267, that equates to at least £3.3m in deposits being moved, a figure that could rise to £10.7m once social sharing effects are factored in.
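As a rough illustration of the arithmetic behind these projections, the sketch below reproduces the headline figures. The conversion rate, withdrawal count, and average balance come from the study; the social-sharing multiplier is not stated in the report, so the derived factor shown here is an assumption for illustration only.

```python
# Rough reconstruction of the study's projection arithmetic.
# Reported figures: 33.6% "extremely likely" + 27.2% "somewhat likely" = 60.8%,
# at least 405 withdrawals per 1,000 ads, and an average UK balance of £8,267.
ads_shown = 1_000
withdrawals = 405            # lower-bound number of respondents acting per 1,000 ads
avg_balance = 8_267          # average UK bank account balance (GBP)

direct_outflow = withdrawals * avg_balance
print(f"Direct outflow: £{direct_outflow:,}")        # ≈ £3.35m ("at least £3.3m")

# The £10.7m figure assumes amplification through social sharing; the report does
# not give the multiplier, so this back-calculated factor is illustrative only.
sharing_multiplier = 10_700_000 / direct_outflow
print(f"Implied sharing multiplier: {sharing_multiplier:.1f}x")
```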
The report detailed the mechanisms behind these campaigns, which use AI to generate misleading headlines and fabricate narratives about bank instability. These messages are disseminated through doppelgänger websites that impersonate credible news sources, mass automated social media posts, and targeted advertisements. In one test case, researchers found that 1,000 tweets could be generated in under a minute, allowing narratives to spread rapidly and gain credibility through repeated exposure.
Cyber operations play a key role in amplifying financial disinformation, according to the report. Hackers can gain access to customer data, enabling more precise targeting of potential victims, while bot networks artificially boost misleading content, making it harder for banks to respond effectively. The collapse of First Republic Bank in 2023 was cited as an example of how online manipulation campaigns, driven by bot networks and coordinated efforts, can accelerate a bank’s downfall.
The study found that financial institutions remain largely unprepared for AI-driven influence operations, as most focus their defences on cybersecurity threats rather than disinformation risks. The report urged banks to implement real-time social media monitoring, invest in disinformation specialists, and integrate threat intelligence with transaction tracking to detect early signs of a bank run.
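A minimal sketch of what integrating social media monitoring with transaction tracking might look like is shown below. All names, thresholds, and data feeds here are hypothetical assumptions rather than anything prescribed in the report: the idea is simply to flag when a spike in hostile chatter about a bank coincides with an unusual rise in withdrawal requests.

```python
from dataclasses import dataclass

@dataclass
class WindowStats:
    """Hourly counts for one bank (hypothetical data feeds)."""
    negative_mentions: int      # posts flagged as claiming the bank is failing
    withdrawal_requests: int    # outbound transfer/withdrawal instructions

def bank_run_alert(current: WindowStats, baseline: WindowStats,
                   mention_factor: float = 5.0,
                   withdrawal_factor: float = 2.0) -> bool:
    """Alert when a surge in hostile chatter coincides with a surge in withdrawals.
    Thresholds are illustrative, not recommendations."""
    mention_spike = current.negative_mentions > mention_factor * max(baseline.negative_mentions, 1)
    withdrawal_spike = current.withdrawal_requests > withdrawal_factor * max(baseline.withdrawal_requests, 1)
    return mention_spike and withdrawal_spike

# Example: a 10x jump in hostile posts plus a 3x jump in withdrawals trips the alert.
baseline = WindowStats(negative_mentions=40, withdrawal_requests=1_200)
current = WindowStats(negative_mentions=400, withdrawal_requests=3_600)
print(bank_run_alert(current, baseline))   # True
```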
Regulators were also called upon to take pre-emptive action, including conducting sector-wide risk assessments, developing contingency plans, and fostering closer collaboration between banks, media platforms, and oversight agencies. The research warns that the cost and complexity of AI-generated bank run campaigns have significantly decreased, making them accessible not only to state actors but also to financially motivated groups, activist organisations, and even disgruntled ex-employees.
Regulatory warnings on AI-driven disinformation intensify
Financial authorities have increasingly voiced concerns over the risks posed by AI-driven disinformation. In 2023, the Bank of England warned that rapid advancements in AI and machine learning could create systemic risks to the UK’s financial system, calling for enhanced monitoring and regulatory frameworks.
Similarly, a report by the World Economic Forum identified AI-driven misinformation as one of the most immediate threats to global economic stability. The study highlighted the potential for AI-generated disinformation to disrupt not only financial markets but also political processes, further complicating efforts to mitigate economic risks.
Last month, a report from the Google Threat Intelligence Group (GTIG) raised alarms about the increasing use of AI by cybercriminals and state-backed actors for fraudulent activities, hacking, and spreading propaganda. The findings are based on an in-depth analysis of how these threat groups have been interacting with Google’s AI-powered assistant, Gemini.
The research reveals that advanced persistent threat (APT) groups, cybercriminals, and information operations (IO) actors are leveraging AI to automate phishing schemes, disseminate false information, and manipulate AI models to bypass security systems.