Digital forgeries surged by 244% in 2024, with deepfake attacks occurring every five minutes, according to a new report by the Entrust Cybersecurity Institute. The firm’s 2025 Identity Fraud Report revealed the increasing sophistication of cybercriminal methods and the challenges faced by organisations in combating AI-driven fraud.

Based on data collected by Entrust between 1 September 2023 and 31 August 2024 through its Onfido digital identity verification solution, the study identifies a major shift toward AI-assisted tactics such as digital document manipulation and deepfake technology. These methods, says the cybersecurity organisation, are being employed at unprecedented scales to target organisations during digital onboarding processes.

“The drastic shift in the global fraud landscape, marked by a significant rise in sophisticated, AI-powered attacks, is a warning that all business leaders must heed,” said Entrust’s senior fraud specialist Simon Horswell. “This year’s data underscores this alarming trend, highlighting how fraudsters are rapidly evolving their techniques. These threats are pervasive, touching every facet of business, government, and individuals alike.”

Digital forgeries surpass physical counterfeiting

Entrust’s new report found that digital forgeries accounted for 57% of document-related fraud cases in 2024, surpassing physical counterfeiting for the first time. This marks a dramatic shift, with digital forgeries rising by 1,600% since 2021, when physical counterfeits were still the dominant method. National ID cards emerged as the most targeted document type, representing 40.8% of global attacks.

The widespread availability of generative AI tools and “as-a-service” platforms has significantly lowered the barriers for cybercriminals, enabling the creation of sophisticated forgeries. These platforms also allow fraudsters to share techniques, further accelerating the adoption of digital document fraud.

Deepfake attacks, enabled by advances in AI, were documented at a rate of one every five minutes in 2024. These hyper-realistic forgeries were used primarily for account takeovers, fraudulent account openings, and phishing scams. The report highlights the use of face-swap applications and other generative AI tools to replicate human features with high accuracy, bypassing traditional biometric verification systems.

The scalability of deepfake fraud has heightened risks for organisations globally, making it increasingly difficult to differentiate legitimate users from fraudulent actors.

The financial sector was the hardest hit, with cryptocurrency platforms facing the highest rate of fraud attempts: activity in this segment rose from 6.4% of cases in 2023 to 9.5% in 2024. Lending and mortgage services accounted for 5.4% of cases, while traditional banking saw a 13% rise in fraudulent onboarding attempts.

Entrust also found that inflationary pressures in 2024 created additional opportunities for fraudsters, particularly in areas like lending and mortgages, where consumer vulnerability was higher.

A separate survey conducted recently by the US-based National Cybersecurity Alliance (NCA) and CybSafe has highlighted growing concerns about the role of generative AI in cybersecurity. According to the survey, 65% of respondents expressed unease over AI-related cybercrime.

The findings, based on responses from 7,012 individuals across seven countries, point to a significant gap between the rising concern and the level of preparedness in addressing AI-enabled cyber threats. The report underscores the need for organisations and individuals to prioritise education and robust security measures to mitigate the risks posed by advanced AI-driven attacks.
