February 1, 2024

Confidence in deepfake detection wavers as 30% of firms predicted to shun facial biometrics in isolation by 2026

Perceived ability of fraudsters to evade deepfake detection is prompting CISOs to consider pairing facial biometrics with other forms of ID verification like device profiling, according to new research from Gartner.

By Greg Noone

Up to a third of enterprises will stop using deepfake detection methods in isolation by 2026, according to new research from Gartner. Such techniques are often used in onboarding processes that require ID verification using facial biometrics, where a photo or video of the user is compared against an official identity document. However, the increasing scale and sophistication of deepfake attacks aimed at undermining liveness checks during such ID verification is forcing businesses to consider augmenting facial biometrics with other processes, such as behavioural analytics or device detection.

Deepfake detection is currently a flourishing area of research in AI, with multiple startups dedicated to finding new ways to blunt the adversarial attacks currently used by synthetic image creation software to spoof ID verification systems. However, it seems that few of those enterprises aware of the arms race between deepfake detectors and hackers believe that the former are winning, argued Gartner analyst Akif Khan. “Current standards and testing processes to define and assess PAD [presentation attack detection] mechanisms do not cover digital injection attacks using the AI-generated deepfakes that can be created today,” said Khan.

A new study from Gartner contends that by 2026 a third of enterprises are unlikely to use deepfake detection in isolation to combat the threat of ID fraud. (Photo by Shutterstock)

Enterprises increasingly uneasy about the integrity of deepfake detection

Gartner’s predictions about enterprise unease over the capabilities of current deepfake detection applications were based on hundreds of interviews with ID solution vendors and their end users, most of them in the financial services sector, Khan told Tech Monitor. Many of these conversations with IT managers, HR professionals and, increasingly, CISOs in the latter group revealed growing anxiety about the scale and sophistication of deepfake attacks generally, and about the reliability of current ID verification technology for remote onboarding.

“Other clients who were making buying decisions were then questioning [whether] they should be investing in this technology, at this time,” Khan told Tech Monitor. Concern among vendors, too, about the increasing sophistication of deepfake attacks rose throughout 2023. Khan recalls being shown one example of an attempted fraud involving the use of six different images of individuals of different ethnicities and genders, with the only clue that all were deepfakes being the placement of three or four hairs on the forehead of each.

Informal analysis from Gartner indicates that 15% of fraudulent identity presentations involve attempts to undermine deepfake detection. These are usually so-called “presentation attacks”, wherein a static deepfake image on a screen is offered up for verification by a camera during the ID confirmation process. However, Gartner’s research indicates that “injection attacks,” involving the insertion of a live image within the ID verification application, increased by up to 200% in the first nine months of 2023. Such attacks, says Khan, are “harder to carry out, but also potentially harder to detect, as well.”

Enterprises would be well advised to pair deepfake detection methods with other types of ID verification, like behavioural analytics. (Photo by Shutterstock)

Deepfake detection unlikely to be used in isolation in future

Multiple forms of deepfake detection have been proposed in recent years, though most mainstream solutions fall into one of two categories: active liveness detection, wherein the user is asked to respond to a physical prompt like turning their head, and passive liveness detection, which looks for signs of life in a still image, such as the flow of blood in capillaries close to the surface of the skin. Both have their merits, says Khan, though preventing all deepfake attacks from succeeding will probably require pairing facial verification with other techniques like device profiling.

“In that instance, you might then detect an attack, but it may not be because you detected the deepfakes,” says Khan. Rather, it may be “because you’ve detected something else around that which makes you aware of suspicious behaviour.”
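The layered approach Khan describes can be sketched in code. The following is a minimal, purely illustrative Python example, not any vendor's actual implementation: all field names, thresholds and the decision logic are assumptions chosen to show how a liveness score might be combined with device-profiling and behavioural-analytics signals so that a convincing deepfake can still be flagged by its surrounding context.

```python
# Hypothetical sketch of layered ID verification: liveness detection alone
# does not decide the outcome; device and behavioural context also weigh in.
# All thresholds and signal names below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class VerificationSignals:
    liveness_score: float     # 0.0-1.0 from an active/passive liveness check
    device_known: bool        # device profiling: seen this device before?
    behaviour_anomaly: float  # 0.0-1.0 behavioural-analytics anomaly score


def assess(signals: VerificationSignals) -> str:
    """Combine signals: a strong liveness score alone is not enough
    if the surrounding context looks suspicious."""
    if signals.liveness_score < 0.5:
        # Liveness check itself failed: likely presentation/injection attack.
        return "reject"
    if not signals.device_known and signals.behaviour_anomaly > 0.7:
        # Liveness passed, but context is suspicious: escalate for review.
        return "step-up"
    return "accept"


# A convincing deepfake (high liveness score) on an unknown device with
# anomalous behaviour is still escalated rather than accepted outright.
print(assess(VerificationSignals(0.9, device_known=False, behaviour_anomaly=0.9)))
```

In this sketch, the "step-up" branch is the point Khan makes: the attack is caught not by the deepfake detector, but by the other signals around it.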


Read more: Deepfakes are being democratised – and getting harder to detect
