June 20, 2018 (updated 27 June 2018, 2:55pm)

The Deepfake Threat

AI-powered "deepfake" software is being weaponised. Where does that lead?

By CBR Staff Writer

To understand a deepfake, remember this: no one sees you like you.

Humans experience an innate adverse reaction when viewing themselves on video because the brain, accustomed to the laterally inverted image seen in mirrors, rejects the observed visage as an imposter.

Yet if mere reflections are disconcerting enough to make people nervously fixate on their appearance for hours, imagine the nauseating confusion, turning to abject horror, of staring at a video of yourself doing something that you never did.

Can you see yourself committing crimes, launching ideological tirades, participating in sexual perversions, etc.? Even if you shed the emotions brought forth by the malicious fabrication, will the public, let alone your friends, family, and coworkers, be as adept at piercing the illusion? In the Digital Age where social media weaponisation is a facet of everyday life, can the false narrative be stymied and will its impacts ever cease?

Parham Eftekhari, Executive Director, Institute for Critical Infrastructure Technology

Already, malicious cyber threat actors have developed applications that leverage machine learning to superimpose target images onto media of their choosing. With enough time and effort, and very little skill, the products can be as convincing as any other video shared on social media. Software such as FakeApp is freely available online and is already popular in script-kiddie communities.

As artificial intelligence advances, innovative but nefarious software capable of facilitating false narratives will continue to develop. Just as with swatting, doxing, and DDoS attacks, anyone can become the random victim of a cowardly wannabe hacker, often targeted for their appearance, reputation, or personal characteristics.

As the applications evolve into more sophisticated weapons, more advanced and well-resourced adversaries will adopt and further develop the tools so that they can launch character assassinations against key figures, blackmail critical infrastructure personnel, distract and divert media attention, discredit ideological opponents, force conflict between communities, cast doubt on legitimate evidence, taint meaningful discussions, waste investigators' resources, or otherwise seize control of the narrative by supplanting reality with a tailored illusion.


Deepfakes are a Destructive Turing Test

Deepfakes (sometimes referred to as deep fakes) are the product of an artificial intelligence-driven application that conflates target media with existing media to generate a misleading construct capable of deceiving the audience.

The term derives from the portmanteau of “deep learning” and “fake”. The most popular application is FakeApp, which leverages Google’s TensorFlow AI framework. In a broad sense, deepfakes can refer to the “digital manipulation of sound, images, or video to impersonate someone or make it appear that a person did something—and to do so in a manner that is increasingly realistic, to the point that the unaided observer cannot detect the fake.” It is a destructive Turing test: a false construct tailored to mislead and deceive rather than emulate and iterate.

Though the software was initially popularized for pornographic fabrications, it has already been weaponized in politics. Comedian Jordan Peele drew attention to the emerging threat by releasing a “public service announcement” seemingly delivered by President Barack Obama. A year earlier, the technique was applied to George W. Bush, Vladimir Putin, and Barack Obama in a real-time facial reenactment research study. Six months after the study, FakeApp was released. Since then, President Donald Trump, German Chancellor Angela Merkel, and Argentine President Mauricio Macri, among others, have been targeted in amateur deepfake political attacks.

Deepfakes were meant for personalized attacks: they were designed for degrading pornography featuring celebrities, children, and people’s exes. Deepfakes already spread on Reddit (several of its forums had been taken down at the time of writing), Twitter, and various adult sites. As with nearly every technology, sophisticated adversaries are already considering how the applications can be weaponized in attacks that impact individuals, businesses, and critical infrastructure.

Social media, cognitive biases, logical fallacies, socionics, ideological bubbles, and group polarization already ensure the persistence and propagation of false claims that most of the audience should reject as obviously outside the realm of believability.

Nevertheless, every conspiracy inspires theorists, every ideological attack converts zealots and victims, and every fake news story is internalized and repeated by somebody’s ideologically aggressive family member. Despite any effort, once spread, false narratives gain new life and manifest outside the Internet. Deepfakes will be no different and may prove more influential and impactful.

Deepfake applications leverage machine-learning algorithms (often neural networks), facial mapping software, and harvested images and videos to easily and cheaply hijack someone’s visage and fabricate a false narrative.
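
To make that mechanism concrete, here is a minimal sketch, using TensorFlow’s Keras API, of the shared-encoder, dual-decoder autoencoder design that FakeApp-style tools are built around. The layer sizes, the 64x64 crop size, and the training details are illustrative assumptions, not the configuration of any actual tool:

```python
# Minimal sketch of the shared-encoder / dual-decoder autoencoder behind
# FakeApp-style face swapping. All sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = 64  # assumed size of aligned, normalised face crops

def build_encoder():
    inp = layers.Input(shape=(IMG, IMG, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(256, activation="relu")(x)  # shared face representation
    return Model(inp, latent, name="encoder")

def build_decoder(name):
    inp = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(inp)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(inp, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_target")  # learns the target's face
decoder_b = build_decoder("decoder_source")  # learns the source actor's face

# One autoencoder per person, but both share the same encoder weights.
ae_a = Model(encoder.input, decoder_a(encoder.output))
ae_b = Model(encoder.input, decoder_b(encoder.output))
ae_a.compile(optimizer="adam", loss="mae")
ae_b.compile(optimizer="adam", loss="mae")
# ae_a.fit(faces_a, faces_a, ...)  # hypothetical arrays of harvested face crops
# ae_b.fit(faces_b, faces_b, ...)

# The swap: encode a frame of the source actor, then decode with the target's
# decoder, yielding the target's face wearing the source's expression and pose.
# fake_frame = decoder_a(encoder(source_frame))
```

Because both decoders learn to reconstruct faces from the same latent representation, decoding the source actor’s encoded frame with the target’s decoder produces the target’s face with the source’s expression and pose, which is the essence of the swap.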

The subject practically never permits the use of their appearance or voice; however, given the ubiquity of dragnet surveillance capitalism, a lack of consent will never deter attackers. Facebook, Instagram, YouTube, and other social media vectors can be harvested for the material needed to train a deepfake application. Facial recognition, voice, and other biometric data are increasingly popular as convenient identification and authentication mechanisms, and that information is digitally stored and can be stolen alongside other PII by hackers.

Even if the target does not have an account on a particular platform, there is a reasonable chance that someone else has uploaded media, or that a quick Google search will reveal the needed input. The Internet never forgets. Once the technique is popularized, attackers will have ample input media, and the deepfakes they generate will linger and accumulate. The United States, unlike some other countries, lacks a codified “Right to be Forgotten,” so the chance that the public can remove digitally stored images or videos in anticipation of the emerging threat is minimal and is left to each site’s discretion.
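
The harvesting step is trivial to automate. The sketch below shows how face crops for training could be scraped from any publicly obtained footage; the video filename is hypothetical, and OpenCV’s bundled Haar cascade stands in for the more capable face detectors real tools use:

```python
# Sketch of the data-harvesting step: extract face crops from public footage
# to build a deepfake training set. The input filename is hypothetical.
import os
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

os.makedirs("training_faces", exist_ok=True)
video = cv2.VideoCapture("downloaded_interview.mp4")  # hypothetical public video

count = 0
while True:
    ok, frame = video.read()
    if not ok:
        break  # end of video
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        face = cv2.resize(frame[y:y + h, x:x + w], (64, 64))  # match model input
        cv2.imwrite(f"training_faces/{count:05d}.png", face)
        count += 1

video.release()
print(f"Extracted {count} face crops")
```

A few minutes of public footage can yield thousands of such crops, which is why a lack of consent is no obstacle.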

Deepfakes Were Tailored for Personal Harm

As a result of data retention practices and the animosity of digital threat actors, women have been exploited in embarrassing videos and images, with real-world impacts on their reputations, emotions, and relationships. Cyberstalkers view the app as a new tool to torment their victims. Digital mercenaries may launch deepfake attacks on behalf of rival interests, while cybercriminals might do so for blackmail or ransom. More sophisticated attackers might coerce a victim, such as critical infrastructure personnel, to divulge sensitive information or to act as an insider threat, under threat that the deepfake would otherwise be released. Even if the false videos were exposed as fraudulent, the cascading harms would likely continue. Entire careers and lives will be ruined.

Deepfakes Threaten National Security and the Cohesion of Democracy

Faux-viral fake news spreads across social media vectors unlike any other content. Over the past two years, online trolls have incited mass panic with phony environmental disasters, inspired geopolitical conflict with false stories, and influenced public opinion on a variety of issues with weaponized memes, articles, blogs, and videos. Deepfakes will elevate the pervasiveness and impact of disinformation campaigns by an order of magnitude.

In 1964, 77 percent of Americans trusted the government; today, due in part to the advent of the Internet, only 24 percent of citizens trust the American government, and only half the population trusts businesses. If a foreign adversary created a deepfake of a politician accepting bribes or of a US soldier killing civilians, how much further might public trust erode? Conversely, how could special interests manipulate public opinion and trust through the strategic deployment of “anonymous leaks” of deepfakes? Political corruption, espionage, police brutality, medical malpractice, collusion, public crises, foreign incidents, terrorist attacks, and numerous other topics are all choice fodder for strategically weaponized deepfakes.

Trust in democracy itself will be threatened as free thought is usurped by engineered reality and as cognitive biases cause a desensitized public to begin to erroneously reject true, but uncomfortable, facts by default. Sophisticated forgeries of “national security intelligence” paired with fabricated audio or video could irrevocably harm government initiatives. Nor will the ability to deploy deepfakes to influence entire populations be limited to sophisticated nation-state agencies; even low-level attackers can generate convincing deepfakes. Over time, the Internet may become so saturated that all media must be assumed fabricated.

What Comes Next?

The popularization of deepfake applications may have more unforeseen impacts than defamation of character. For instance, video and image evidence, which is already tenuous in some legal proceedings, may diminish in perceived credibility. The effect could be troubling because eyewitness accounts, which are unreliable, and forensic evidence, which does not live up to media-hyped public expectations, will correspondingly inflate in perceived significance.

Even increasing the cyber-forensic workforce may not prevent every suit or conviction based on media evidence from being contested. In the political realm, satire and targeted attacks will be indistinguishable. In the US, any attacker called out for propagating deepfake media may be able to cite the First Amendment and claim that other online users should know better than to trust videos and images shared on social media.

Fake news, disinformation, and misleading narratives are already a pressing problem due to a combination of the nefarious machinations of foreign adversaries and the greed and amoral efforts of some popular media outlets. The popularisation and advancement of deep fake applications will gradually cement the two factions most responsible for the decay of truth and accountability as the governors of the public narrative and perceived reality.

Will Deepfakes Affect You?

For now, deepfake videos can often be detected through misalignments of mouths, teeth, and tongues, and through tonal cues; even convincing dupes can be exposed by varying the playback rate of the media. The US intelligence community is aware of the threat and is developing mitigations. Public solutions, such as forensic technology capable of immediately detecting fakes, are decades away, and malicious deepfake technology is advancing much faster. Even if mitigation strategies matched the pace of the threat, threat actors would incorporate mechanisms to delay or break forensic detection.
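
As a toy illustration of one such forensic cue, the sketch below flags frames whose detected face region is markedly blurrier than the frame as a whole, since crude blends often smooth the pasted face. The 0.5 ratio threshold is an arbitrary assumption, and real detectors are far more sophisticated:

```python
# Toy deepfake cue: a pasted face is often smoother than its surroundings,
# so compare sharpness (variance of the Laplacian) of the face region
# against the whole frame. Illustrative only; not production forensics.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def sharpness(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def frame_is_suspicious(frame, ratio_threshold=0.5):  # threshold is an assumption
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame_sharp = sharpness(frame)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face_sharp = sharpness(frame[y:y + h, x:x + w])
        if frame_sharp > 0 and face_sharp / frame_sharp < ratio_threshold:
            return True  # face markedly blurrier than the frame overall
    return False
```

Heuristics like this are exactly what attackers will learn to defeat, which is why detection is a moving target.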

In the legal arena, it remains unclear how much would be protected by the First Amendment and how a defamation suit might apply. US laws will do absolutely nothing to deter foreign and obfuscated adversaries. Social media and other online platforms have no incentive to develop mitigation strategies because they are thoroughly insulated from any liability.

If the technology continues to advance, the public will be shaped by the onslaught of deepfakes. Most people will probably temporarily believe at least one deepfake due to their ingrained cognitive biases and adversarial psychographic targeting.

The norm on the Internet may become to distrust everything. Some citizens might sacrifice their privacy in favor of immutable authentication trails that track their location, safeguard their reputations, secure and preserve their communications, and so on. An entire anti-deepfake industry will be built on the unraveling of consumer privacy. Just as many are pressured to participate on social media, these services could become the social expectation of employers, friends, the community, and even the government.
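
One way such an authentication trail could work is a hash chain signed by the capture device, so that editing any earlier segment of footage invalidates every later signature. The sketch below illustrates the idea with an Ed25519 key from Python’s cryptography library; real provenance schemes, and in particular their key management, are far more involved:

```python
# Sketch of a signed hash chain for media provenance: each recorded segment
# is hashed together with the previous digest and signed by the device key,
# so tampering anywhere breaks verification from that point on.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # would live in secure hardware
public_key = device_key.public_key()

def sign_segment(prev_digest: bytes, segment: bytes):
    digest = hashlib.sha256(prev_digest + segment).digest()
    return digest, device_key.sign(digest)

def verify_segment(prev_digest: bytes, segment: bytes, signature: bytes) -> bool:
    digest = hashlib.sha256(prev_digest + segment).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# Chain two segments of "footage"; an edited segment fails verification.
d0 = b"\x00" * 32
d1, sig1 = sign_segment(d0, b"frame-data-segment-1")
d2, sig2 = sign_segment(d1, b"frame-data-segment-2")
assert verify_segment(d0, b"frame-data-segment-1", sig1)
assert not verify_segment(d1, b"edited-segment-2", sig2)
```

The privacy cost is that such trails only prove anything if they are pervasive, which is precisely the trade-off described above.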

Data brokers and “digital authenticators” will significantly increase in power as their roles as default information custodians solidify. When combined with their stores of PII, PHI, and psychographic data, third-party data custodians will have unprecedented influence over consumers’ thoughts, actions, and lives. Adversaries will gain some of that influence every time they compromise the systems of negligent data brokers or obtain the data through shell companies, insider threats, etc. Domestic and foreign governments may have unrestricted access to much of the data. The loss of privacy could lead to a complete loss of autonomy due to the manipulations of multiple actors; all because script kiddies began pasting faces into videos.

Featured image credit: Adam Fossier on Unsplash

 

 
