How deep is the depth of the deepfake?
Posted: Thursday, Sep 28

Use of deepfakes has risen dramatically over the last two years. We’ve witnessed celebrities push back over their likeness being used to mock or embarrass. But it doesn’t stop there. The line between entertaining and dangerous is being crossed more and more, with deepfakes leaning towards humiliation, degradation, fraud, and abuse.

The Australian Government eSafety Commissioner defines a deepfake as a digital photo, video, or sound file of a real person that has been edited to create an extremely realistic but false depiction of them doing or saying something they did not actually do or say.

The underlying technology is artificial intelligence (AI), which employs machine learning (ML) algorithms to generate realistic content. Although deepfakes are most commonly associated with images and videos, deepfake technology can also manipulate audio and text.

The technology in and of itself is not inherently harmful; in fact, there is a great irony in the fact that AI and ML capabilities are broadly – and increasingly – relied on to detect and combat malicious deepfakes. However, it’s important we understand how deepfakes, when used maliciously, can have a dangerous impact on individuals, organisations, governments, and nations.
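
To make that irony concrete, the sketch below shows what ML-based detection can look like at its simplest: a small binary classifier that scores an image as real or synthetic. The architecture is a hypothetical, untrained stand-in chosen for illustration (real detectors use far larger models trained on curated datasets), but the shape of the approach – a model emitting a ‘probability of fake’ – is representative.

```python
# A minimal sketch of ML-based deepfake image detection: a small CNN that
# outputs a probability that an image is synthetic. Hypothetical and
# untrained -- real detectors use far larger models and curated datasets.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),  # single logit: synthetic vs. genuine
        )

    def forward(self, x):
        return self.classifier(self.features(x))

detector = DeepfakeDetector()
image = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed video frame
with torch.no_grad():
    p_fake = torch.sigmoid(detector(image)).item()
print(f"Estimated probability the frame is synthetic: {p_fake:.2f}")
```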

Deepfake videos and images take various forms, ranging from harmless pranks, like replacing a colleague’s picture with a celebrity’s face, to more malicious applications. We’ve seen celebrity likenesses used for advertisements and product endorsements the celebrity may have no legitimate association with, and it’s getting harder and harder to tell the difference. More concerningly, video and image manipulation technology allows threat actors to target individuals with smear campaigns, or to spread disinformation by releasing “proof” of actions or incidents that never actually occurred.

We watched this play out in front of our very eyes in March of 2022, when footage emerged of Ukrainian President Volodymyr Zelenskyy instructing his countrymen to lay down their weapons in an apparent surrender. Though the video’s authenticity was swiftly questioned, it raises the question of how long it will be before the technology becomes more widely available, easier to use, and more convincing.

Deepfake audio is a growing concern for voice-based authentication systems. Threat actors can obtain voice samples from various sources and manipulate them to deliver scripted content. As it stands, the process of creating deepfake audio is too time-consuming to execute real-time attacks, but we should remain vigilant as deepfake audio technologies continue to advance.
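
To see why, consider a simplified sketch of how voice-based authentication is commonly structured: the system reduces a recording to a fixed-length ‘voiceprint’ and accepts the speaker if it lands close enough to the enrolled one. Everything below is illustrative – the embedding is a crude spectral summary standing in for the trained speaker-embedding models real systems use – but the comparison logic is the part deepfake audio aims to defeat: any sufficiently faithful synthetic voice can clear the same similarity threshold.

```python
# Simplified sketch of voice-based authentication by voiceprint comparison.
# The "embedding" here is a crude spectral summary -- a stand-in for the
# trained speaker-embedding models real systems use. The point is the logic:
# any audio whose embedding lands near the enrolled voiceprint is accepted,
# which is exactly what convincing deepfake audio aims to achieve.
import numpy as np

def embed(audio: np.ndarray, n_bands: int = 32) -> np.ndarray:
    """Map raw samples to a fixed-length unit vector (illustrative only)."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, n_bands)
    vec = np.array([band.mean() for band in bands])
    return vec / (np.linalg.norm(vec) + 1e-9)

def verify(enrolled: np.ndarray, attempt: np.ndarray, threshold: float = 0.9) -> bool:
    """Accept if the cosine similarity of the two voiceprints clears a threshold."""
    similarity = float(np.dot(embed(enrolled), embed(attempt)))
    return similarity >= threshold

rng = np.random.default_rng(0)
enrolled_sample = rng.standard_normal(16000)  # stand-in for the genuine user's voice
login_attempt = enrolled_sample + 0.05 * rng.standard_normal(16000)  # similar audio
print("Accepted:", verify(enrolled_sample, login_attempt))
```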

The other, less publicised, application of deepfakes is textual: content generated to appear as though it were authored by a real person. These can be used as part of social media manipulation campaigns, the general purpose of which is to disseminate fake news and disinformation on a large scale, creating the deceptive perception that many individuals across various platforms share the same belief. They can also be used to finesse bot-generated responses, such as those used by interactive chatbots, to deceive the other party into believing they are interacting with a human rather than a bot.
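
As a toy illustration of how machine-generated text mimics a person’s style, the sketch below builds a word-level Markov chain from a small (invented) sample of someone’s posts and generates new text from it. Disinformation operations use large language models that do this far more convincingly, but the underlying principle – learn statistical patterns from authentic text, then generate new text from those patterns – is the same.

```python
# Toy illustration of machine-generated text: a word-level Markov chain
# "trained" on a small sample of a person's writing, then sampled to produce
# new text in a similar style. Large language models do this far more
# convincingly, but the principle is the same. The corpus is invented.
import random
from collections import defaultdict

corpus = (
    "the economy is strong and the future is bright "
    "the future belongs to those who work hard "
    "the economy rewards those who work hard and dream big"
)

# Build a transition table: word -> list of words observed to follow it.
words = corpus.split()
transitions = defaultdict(list)
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

def generate(start: str, length: int = 12, seed: int = 1) -> str:
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the"))
```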

According to the US Department of Homeland Security, the first reported malicious application of deepfake technology was used to create pornography. Since then, there have been significant advancements in – and adoption of – the technology, how it is used, and for what purpose.

These applications range from threat actors acquiring sensitive personal information, including voice samples, and developing deepfake audio to bypass authentication systems or run complex phishing and extortion scams, through to the incredibly destructive use of convincing deepfake videos and images in cyber influence campaigns designed to sway public opinion.

The use of deepfakes impersonating political figures to promulgate misinformation is a growing and frightening concern. High-profile cases, such as the aforementioned Zelenskyy video, show how easily a fake message can be delivered to millions across the world.

Of further concern for individuals, commercial organisations, governments, and those responsible for their security is how to discern a genuine statement from one engineered by a threat actor to achieve an effect, whether that effect is the humiliation of an opponent or something with more dangerous geopolitical ramifications.

Many of the deepfake examples we’re exposed to on a day-to-day basis pique our interest more than they frighten us, because it’s easy to find humour in them if you don’t delve too deeply. Such an instance occurred in 2020, when a deepfake circulated showing controversial politician Pauline Hanson speaking with the now Leader of the Opposition, then Minister for Home Affairs, Peter Dutton, their dialogue synched to the script of the classic Simpsons ‘Bear tax’ episode.

While the intent of the ‘Bear tax’ deepfake was to create a humorous skit, more concerning subterfuge has occurred since then. In March of 2023, a Belgian man took his own life after a series of worrying conversations with a chatbot. Reports indicate the chatbot made no attempt to dissuade the man from such action throughout the course of the conversations, and may ultimately have encouraged him to end his life.

The exponential rate at which AI and its subsets continue to learn and evolve has yet to be quantified or fully understood; as such, we haven’t yet reached the depths of deepfake impact. But we’re getting close.

Ben Gestier