Use of deepfakes has risen dramatically over the last two years. We've witnessed celebrities push back over their likenesses being used to mock or embarrass them. But it doesn't stop there. The line between entertaining and dangerous is being crossed more and more, with deepfakes leaning towards humiliation, degradation, fraud, and abuse.
The Australian Government eSafety Commissioner defines a deepfake as a digital photo, video, or sound file of a real person that has been edited to create an extremely realistic but false depiction of them doing or saying something they did not actually do or say.
The underlying technology is artificial intelligence (AI), which employs machine learning (ML) algorithms to generate realistic content. Although deepfakes are most commonly associated with images and videos, deepfake technology can also manipulate audio and text.
The technology in and of itself is not inherently harmful; in fact, there is a great irony in that AI and ML capabilities are broadly, and increasingly, relied on to detect and combat malicious deepfakes. However, it's important we understand how deepfakes, when used maliciously, can have a dangerous impact on individuals, organisations, governments, and nations.
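To make the defensive side a little more concrete, the sketch below shows how an ML-based deepfake image detector might be wired up in Python. It is illustrative only: the ResNet backbone, the weights file, and the review threshold are all assumptions for demonstration, not a description of any particular detection product.

```python
# Illustrative sketch only: a binary "real vs. fake" image classifier.
# The architecture, weights file, and threshold below are assumptions
# for demonstration, not any specific vendor's detection model.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for a CNN backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A ResNet backbone with a two-class head (real vs. fake).
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
# Hypothetical fine-tuned weights; training these is the hard part.
model.load_state_dict(torch.load("deepfake_detector.pt"))
model.eval()

def fake_probability(path: str) -> float:
    """Return the model's estimated probability that an image is synthetic."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(image)
    return torch.softmax(logits, dim=1)[0, 1].item()

if fake_probability("suspect_frame.jpg") > 0.8:  # threshold is arbitrary here
    print("Flag for human review")
```

In practice, detectors like this are one signal among many; a human reviewer still makes the final call on anything the model flags.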
Deepfake videos and images can take various forms, ranging from harmless pranks, like replacing a colleague's picture with a celebrity's face, to more malicious applications. We've seen celebrity likenesses used for advertisements and product endorsements the celebrity may not be legitimately associated with, and it's getting more and more difficult to tell the difference. More concerningly, video and image manipulation technology allows threat actors to target individuals for smear campaigns, or to spread disinformation by releasing 'proof' of actions or incidents that never actually occurred.
We watched this play out in front of our very eyes in March of 2022, when footage emerged of Ukrainian President Volodymyr Zelenskyy instructing his countrymen to lay down their weapons in an apparent surrender. Though the video's authenticity was swiftly questioned, it raises the question of how long it will be before the technology becomes more widely available, easier to use, and more convincing.
Deepfake audio is a growing concern for voice-based authentication systems. Threat actors can obtain voice samples from various sources and manipulate them to deliver scripted content. As it stands, the process of creating deepfake audio is too time-consuming to execute real-time attacks, but we should remain vigilant as deepfake audio technologies continue to advance.
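That time cost is precisely what a simple liveness control can exploit. The sketch below is a minimal, hypothetical example rather than any real authentication product: it forces a caller to repeat an unpredictable phrase within a few seconds, something a pre-recorded or pre-generated deepfake clip cannot easily satisfy. The verify_spoken_phrase callback is an assumed hook into an existing speech verification backend.

```python
# Illustrative sketch: a random challenge-response step for voice authentication.
# Because generating convincing deepfake audio is (currently) too slow to do
# live, requiring the caller to speak an unpredictable phrase within a short
# window raises the bar against pre-recorded or pre-generated clips.
import secrets
import time

WORDS = ["amber", "river", "falcon", "copper", "meadow", "signal", "harbour"]
RESPONSE_WINDOW_SECONDS = 5.0

def issue_challenge(num_words: int = 3) -> str:
    """Build an unpredictable phrase the caller must repeat."""
    return " ".join(secrets.choice(WORDS) for _ in range(num_words))

def challenge_caller(verify_spoken_phrase) -> bool:
    """Run one challenge-response round; reject slow or wrong answers."""
    phrase = issue_challenge()
    deadline = time.monotonic() + RESPONSE_WINDOW_SECONDS
    # verify_spoken_phrase(phrase) is a hypothetical hook that would prompt
    # the caller, capture audio, and return True only if the expected words
    # were spoken in the caller's enrolled voice.
    spoken_ok = verify_spoken_phrase(phrase)
    return spoken_ok and time.monotonic() <= deadline
```

This is not a complete defence, but it illustrates why the current time cost of producing deepfake audio matters: controls built around unpredictability and tight deadlines lose their value as the technology approaches real-time generation.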
The other, less publicised, application of deepfakes is text that appears to be authored by a real person. These can be used as part of social media manipulation campaigns, the general purpose of which is to disseminate fake news and disinformation at scale, creating the deceptive perception that many individuals across various platforms share the same belief. They can also be used to finesse bot-generated responses, such as those used by interactive chatbots, to deceive the other party into believing they are interacting with a human rather than a bot.
According to the US Department of Homeland Security, the first reported malicious application of deepfake technology was to create pornography. Since then, there have been significant advancements in, and adoption of, the technology, how it is used, and for what purpose.
The uses range from targeting individuals, where threat actors acquire sensitive personal information, including voice samples, then develop deepfake audio to bypass authentication systems or build a complex phishing or extortion scam, to the incredibly destructive use of cyber influence campaigns, where convincing videos and images are created to sway public opinion.
The use of deepfakes mirroring political personalities and promulgating misinformation is a growing and frightening concern. High-profile cases, such as the aforementioned Zelenskyy video, show how easily a fake message can be delivered to millions across the world.
Of further concern for individuals, commercial organisations, governments, and those responsible for their security is how to discern a true statement from one engineered by a threat actor to achieve an effect, whether that effect is the humiliation of an opponent or something with more dangerous geopolitical ramifications.
Many of the deepfake examples we're exposed to on a day-to-day basis pique our interest more than they frighten us, because it's easy to find humour in them if you don't delve too deeply. Such an instance occurred in 2020, when a deepfake was circulated showing controversial politician Pauline Hanson speaking with the now Leader of the Opposition, then Minister for Home Affairs, Peter Dutton, synched with a script from the classic Simpsons 'Bear tax' episode.
While the intent of the 'Bear tax' deepfake was to create a humorous skit, more concerning subterfuge has occurred since. In March of 2023, a Belgian man took his own life after worrying conversations with a chatbot. Reports indicate the chatbot did not attempt to dissuade the man from such action throughout the course of the conversations, and likely encouraged him to ultimately end his life.
The exponential rate at which AI and its subsets continue to learn and evolve has yet to be quantified or fully understood, and as such, we haven't yet reached the depths of deepfake impact. But we're getting close.