Understanding the Growing Threat Posed by Deep Fakes
Posted: Friday, Apr 05

As more businesses identify new use cases for artificial intelligence (AI) that deliver significant benefits, the technology is also being used to create a concerning security threat: deepfake cyberattacks.

Deepfake attacks leverage AI to create new identities, steal the identities of real people, or impersonate real people. They are then used by cybercriminals to gain access to assets, including privileged information or money.

While deepfakes have traditionally involved the creation of falsified images or documents, they are now taking the form of real-time audio or video calls. Targets can be tricked into interacting with someone they believe to be real but who is actually an AI creation.

Increasingly Convincing

As the power of AI tools continues to grow, deepfakes will continue to become more convincing and difficult for humans to detect. Cybercriminals are using AI algorithms to superimpose one face onto a completely different person’s body and are even able to create hyper-realistic audio and video content in real time.

One recent example[1] was an attack on a Hong Kong-based company during which an employee was duped into thinking they were part of a video conference call with the company’s London-based chief financial officer.

The fake CFO asked the employee to make a number of transfers into a series of bank accounts controlled by the cybercriminals behind the attack. The staff member agreed, resulting in a loss to the company of US$25 million.

Mitigating Deepfake Attacks

Faced with what is set to become an increasingly common cybersecurity issue, organisations can follow seven key strategies to reduce their chances of falling victim to an attack. These strategies are:

  1. Implement a zero trust strategy:
    By adopting zero trust, organisations can better prevent and mitigate identity-based attacks. Core concepts that should be embraced include continuous, context-based authentication and monitoring, and the enforcement of least privilege. These should be paired with strong, tested policies.
  2. Undertake regular deepfake pen testing:
    A good way to prepare for a deepfake attack is to use deepfakes in penetration (pen) testing and training exercises. Deepfake video and audio pen testing is a potential method ethical hackers can employ to assess vulnerabilities in workflows and educate organisations about the risks associated with manipulated media. This technique typically uses artificial intelligence to create realistic, yet entirely fabricated, videos or audio recordings that mimic real individuals saying or doing things that are fictitious, as part of a social engineering campaign.

    By demonstrating the potential for misinformation and deception through deepfakes, organisations can better understand the importance of implementing additional security and policy controls to safeguard against human exploitation.

  3. Conduct regular training and education:
    Implementing ongoing cybersecurity training for all employees is critical. According to industry research[2], in 2023, 74% of all breaches included a human element, with people involved through error, privilege misuse, use of stolen credentials, or social engineering. Educating employees so they are empowered to be vigilant in recognising and thwarting such manipulative tactics is a critical line of defence, especially when deepfake technology may be involved.
  4. Deploy multi-factor authentication:
    When implemented effectively, multi-factor authentication (MFA) adds an important layer of protection against unauthorised access and provides confidence in the authenticity of an identity. Sign-in policies and conditional access policies should be in place to ensure users must re-authenticate from the right device, location, or network to conduct privileged activities (a simple policy sketch illustrating this idea follows the list).
  5. Adopt privileged access management:
    Privileged access management (PAM) is a foundation of both zero trust and identity security. No identities or accounts are more important to secure than those with privileged access to systems, data, applications, and other sensitive resources. It is vital organisations employ privileged access security tools that help them understand where privileged roles exist, onboard those privileged accounts for management, and enforce least privilege (a simplified least-privilege check is sketched after the list).
  6. Create policies for collaboration tools:
    While deepfakes are becoming increasingly believable, proper policies for collaboration tools such as Microsoft Teams, Webex, or Zoom add an extra line of defence against such attacks. These policies should cover user authentication and validation in conjunction with Identity Threat Detection and Response (ITDR), as well as the usage and installation of collaboration tools on user workstations. Implementing an Endpoint Privilege Management (EPM) solution enables organisations to apply application control safeguards to ensure unauthorised tools are not run on workstations (see the application-control sketch after the list).

  7. Embrace identity threat detection and response (ITDR):
    As the ability of humans to identify deepfakes declines, organisations will need to adopt modern technology and strategies, such as ITDR, that can intelligently detect identity-based threats or risks. ITDR capabilities can help organisations proactively mitigate threats by adjusting security posture based on real-time risks, and also quickly respond to and shut down in-progress attacks to minimise potential damage. These capabilities are especially important when it comes to mitigating the risks associated with decentralised or external transactions (a simplified risk-scoring sketch follows the list).
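To make the conditional access idea in points 1 and 4 more concrete, the following Python sketch shows how a sign-in policy might require fresh MFA when a privileged action is requested from an unfamiliar device, location, or network. The device list, country list, and threshold values are illustrative placeholders rather than recommendations or any particular product's policy model.

```python
from dataclasses import dataclass

@dataclass
class SignInContext:
    user: str
    device_id: str
    country: str
    network: str          # e.g. "corporate-vpn" or "public"
    mfa_age_minutes: int  # minutes since the last successful MFA challenge

# Hypothetical policy values, for illustration only.
TRUSTED_DEVICES = {"laptop-0042", "laptop-0077"}
TRUSTED_NETWORKS = {"corporate-vpn"}
ALLOWED_COUNTRIES = {"AU", "NZ"}
MAX_MFA_AGE_FOR_PRIVILEGED = 15  # minutes

def requires_reauthentication(ctx: SignInContext, privileged: bool) -> bool:
    """Return True if the user should complete a fresh MFA challenge."""
    if ctx.device_id not in TRUSTED_DEVICES:
        return True
    if ctx.network not in TRUSTED_NETWORKS:
        return True
    if ctx.country not in ALLOWED_COUNTRIES:
        return True
    if privileged and ctx.mfa_age_minutes > MAX_MFA_AGE_FOR_PRIVILEGED:
        return True
    return False

# Example: a privileged request from an untrusted network triggers step-up MFA.
ctx = SignInContext("j.smith", "laptop-0042", "AU", "public", mfa_age_minutes=5)
print(requires_reauthentication(ctx, privileged=True))  # True
```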
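In the same spirit, this sketch illustrates the least-privilege enforcement described in point 5, using a hypothetical just-in-time grant that limits which commands a user may run on a system and for how long. Real PAM tools broker credentials, record sessions, and manage grants centrally; the structures here exist only to show the concept.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical record of a just-in-time privileged grant (illustrative only).
GRANTS = {
    ("a.lee", "payments-db"): {
        "allowed_commands": {"SELECT", "EXPLAIN"},
        "expires": datetime.now(timezone.utc) + timedelta(hours=1),
    },
}

def is_command_permitted(user: str, system: str, command: str) -> bool:
    """Enforce least privilege: the grant must exist, be current, and cover the command."""
    grant = GRANTS.get((user, system))
    if grant is None:
        return False
    if datetime.now(timezone.utc) >= grant["expires"]:
        return False
    return command.split()[0].upper() in grant["allowed_commands"]

print(is_command_permitted("a.lee", "payments-db", "SELECT balance FROM accounts"))  # True
print(is_command_permitted("a.lee", "payments-db", "DROP TABLE accounts"))           # False
```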
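The application control described in point 6 can be pictured as a simple allow-list check, as in the sketch below. The hash keys and tool names are placeholders; a production EPM agent would rely on centrally managed policies, code signing, and publisher rules rather than a hard-coded dictionary.

```python
import hashlib
from pathlib import Path

# Placeholder allow list keyed by the SHA-256 hash of each approved executable.
APPROVED_HASHES = {
    "sha256-of-approved-teams-build": "Microsoft Teams",
    "sha256-of-approved-zoom-build": "Zoom",
}

def is_execution_allowed(executable: Path) -> bool:
    """Allow a binary to run only if its hash appears on the approved list."""
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in APPROVED_HASHES

def on_process_launch(executable: Path) -> None:
    """Hypothetical hook an endpoint agent might call before a process starts."""
    if not is_execution_allowed(executable):
        raise PermissionError(f"Blocked unapproved application: {executable.name}")
```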
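Finally, the sketch below gives a highly simplified view of the ITDR behaviour described in point 7: combining identity risk signals into a score and choosing a proportionate response. The signal names, weights, and thresholds are invented for illustration and do not reflect any specific detection engine.

```python
# Hypothetical weights for identity risk signals; real ITDR products use far
# richer telemetry and analytics than a static lookup table.
SIGNAL_WEIGHTS = {
    "impossible_travel": 40,
    "new_device": 20,
    "privilege_escalation_attempt": 30,
    "unusual_transfer_request": 35,
}

def risk_score(signals: set) -> int:
    """Sum the weights of the observed identity risk signals."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def respond(signals: set) -> str:
    """Choose a proportionate response based on the aggregate risk score."""
    score = risk_score(signals)
    if score >= 60:
        return "terminate session and alert the security team"
    if score >= 30:
        return "require step-up MFA before continuing"
    return "allow and continue monitoring"

# Example: a transfer request from a new device during impossible travel.
print(respond({"impossible_travel", "new_device", "unusual_transfer_request"}))
```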

Examples such as the Hong Kong deepfake incident highlight the very real consequences that such attacks can have on both individuals and organisations. For this reason, it is vital that security measures be revised and that these strategies be adopted.

AI is going to continue to increase in complexity and power, which means such threats are only going to become more prevalent and convincing. Taking steps now, such as those mentioned above, can reduce the chances of falling victim in the months and years ahead.

[1] https://www.theregister.com/2024/02/05/hong_kong_deepfaked_cfo/
[2] https://www.verizon.com/business/resources/reports/dbir/

Scott Hesford
Scott Hesford is Director of Solutions Engineering for Asia Pacific and Japan at BeyondTrust. He has over a decade of experience in IT security. Before joining BeyondTrust in 2019, he worked as Principal Consultant across APJ for CA Technologies, where he specialised in technologies within Identity Governance and Administration, Advanced Authentication, Privileged Access Management, Web Access Management and API management. A trusted cybersecurity advisor to enterprise and mid-market customers alike, his experience spans several industries including finance, utilities and manufacturing, in addition to state and federal governments.