Why Deepfakes Are A Growing Threat To Financial Services
Posted: Thursday, May 29

Australia’s financial services sector is confronting a fast-emerging and deeply insidious threat: deepfakes.

Once considered digital curiosities or entertainment gimmicks, deepfakes are now formidable tools in the arsenal of cybercriminals. They are synthetic media created using artificial intelligence to mimic real voices, images, and video, and they are becoming increasingly difficult to identify.

The surge is setting off alarm bells across boardrooms and cybersecurity teams. A staggering 77% of financial institutions now expect deepfake fraud to be one of their most critical cybersecurity challenges within the next three years. With identity as the cornerstone of all financial transactions, the implications of such deception are both economic and reputational.

A new era of fraud

Recent incidents underline the severity of this technological leap in crime. In one high-profile case, an executive authorised a $25 million payment after participating in a video call, unaware that every other participant was a deepfake simulation of a senior colleague. In another, a software firm discovered that 15% of its newly hired developers had used deepfaked identities during remote screening interviews.

These examples demonstrate the expanding capabilities of generative AI and the ease with which it can be weaponised. For the financial sector, where trust, verification, and rapid digital interaction are essential, deepfakes represent a direct attack on its operational core.

A wider attack surface

The digitisation of financial services has brought consumers unprecedented convenience, from online banking to instant loans. However, this evolution also widens the attack surface for fraudsters.

Traditional verification mechanisms, such as passwords or even biometrics, can be bypassed by convincing deepfake impersonations.

Fraudsters increasingly exploit social engineering tactics, using deepfakes to manipulate employees or clients into disclosing sensitive data or executing transactions. Deepfake schemes target every point of interaction: loan approvals, client onboarding, account recovery, and high-value transfers.

In this environment, identity verification becomes not just a regulatory checkbox, but a frontline defence.

Reinventing identity in the age of AI fraud

Combating such advanced fraud requires a radical rethink of identity management. Industry experts argue that legacy systems, reliant on static credentials or simple biometric checks, are no match for modern threats.

Instead, institutions are turning to multi-layered approaches, starting with ‘liveness detection’: technology that confirms a real person is present during identity verification by analysing micro-movements and physiological signals.

In one documented case, a bank successfully blocked a fraudulent loan when liveness detection flagged an AI-generated video during ID verification.
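
For a rough sense of the underlying idea, the sketch below (Python with NumPy; every function name, value and threshold is illustrative rather than drawn from any real product) checks for frame-to-frame micro-movement: a replayed static photo shows almost none, while a live subject always produces some. Production liveness detection layers far richer signals, such as texture, depth and physiological cues, on top of learned models.

```python
# Minimal liveness heuristic: measure frame-to-frame variation in a short
# video clip. A static, replayed image shows near-zero motion energy, while
# a live face produces continuous micro-movements. Real systems add texture,
# depth and physiological analysis on top of a learned model.
import numpy as np

def motion_energy(frames: np.ndarray) -> float:
    """Mean absolute pixel difference between consecutive grayscale frames."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return float(diffs.mean())

def looks_live(frames: np.ndarray, threshold: float = 1.5) -> bool:
    """Flag a clip as 'live' only if it shows plausible micro-movement.

    `threshold` is an illustrative value; a real deployment would calibrate
    it (and use far richer features) against labelled genuine/spoof data.
    """
    return motion_energy(frames) > threshold

# Toy usage: a 'replayed photo' (identical frames) vs. a clip with jitter.
rng = np.random.default_rng(0)
photo = np.repeat(rng.integers(0, 256, (1, 64, 64)), 30, axis=0)
live = photo + rng.normal(0, 3, photo.shape)   # simulated micro-movement
print(looks_live(photo), looks_live(live))     # False True
```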

Another key defence is the use of ‘verified credentials’, which are cryptographically secured digital IDs stored in encrypted wallets. These tamper-proof credentials ensure that user information cannot be falsified or manipulated, even when accompanied by a sophisticated deepfake.
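
As a simplified illustration of what ‘tamper-proof’ means in practice, the sketch below uses the Python cryptography package's Ed25519 primitives to sign a credential payload and detect any alteration. The credential fields and key handling are hypothetical and deliberately minimal; real verified-credential schemes add standard schemas, revocation and wallet storage on top of this basic guarantee.

```python
# Sketch: an issuer signs a credential payload; any verifier holding the
# issuer's public key can detect tampering with the signed attributes.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()   # held by the issuing authority
issuer_pub = issuer_key.public_key()        # distributed to verifiers

credential = {"subject": "customer-123", "kyc_passed": True, "issued": "2025-05-29"}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)        # issued once, stored in the wallet

def verify(credential: dict, signature: bytes) -> bool:
    """Return True only if the credential is exactly what the issuer signed."""
    data = json.dumps(credential, sort_keys=True).encode()
    try:
        issuer_pub.verify(signature, data)
        return True
    except InvalidSignature:
        return False

print(verify(credential, signature))                           # True
print(verify({**credential, "kyc_passed": False}, signature))  # False: tampering detected
```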

Complementing this is ‘decentralised identity’ infrastructure, where users retain control over their personal data rather than relying on vulnerable central repositories. By sharing only essential identity attributes (such as age or employment status) when necessary, individuals and institutions reduce exposure and limit what fraudsters can exploit.
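
The data-minimisation idea behind selective disclosure can be sketched in a few lines: the wallet releases only the attributes a particular check requires. The example below is a hypothetical, non-cryptographic illustration of that pattern, not a model of any specific decentralised identity standard.

```python
# Sketch of selective disclosure: the wallet releases only the attributes a
# verifier asks for, so a compromised counterparty learns as little as possible.
FULL_IDENTITY = {
    "name": "Alex Example",            # hypothetical record held in the user's wallet
    "date_of_birth": "1990-04-12",
    "address": "12 Example St, Sydney",
    "employment_status": "employed",
    "over_18": True,
}

def disclose(identity: dict, requested: set[str]) -> dict:
    """Return only the attributes the verifier needs for this interaction."""
    return {k: v for k, v in identity.items() if k in requested}

# A lender checking eligibility needs age and employment status, nothing more.
print(disclose(FULL_IDENTITY, {"over_18", "employment_status"}))
# {'employment_status': 'employed', 'over_18': True}
```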

Adaptive defence and dynamic authorisation

Beyond initial identity checks, institutions must also adopt continuous, adaptive monitoring of behaviour. Adaptive authentication tools monitor a user’s normal patterns – such as typing style, location, and device usage – and can automatically trigger additional security checks if anomalies are detected.

For instance, a high-value transfer request made from an unfamiliar device outside of regular working hours may be blocked or escalated for review. This real-time responsiveness is vital in a world where deepfakes can convincingly mimic authority figures and deceive even experienced staff.
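
One simple way to picture adaptive authentication is as a risk score over contextual signals, with a step-up challenge or a review once the score crosses a threshold. The sketch below is illustrative Python; the signals, weights and thresholds are assumptions made for the example, not values from any particular product.

```python
# Sketch of adaptive authentication: score a request against the user's usual
# behaviour and escalate (step-up MFA or manual review) when it looks anomalous.
from dataclasses import dataclass, field

@dataclass
class UserBaseline:
    known_devices: set = field(default_factory=set)
    usual_countries: set = field(default_factory=set)
    working_hours: range = range(8, 19)      # 08:00-18:59 local time

def risk_score(baseline: UserBaseline, device: str, country: str,
               hour: int, amount: float) -> int:
    score = 0
    if device not in baseline.known_devices:
        score += 3                            # unfamiliar device
    if country not in baseline.usual_countries:
        score += 2                            # unusual location
    if hour not in baseline.working_hours:
        score += 1                            # outside normal hours
    if amount > 50_000:
        score += 2                            # high-value transfer
    return score

def decide(score: int) -> str:
    # Illustrative thresholds: real systems tune these against fraud data.
    if score >= 6:
        return "block and escalate for review"
    if score >= 3:
        return "require step-up authentication"
    return "allow"

baseline = UserBaseline(known_devices={"laptop-01"}, usual_countries={"AU"})
print(decide(risk_score(baseline, "unknown-phone", "AU", hour=23, amount=120_000)))
# -> "block and escalate for review" (unknown device, late hour, high value)
```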

Another powerful layer is policy-based access control (PBAC), which uses dynamic rules to manage access to systems and data. Access is granted or denied based not just on who the user claims to be, but also on contextual signals such as time, location, and device.
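
A minimal sketch of that idea: each action is guarded by a set of policy rules over contextual attributes, and access is denied unless every applicable rule passes. The resource names, attributes and rules below are hypothetical, chosen only to illustrate the pattern.

```python
# Sketch of policy-based access control: decisions depend on contextual
# attributes (role, time, location, device trust), not just identity.
from typing import Callable

Context = dict    # e.g. {"role": "teller", "hour": 14, "country": "AU", "device_trusted": True}
Policy = Callable[[Context], bool]

POLICIES: dict[str, list[Policy]] = {
    # Hypothetical resource: approving a payment requires an approver role,
    # business hours, an onshore connection and a managed (trusted) device.
    "approve_payment": [
        lambda ctx: ctx.get("role") == "payment_approver",
        lambda ctx: 8 <= ctx.get("hour", -1) < 19,
        lambda ctx: ctx.get("country") == "AU",
        lambda ctx: ctx.get("device_trusted", False),
    ],
}

def is_allowed(action: str, ctx: Context) -> bool:
    """Deny by default; allow only when every policy for the action passes."""
    rules = POLICIES.get(action)
    return bool(rules) and all(rule(ctx) for rule in rules)

ctx = {"role": "payment_approver", "hour": 22, "country": "AU", "device_trusted": True}
print(is_allowed("approve_payment", ctx))   # False: outside business hours
```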

Building a resilient future

To defend against deepfakes, financial institutions must treat identity and access management (IAM) as strategic infrastructure, not just a compliance function. Priorities include:

  1. Modernising identity verification: Implementing liveness detection, biometrics, and verified credentials for real-time, reliable verification.
  2. Enhancing authorisation controls: Deploying PBAC and adaptive authentication to ensure only legitimate users gain access under appropriate conditions.
  3. Integrating verified credentials across customer journeys: Empowering users with secure, shareable digital IDs that reduce exposure to identity theft.

A continuous battle

As AI continues to evolve, so too will the tools of cybercriminals. Deepfake fraud is not a passing trend but rather a persistent and evolving threat.

Financial institutions must adopt a posture of constant vigilance and innovation, investing in technologies and strategies that anticipate and neutralise new attack vectors.

In this arms race between defenders and fraudsters, the institutions that succeed will be those that fuse human intelligence with cutting-edge technology, creating a digital fortress rooted in identity integrity and adaptive risk management.

Johan Fantenberg
Johan Fantenberg is Product and Solution Director at Ping Identity and has more than 30 years of experience in the IT, telecommunications and financial services markets. During this time he has worked with iconic and industry-defining companies such as Ericsson and Sun Microsystems, and engaged with a variety of partners, including system integrators and software vendors. Johan has been active in international standardisation efforts, architecture development, and solution design and delivery, and has contributed to closing significant multi-year deals, establishing ongoing partnerships and identifying new market opportunities. He enjoys disruptive technologies, seeking out new business models, interacting with start-up companies and formulating strategies, architectures and approaches that disrupt the status quo.