Introduction
Australia has seen a growing procession of deepfakes causing real-world harm. One Australian man lost $80,000 in cryptocurrency after viewing a deepfake video of a public figure endorsing an investment that was in fact a scam – a common nefarious early use of the technology. Deepfakes have also entered Australian politics in recent months, and there are concerns – among politicians, academics and think tanks alike – that the threat will grow considerably in the coming year.
However, it's the case of a corporate finance team member being duped into paying out HK$200m – almost A$40m – by a deepfake of their employer's own chief financial officer that has really focused enterprise attention on the threat this technology poses. The attack was both elaborate and sophisticated: when the employee sought to check the veracity of the CFO's apparent funds transfer request, the attackers staged a synthesised multi-party video conference call populated with deepfaked colleagues.
If this could happen in a large multinational, the question Australian organisations are asking is: could it also happen to us?
They didn't have to wait long for an answer – 52% of the 700 global IT decision-makers surveyed by Ping Identity, including respondents from Australia, said they were not confident they could spot a deepfake of their CEO.
This is a measure both of the gravity of the current situation and of the challenge organisations face in bolstering their defences against AI threats.
The situation has reached this point because of rapid advances in AI technology, which allow the creation of deepfake videos that imitate gestures and facial expressions, and clone voice and tone from as little as 15 seconds of real audio. This has made it harder to distinguish genuine content from fraudulent content – and is causing people to question the validity of everything they see and hear.
Things could get worse before they get better. Ping's survey found that 54% of respondents are very concerned that AI technology will increase identity fraud, and 41% expect cybercriminals' use of AI to significantly increase identity threats over the next year. In addition, some 48% of respondents are not very confident they have technology in place to defend against AI attacks.
The question becomes: what can Australian organisations do to defend against this emerging class of AI-enabled threats?
Set Clear Guidelines
There's an immediate need for concise policies and guidelines around how executives communicate within their organisations. The Hong Kong case illustrates why these processes must be clear and unambiguous.
CEOs and their teams should develop clear instructions that assure staff they won't be asked to fulfil unusual or unexpected requests. When an unusual request is received, employees should know how to report it, and to whom, so it can be verified in accordance with internal policies – the sketch after this section shows one way such routing might be encoded.
Nor should organisations expect deepfakes only of C-level executives. Over time, threat actors are likely to shift from impersonating CEOs and CFOs to posing as other staff, from frontline managers to business unit leaders. Organisational preparedness is non-negotiable as the prospect of anyone in the organisation being deepfaked becomes a reality.
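One way to make such a policy actionable is to encode the escalation path in workflow tooling. The following minimal Python sketch is purely illustrative: the request-type list and the `open_verification_ticket` helper are hypothetical stand-ins for whatever case-management system an organisation actually uses.

```python
# Hypothetical routing of "unusual" requests to out-of-band verification.
UNUSUAL_REQUEST_TYPES = {"funds_transfer", "credential_reset", "data_export"}

def open_verification_ticket(requester: str, request_type: str, details: str) -> str:
    """Placeholder for the organisation's real case-management system.

    In practice this would notify the designated verification contact
    through a known, trusted channel rather than printing to stdout.
    """
    print(f"Verification needed: {request_type} from {requester} – {details}")
    return "pending_verification"

def route_request(request_type: str, requester: str, details: str) -> str:
    """Park anything on the unusual-request list until it is verified.

    Staff are not asked to judge authenticity themselves; the policy
    decides which requests must be independently confirmed.
    """
    if request_type in UNUSUAL_REQUEST_TYPES:
        return open_verification_ticket(requester, request_type, details)
    return "processed"

# Example: a transfer request "from the CFO" is parked, not paid.
print(route_request("funds_transfer", "cfo", "Urgent: wire funds to new vendor"))
```

The point of the design is that the employee never has to decide whether a request is genuine in the moment; the policy forces certain request types into a verification queue by default.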
Verify, Verify, Verify
To avoid falling victim to a deepfaked video or likeness, organisations should employ a multi-layered approach to authenticating identity internally. Attackers who use deepfakes rely on a lack of checks and balances in the authentication environment; a single form of authentication is not an effective deterrent.
To stay ahead, businesses should implement multiple layers of authentication. A holistic approach to identity – in which authentication, authorisation and governance work together, along with layered intelligence and AI – will effectively counter today's threats. The sketch below illustrates the principle of stepping up authentication requirements as risk rises.
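As an illustration only – not any vendor's implementation – here is a minimal Python sketch of risk-based step-up authentication. The signal names, thresholds and factor labels are all assumptions made for the example; a real deployment would delegate these decisions to an identity provider's policy engine.

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    user_id: str
    device_known: bool         # device previously enrolled by this user
    geo_matches_history: bool  # location consistent with past activity
    request_amount: float      # transaction value attached to the request

def risk_score(ctx: RequestContext) -> int:
    """Combine simple contextual signals into a coarse risk score."""
    score = 0
    if not ctx.device_known:
        score += 2
    if not ctx.geo_matches_history:
        score += 2
    if ctx.request_amount > 100_000:  # assumed policy threshold
        score += 3
    return score

def required_factors(ctx: RequestContext) -> list[str]:
    """Step up authentication requirements as contextual risk rises."""
    factors = ["password"]
    score = risk_score(ctx)
    if score >= 2:
        factors.append("hardware_token")        # phishing-resistant factor
    if score >= 5:
        factors.append("out_of_band_callback")  # live human verification
    return factors

# A high-risk request triggers every layer, including a human callback.
ctx = RequestContext("f.lee", device_known=False,
                     geo_matches_history=False, request_amount=250_000)
print(required_factors(ctx))  # ['password', 'hardware_token', 'out_of_band_callback']
```

The key property is that no single signal or factor is trusted on its own: a convincing deepfake may defeat one layer, but it cannot simultaneously present an enrolled device, a plausible location and a live out-of-band confirmation.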
They are also well advised to continuously analyse contextual risk signals, and to consider strong multi-party approval processes for significant business transactions, among other measures.
With some or all of these options in place, unusual internal requests can be verified more stringently, and privileged internal users – such as those in charge of finance – gain more tools to carry out their business-critical work safely. A minimal sketch of a multi-party approval check follows.
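The multi-party approval idea can be expressed in a few lines. This is a hedged sketch with assumed thresholds and role names, not a production control:

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 50_000   # assumed policy limit for large transfers
REQUIRED_APPROVERS = 2        # distinct humans who must sign off

@dataclass
class TransferRequest:
    requester: str
    amount: float
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The requester can never approve their own transfer.
        if approver == self.requester:
            raise PermissionError("Requester cannot self-approve")
        self.approvals.add(approver)

    def is_authorised(self) -> bool:
        # Small transfers proceed normally; large ones need multiple
        # independent approvals, so a single deepfaked "executive"
        # request is not enough to move funds.
        if self.amount < APPROVAL_THRESHOLD:
            return True
        return len(self.approvals) >= REQUIRED_APPROVERS

# Example: a request mimicking the Hong Kong scenario stays blocked
# until two colleagues, reached through known channels, sign off.
req = TransferRequest(requester="cfo_video_call", amount=200_000)
assert not req.is_authorised()
req.approve("finance_controller")
req.approve("treasury_lead")
assert req.is_authorised()
```

Because authorisation depends on multiple independent humans rather than one convincing interaction, an attacker would need to deceive several people through separate trusted channels to succeed.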
Keep Refreshing and Building Internal Skills and Capability
Awareness training is important across a range of cybersecurity domains, and it has a role to play in giving people the confidence to recognise the warning signs of a deepfake campaign or other identity theft techniques being used against them.
Regular reinforcement training ensures employees are less likely to become complacent and leave the organisation vulnerable to deepfakes and other digital security threats.
Ongoing cybersecurity awareness training is also essential to foster an environment in which staff feel encouraged and able to report anything suspicious they receive. This collective alertness reduces the risk of a deepfake or other AI-enabled identity threat succeeding.