The email looks real. It sounds like your boss. Thinking it might be urgent, you click the link. And just like that, it’s over.
This is the new face of cybercrime. Powered by AI, it no longer requires technical prowess or heavy coding. All you need is the right tool and the right prompt: something that can scrape public data and mimic human behaviour.
AI has indeed become a force multiplier. For cybercriminals, this is a dream come true. It’s like hiring a tireless assistant that can exploit vulnerable individuals and organisations anytime, anywhere. For organisations, the security risks have just become broader and more complex. What used to be sketchy emails are now polished, clean, and frighteningly personal. The most dangerous part? That same tool watches, learns, and adapts, making new defence strategies harder to develop.
Cybercrime has always been about exploiting trust. But AI takes that exploitation to a new level. In a recent interview, Simon Hodgkinson of Semperis cited a case in which AI was used to impersonate a company’s CFO on a video conference call, resulting in the transfer of $25 million to a fraudulent account. This, he says, is the next scammer’s paradise.
“Deepfakes are incredibly good now. It only takes a few seconds of recorded audio to create a deepfake voice, and not much more to generate video. So I think we’ll see a lot more activity in that space. I’m really concerned because I think this is the next scammer’s paradise,” he warned.
This reveals a new defence gap. Unlike traditional attacks, AI-driven threats are less about breaking firewalls and more about breaking confidence: confidence in an organisation’s communications, systems, and leadership. That is not something the newest and smartest technologies can fix alone; it demands a cultural reset from the people inside the organisation.
The mistake organisations still make today is treating cybersecurity as an IT-team problem. Many boards and executives still lack the literacy to lead when a breach occurs, and many non-IT employees still don’t understand what’s at stake when one does. The conversation needs to shift.
Instead of asking, ‘What technology do we need?’, organisations should start asking: ‘Do our leaders understand the risks? Do our employees know what to look out for? Have we built a culture that is ready and resilient?’
So, what should teams do?
First, start at the top. Give your executives and board members more than dashboards and incident reports. Help them understand the threat landscape, specifically the risks and the possibilities, so they can lead confidently and make informed cybersecurity decisions when things go sideways.
Second, focus on and invest in your people. Don’t just train them; equip them with the basics, like spotting phishing emails and social engineering tactics. More importantly, help them understand the ‘whys.’ Why are they being targeted? Why does this email look fake? Why does organisational accountability matter in cybersecurity? When employees understand this, they see themselves as part of the organisation’s security perimeter.
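To make those ‘whys’ concrete, here is a minimal sketch of the kinds of red flags awareness training teaches people to check before clicking: a Reply-To address pointing somewhere other than the sender, link text that hides a different destination, and pressure language. The keyword list, scoring, and sample email are illustrative assumptions, not a production filter.

```python
import re
from urllib.parse import urlparse

# Illustrative pressure words only; a real filter would use a far richer model.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "invoice"}

def domain(url: str) -> str:
    """Extract the host from a URL, lower-cased (empty string if none)."""
    return urlparse(url).hostname or ""

def phishing_indicators(sender: str, reply_to: str, body: str) -> list[str]:
    """Return human-readable red flags found in a simplified email."""
    flags = []

    # 1. Reply-To domain that differs from the sender's domain.
    if reply_to and sender.split("@")[-1] != reply_to.split("@")[-1]:
        flags.append(f"Reply-To domain differs from sender: {reply_to}")

    # 2. Link text that shows one domain but points at another,
    #    e.g. <a href="http://evil.example">yourbank.example</a>.
    for href, text in re.findall(r'href="([^"]+)"[^>]*>([^<]+)<', body):
        shown = re.search(r"[\w.-]+\.\w{2,}", text)
        if shown and domain(href) and shown.group(0).lower() not in domain(href):
            flags.append(f"Link text '{text.strip()}' hides target {domain(href)}")

    # 3. Pressure language typical of social engineering.
    hits = URGENCY_WORDS & set(re.findall(r"[a-z]+", body.lower()))
    if len(hits) >= 2:
        flags.append(f"Urgency language: {sorted(hits)}")

    return flags

if __name__ == "__main__":
    body = ('<p>Your account is suspended. Verify immediately: '
            '<a href="http://login.evil.example/reset">yourbank.example</a></p>')
    for flag in phishing_indicators("it@yourbank.example", "help@evil.example", body):
        print("FLAG:", flag)
```

The point is not the script itself; it is that each check maps directly onto a question an employee can ask before they click.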
Finally, use AI. If adversaries are leveraging it to level up their game, defenders should use it to sharpen their defences: to detect threats, automate responses, and flag anomalies. With the space no longer defined by firewalls or endpoints but by adaptability, there’s no other choice but to include AI in our defence playbook.
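As an illustration of what ‘flagging anomalies’ can look like in practice, here is a minimal sketch that baselines login telemetry with scikit-learn’s IsolationForest and surfaces outliers. The feature set (login hour, data transferred, failed attempts), the synthetic data, and the contamination rate are assumptions for demonstration, not tuned values.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" telemetry: office-hours logins, modest transfers,
# the occasional failed attempt. Real systems would use live event logs.
normal = np.column_stack([
    rng.normal(13, 2, 500),   # login hour (clustered around midday)
    rng.normal(50, 15, 500),  # MB transferred per session
    rng.poisson(0.2, 500),    # failed login attempts before success
])

# Learn the baseline; contamination=0.01 assumes ~1% of events are odd.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new events: a 3 a.m. login moving 900 MB after 6 failed attempts
# should stand out against the learned baseline.
events = np.array([
    [14.0, 55.0, 0],   # routine session
    [3.0, 900.0, 6],   # suspicious session
])
for event, verdict in zip(events, model.predict(events)):
    label = "ANOMALY" if verdict == -1 else "ok"
    print(f"hour={event[0]:>5} mb={event[1]:>6} fails={int(event[2])} -> {label}")
```

A flagged event is a prompt for a human analyst, not a verdict; the value of this approach is that the model adapts as the baseline shifts, which is exactly the adaptability the new threat landscape demands.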