AI has been making waves for years now. It has moved from the pages of science fiction into the control rooms of our defence and security agencies and critical infrastructures, watching over networks and inspecting traffic.
Why wouldn’t it? It promises to work faster than humans and to spot the tiniest anomalies. For defence and cybersecurity, where every second counts, that is a tempting offer.
The Transition to AI
The push toward AI stems from the needs and demands of the cybersecurity space. Threats are multiplying at a rate that human teams alone cannot keep up with. Traditional tools rely on known patterns, so they cannot keep pace with fast-moving, highly adaptive adversaries.
This is where AI comes in. With systems that can learn from and adapt to their environment, spot patterns, and flag suspicious behaviours, AI promises something genuinely different. Teams can now move beyond reacting to threats and begin predicting and preventing them.
How It Works
Most AI systems used in defence combine machine learning, behavioural analytics, and automated response tools. Together, these let a system scan enormous amounts of data in real time and flag anomalies or suspicious behaviour, cutting critical minutes from response times.
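The behavioural-analytics idea can be illustrated with a minimal sketch: learn a statistical baseline from historical readings, then flag any new reading that deviates sharply from it. The metric (hourly outbound traffic), the sample values, and the threshold below are all hypothetical, chosen only to show the pattern; real systems use far richer models.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn a simple baseline (mean, standard deviation) from historical data."""
    return mean(history), stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from the baseline."""
    mu, sigma = baseline
    return sigma > 0 and abs(value - mu) / sigma > threshold

# Hypothetical baseline: 24 hourly outbound-traffic readings (MB) for one host.
history = [118, 122, 130, 115, 125, 120, 128, 117,
           121, 124, 119, 126, 123, 116, 129, 122,
           120, 118, 127, 125, 121, 119, 124, 123]
baseline = build_baseline(history)

print(is_anomalous(124, baseline))  # an ordinary reading -> False
print(is_anomalous(950, baseline))  # a sudden spike -> True
```

Because the baseline is learned from the data rather than hand-written as a signature, the same check adapts as normal behaviour drifts, which is the core advantage over fixed pattern matching.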
As emphasised in our 2024 report, security teams can leverage AI to close the skills gap, speed up threat detection, and automate incident response.
The Role of Human Expertise
Even the best AI system is still just a tool. However advanced its features, AI is not a plug-and-play replacement for human judgment. Defence analysts still need to review its findings, decide on next steps, and apply the judgment the context demands.
Nor can teams blindly trust the calls AI makes. Yes, it speeds up response times, but it often cannot explain how it reached a conclusion. The safest division of labour lets AI handle the repetitive tasks while humans keep control of the critical decisions.
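That division of labour can be expressed as a simple triage rule: auto-handle only routine, high-confidence alerts, and escalate everything else to an analyst. The field names, severity labels, and confidence threshold here are illustrative assumptions, not any particular product's API.

```python
def triage(alert, confidence, auto_threshold=0.95):
    """Route an alert: automate the routine, escalate the critical or ambiguous.

    `alert` is a dict with a hypothetical "severity" field;
    `confidence` is the model's score for its own verdict (0.0-1.0).
    """
    if alert["severity"] == "low" and confidence >= auto_threshold:
        return "auto-contain"        # repetitive, well-understood case
    return "escalate-to-analyst"     # critical or uncertain: a human decides

print(triage({"severity": "low"}, 0.99))   # auto-contain
print(triage({"severity": "high"}, 0.99))  # escalate-to-analyst
print(triage({"severity": "low"}, 0.60))   # escalate-to-analyst
```

Note that high severity escalates regardless of confidence: the point is that automation bias is designed out by policy, not left to the analyst's discipline.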
The Risks of Overreliance
AI can be a valuable ally, but relying on it too heavily creates its own problems. Automation bias, our tendency to defer to an automated system even when it is wrong, can lead to missed or mishandled incidents.
And it’s not only defenders making use of AI. Adversaries are leveraging it too, building tools that slip past detection or even feed false data into defensive systems. Without proper use and, most importantly, human oversight, the technology meant to protect us could be turned against us.
Finding the Balance
AI gives defence teams an edge, but it isn’t a magic fix. It is most effective when treated as a partner, not a cure-all. Put simply, AI is powerful, but without the right people guiding it, it’s just another system waiting to be outsmarted.