Cyber teams are currently stretched, and cyber-attacks are becoming increasingly sophisticated and frequent. There's a compelling need to better leverage Artificial Intelligence (AI) capabilities in cyber: automating threat detection, analysing vast amounts of data for anomalies, and responding to potential breaches in real time. As well as providing a more effective response, this also frees up capacity within the cyber workforce for proactive, high-value initiatives.
But one of the challenges is that the data needed to run effective cyber defence algorithms is highly fragmented. Although AI is everywhere, from computers and firewalls to toasters and toothbrushes, it is often a “black box”, and frequently operates independently and discretely with little to tie it together. This obstructs its successful application in cyber: small independent pockets of data, frameworks and algorithms automate cyber processes in isolation from each other.
Disconnected cyber capabilities
Despite the potential of AI in cyber, many organisations face the challenge of having AI capabilities that are disconnected and isolated. These “pockets” of AI, often developed for specific functions like malware detection or network monitoring, don't communicate or integrate effectively with each other.
This lack of integration results in inefficiencies and a fragmented approach to cyber security, where threats are missed and valuable insights are lost or delayed.
For example, consider a business that uses separate AI-powered cyber tools for detecting phishing emails, monitoring network traffic and identifying endpoint vulnerabilities. If these tools operate independently, they may miss coordinated attacks that exploit weaknesses across these areas.
An example of this is the 2013 data breach at US retailer Target, in which around 40 million credit card numbers were stolen. Although Target had multiple security tools in place, they weren't integrated. So while the system flagged the malware used in the breach, the alert went unnoticed because it wasn't escalated properly across the rest of the fragmented security architecture. As a result, Target's security team didn't fully investigate and respond in time to prevent the attackers from moving further within the network.
Similarly, in the 2017 Equifax breach, which exposed the personal information of more than 140 million people, a critical gap went unaddressed for months because a certificate hadn't been renewed on one of the multiple security tools in use. This meant that encrypted traffic wasn't being inspected. The lack of a unified system to ensure comprehensive patch management and vulnerability scanning allowed attackers to exploit the gap.
By integrating AI tools and systems, it becomes much easier to detect complex, multi-vector attacks. If the phishing-detection AI identifies a suspicious email, it can alert the network-monitoring AI to look for related traffic patterns. This comprehensive approach is vital for effective cyber security.
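To make the idea concrete, here is a minimal sketch, assuming a simple in-memory publish/subscribe hub. The AlertBus class, the Alert fields and the "phishing-detector" source name are all hypothetical illustrations rather than any particular product's API; the point is that one tool's finding is broadcast so another tool can act on it.

```python
# Minimal sketch (hypothetical class and field names) of detectors sharing
# alerts over a common event bus so findings can be correlated, rather than
# each tool acting on its own signals in isolation.
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Alert:
    source_tool: str      # e.g. "phishing-detector"
    indicator: str        # e.g. a sender domain or file hash
    severity: str
    context: dict = field(default_factory=dict)


class AlertBus:
    """In-memory publish/subscribe hub that every AI tool reports into."""

    def __init__(self) -> None:
        self._subscribers: list[Callable[[Alert], None]] = []

    def subscribe(self, handler: Callable[[Alert], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, alert: Alert) -> None:
        for handler in self._subscribers:
            handler(alert)


def network_monitor_handler(alert: Alert) -> None:
    # The network-monitoring tool reacts to findings raised elsewhere,
    # e.g. by watching for traffic to a domain flagged in a phishing email.
    if alert.source_tool == "phishing-detector":
        print(f"Watching traffic related to indicator: {alert.indicator}")


bus = AlertBus()
bus.subscribe(network_monitor_handler)
bus.publish(Alert("phishing-detector", "malicious-login.example.com", "high"))
```

In a real estate of tools the bus would be a message queue or the vendor platform's own integration layer, but the design principle is the same: alerts are shared events, not private verdicts.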
Strategies for integration
How can we understand, plan and architect the many independent pockets of AI-powered cyber capability deployed across our digital ecosystems so that they work together and share insights?
The first step is a unified data platform, which collects and consolidates data from different sources and AI systems, enabling AI tools to analyse it cohesively. All AI tools are then working with the same data set rather than making isolated decisions.
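As a rough sketch of what that consolidation might look like, assuming a hypothetical common schema (the field names source, observed_at, entity and signal are illustrative, not any standard), events from different tools are normalised into one shared store that every model reads from:

```python
# Illustrative sketch (field names are assumptions) of consolidating events
# from separate tools into one normalised store shared by every AI model.
from datetime import datetime, timezone

UNIFIED_EVENTS: list[dict] = []   # stand-in for a shared data platform or lake


def normalise(source: str, raw_event: dict) -> dict:
    """Map a tool-specific event into one common schema."""
    return {
        "source": source,
        "observed_at": raw_event.get("time", datetime.now(timezone.utc).isoformat()),
        "entity": raw_event.get("host") or raw_event.get("mailbox"),
        "signal": raw_event.get("verdict") or raw_event.get("category"),
        "raw": raw_event,  # keep the original payload for later investigation
    }


def ingest(source: str, raw_event: dict) -> None:
    UNIFIED_EVENTS.append(normalise(source, raw_event))


# Two different tools feeding the same platform.
ingest("email-security", {"mailbox": "finance@corp.example", "verdict": "phishing"})
ingest("edr", {"host": "laptop-042", "category": "credential-dumping"})

# Any AI model can now reason over a single, consistent data set.
print(len(UNIFIED_EVENTS), "events in the unified store")
```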
It's also vital to establish interoperability standards for AI systems, to facilitate communication and data sharing between different AI tools. The Open Cybersecurity Alliance (OCA) aims to create a common language and framework for cybersecurity tools, allowing them to interoperate without the need for custom integrations.
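The sketch below is illustrative only: it hand-builds a simplified STIX-2.1-style indicator object (in practice a dedicated library, or the standard's full schema, would be used, and the indicator's name and pattern here are invented examples). It shows how a finding expressed in a shared format can be passed between tools without a bespoke integration for each pair.

```python
# Hedged sketch: a simplified, hand-built STIX-2.1-style indicator, used here
# to illustrate exchanging findings in a common format between tools.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).isoformat()

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspected phishing sender domain",          # illustrative example
    "pattern": "[domain-name:value = 'malicious-login.example.com']",
    "pattern_type": "stix",
    "valid_from": now,
}

# Any tool that understands the shared format can parse and act on this alert.
print(json.dumps(indicator, indent=2))
```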
AI tools should also be managed centrally, under a single platform or system. This means carefully selecting technology providers to ensure you can achieve a truly integrated outcome with your cybersecurity architecture.
Another way to enhance cybersecurity effectiveness is by developing collaborative AI models that share insights and learnings. For example, federated learning is a technique where AI models learn from data across multiple locations without sharing the data itself. This enables organisations to benefit from collective intelligence while maintaining data privacy.
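A highly simplified sketch of the federated-averaging idea follows, using a toy linear model and made-up data purely for illustration: each site trains on its own data and shares only model parameters, which are averaged into a global model, so raw security telemetry never leaves the organisation.

```python
# Minimal federated-averaging sketch (illustrative only): sites share model
# parameters, not data, and a global model is built by averaging them.


def local_update(global_weights: list[float],
                 local_data: list[tuple[list[float], float]],
                 lr: float = 0.01) -> list[float]:
    """One pass of gradient descent on a simple linear model, using local data only."""
    weights = list(global_weights)
    for features, label in local_data:
        prediction = sum(w * x for w, x in zip(weights, features))
        error = prediction - label
        weights = [w - lr * error * x for w, x in zip(weights, features)]
    return weights


def federated_average(updates: list[list[float]]) -> list[float]:
    """Average the parameter updates returned by each participating site."""
    return [sum(ws) / len(ws) for ws in zip(*updates)]


global_model = [0.0, 0.0]
site_a_data = [([1.0, 0.2], 1.0), ([0.9, 0.1], 1.0)]   # one organisation's labelled alerts
site_b_data = [([0.1, 0.8], 0.0), ([0.2, 0.9], 0.0)]   # another organisation's labelled alerts

for _ in range(10):  # a few federated rounds
    updates = [local_update(global_model, data) for data in (site_a_data, site_b_data)]
    global_model = federated_average(updates)

print("Global model weights:", global_model)
```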
As well as the technology side, the people side is also critical. Creating cross-functional teams that include AI experts, cybersecurity professionals and IT staff will ensure that AI tools are developed and deployed with a holistic view of an organisation’s security needs. Leadership is important in driving this integration and aligning AI, cybersecurity and IT teams. Cyber is ultimately a business problem, not an IT problem, and requires the collaboration of staff across all functions.
AI can be a powerful enabler for cyber and risk-related decision making, but it needs to be considered purposefully and holistically in order to realise its potential. Fragmented systems cannot work effectively; they create vulnerabilities and gaps in defences that cyber attackers can exploit.