Introduction
The arrival of generative AI, with its seemingly endless possibilities, also brought a myriad of sophisticated threats. And as organisations move to increasingly complex digital environments and struggle to analyse the exponential growth of security data, SIEM solutions are evolving to harness the power of generative AI. Yet, given the potential risks, many businesses are stuck on the question: to implement or not to implement generative AI?
To answer that question, we'll need to look at multiple factors.
The Evolution Of SIEM
To visualise what the future could look like with generative AI, we need to take a quick trip to the past. In response to growing network traffic, SIEM first came on the scene in the early 2000s. It was the first time security practitioners combined information and event management in one comprehensive strategy. To keep up with the changing landscape, SIEM evolved to meet the need for a tool that identifies genuine threats in real time. Gathering and sorting through thousands of security alerts generated by other security tools, such as firewalls, antivirus software, and intrusion detection systems (IDSes), was revolutionary.
Machine learning (ML) has been used in security tools for quite a while, first in anti-malware tools and in broader anomaly detection for our networks and users. Powerful anomaly detection has been a cornerstone of SIEM's evolution, but it has also meant that in modern environments, SIEM has become more akin to an alert factory than a helpful tool. You get an alert: "event identified." How you respond to it is another matter altogether. That responsibility falls to security professionals, of whom there is a shortage. Now, we're attempting to bridge that gap with modern security analytics and generative AI.
The Skill Shortage: A Cybersecurity Vulnerability
The cybersecurity workforce shortage has risen to a record high of just under 4 million, despite the total global cybersecurity workforce growing by almost 10% in the last year. The number of cybersecurity professionals just can't keep up with the increasing demand for their skills.
The reasons for the shortage are layered. Existing cybersecurity professionals are seeing more complex workloads, smaller teams, and lower budgets, combined with an increasingly dangerous threat landscape and complicated regulatory and compliance protocols. Smaller budgets also significantly impact teams' ability to bring on new entrants to cybersecurity and build their organisational pipeline. Additionally, there's a misperception that practitioners need a technical background to get into cybersecurity, when that is not always the case. This discourages people who come from diverse and nontraditional backgrounds but might make top-notch security analysts.
With generative AI, organisations can help bridge the labour shortage gap while facing an evolving threat landscape. By combining generative AI's data processing capabilities with proprietary data served up by a powerful search engine through retrieval-augmented generation (RAG), you no longer need specific domain knowledge to perform certain business-critical tasks: the AI does that for you.
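At its core, RAG pairs a retrieval step with a generation step: the most relevant internal documents are fetched first and then passed to the model as context. Below is a minimal sketch of that pattern in Python; `search_index` and `llm` are hypothetical placeholders for a search client and a generative model client, not any specific product's API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# `search_index` and `llm` are hypothetical placeholders, not real library APIs.

def answer_with_rag(question: str, search_index, llm, k: int = 5) -> str:
    # 1. Retrieve the k most relevant internal documents (runbooks,
    #    past incident notes, detection rules) for the analyst's question.
    documents = search_index.search(query=question, size=k)

    # 2. Build a prompt that grounds the model in that proprietary context.
    context = "\n\n".join(doc["text"] for doc in documents)
    prompt = (
        "Use only the context below to answer the security question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

    # 3. Generate a grounded answer from the augmented prompt.
    return llm.generate(prompt)
```

The retrieval step is what supplies the domain knowledge; the model only has to reason over what it is given.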
Ultimately, a generative AI-powered conversational search experience means that security teams can embrace the diversity that proves to be their strength. Empowered by access to technical knowledge and capabilities through generative AI, a wider range of professionals are suddenly able to take on cybersecurity roles.
How Generative AI Can Work For Your Cybersecurity Team
You cannot protect what you cannot see. In modern, distributed environments, data volumes continue to expand, and the resulting lack of cross-stream visibility is the biggest challenge facing security professionals. While a unified data platform is vital to address this challenge, generative AI combined with search technology changes the way that IT, cybersecurity, and business users interact with their data across channels.
Generative AI brings conversational search capabilities to organisations. In a security context, this capability can help improve visibility, analytics, and response speed. Whether automated for background analytics or used as a searchable knowledge repository, generative AI enhanced with proprietary data is a powerful tool for a variety of security use cases.
Hereโs how generative AI can work in cybersecurity:
- Force multiplier: Generative AI acts as a force multiplier for existing cybersecurity professionals while making security work more accessible to junior analysts through natural language. One analyst's learnings can easily be accessed by others in natural language rather than code or mathematics.
- Data synthesis: Generative AI can synthesise and analyse vast amounts of threat data, compensating for the limited number of human threat analysts.
- Better detection ability: Models can significantly improve the detection of anomalous behaviours within processes, not just with a single user or device.
- Predictive analysis for proactive defence: Generative AI can better predict and identify potential security vulnerabilities, offering solutions before human experts are even aware of the threats.
- Automated reporting: Generative AI can provide automated feedback and learning for everyone, ensuring todayโs data and insights can be used in the future.
The power of natural language search for improving security resilience cannot be overstated. Back to a common security dilemma: an alert goes off and an event is detected; what's next? In this scenario, a security professional assisted by generative AI can ask the AI assistant to pull the relevant information, best practices, and recommended actions for the next steps. By obtaining a response drawn from a comprehensive context that includes both public and private data related to the issue, practitioners can reduce their time to response and resolution.
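As a sketch of that workflow, the snippet below shows how a single alert could be combined with retrieved runbook excerpts into one triage prompt for the assistant. The alert fields, the runbook snippet, and the prompt wording are illustrative assumptions, not any particular SIEM's schema or API.

```python
# Hypothetical alert-triage prompt: combine one SIEM alert with retrieved
# internal runbook excerpts and ask the assistant for recommended next steps.

def build_triage_prompt(alert: dict, runbook_snippets: list[str]) -> str:
    """Compose a grounded triage prompt from an alert and internal knowledge."""
    alert_summary = (
        f"Rule: {alert['rule_name']}\n"
        f"Host: {alert['host']}\n"
        f"User: {alert['user']}\n"
        f"Details: {alert['description']}"
    )
    knowledge = "\n---\n".join(runbook_snippets)
    return (
        "You are assisting a security analyst.\n\n"
        f"Alert:\n{alert_summary}\n\n"
        f"Relevant internal runbooks:\n{knowledge}\n\n"
        "Summarise the likely threat, list recommended next steps, "
        "and flag anything that requires escalation."
    )

# Illustrative usage with hard-coded data; in practice the snippets would come
# from the same retrieval step shown in the earlier RAG sketch.
example_alert = {
    "rule_name": "Suspicious PowerShell execution",
    "host": "finance-ws-042",
    "user": "j.doe",
    "description": "Encoded command line spawned by a Word process.",
}
example_snippets = [
    "Runbook 7: For encoded PowerShell, capture the decoded command line "
    "and isolate the host if it contacts unknown domains.",
]
print(build_triage_prompt(example_alert, example_snippets))
```

The point is not the exact wording but the pattern: the assistant answers from the organisation's own context rather than from general knowledge alone.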
The Challenges Of Using Generative AI For Cybersecurity
While generative AI is a powerful tool, it also comes with challenges. One of the biggest in cybersecurity is the possibility of hallucinations. How can practitioners trust that the outputs generated by the AI are factual and relevant? RAG is one solution: added context can lead to fewer errors. However, even this isn't a perfect fix.
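A common grounding pattern, sketched below, is to constrain the assistant to the retrieved context and give it an explicit way to decline. The prompt wording and the `documents` structure are assumptions for illustration; this reduces the risk of hallucination but does not eliminate it.

```python
# Guard against hallucination: answer only from retrieved context, and
# decline when nothing relevant was found. Illustrative sketch only.

def grounded_prompt(question: str, documents: list[str]) -> str | None:
    """Build a prompt that forbids answers outside the retrieved context."""
    if not documents:
        # Nothing relevant retrieved: surface that to the analyst rather than
        # letting the model improvise an answer.
        return None
    context = "\n\n".join(documents)
    return (
        "Answer using ONLY the context below. If the context does not contain "
        "the answer, reply exactly: 'Not enough information.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```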
You still need a human to ask the right questions. Though generative AI promises to help alleviate the skills gap and the personnel shortage, you cannot remove people from the loop. A fully functioning threat detection, investigation, and response (TDIR) process must already exist for generative AI to supplement it. AI is not a stand-in for a security operations centre; it's an assistant and an accelerant.
The Future Of Generative AI In Cybersecurity
The Elastic Global Threat Report found that as enterprises transition to cloud-based environments, threat actors are taking advantage of misconfigurations, lax access controls, unsecured credentials, and a lack of adherence to the principle of least privilege (PoLP). The speed and stealth with which threat actors move are also increasing. With the current shortage of cybersecurity professionals, generative AI will tip the scales. If properly implemented and used by cybersecurity professionals, generative AI has the power to counter the attack advantage of malicious actors.
As such, generative AI is undoubtedly shaping the future of the cybersecurity workforce and cybersecurity as we know it. The technology is redefining the skill sets and roles within cybersecurity teams, putting more technical work within reach of users who don't yet have those skills.
So, should you implement generative AI at your organisation? My perspective is that, yes, you should be looking into it. Arming your analysts with tools that counter the skills shortage and defend against a borderless threat landscape will be essential to protecting your organisation as that landscape continues to evolve.