Introduction
Generative AI (GenAI) is actively reshaping the way attackers and defenders operate in Australia. Threat actors have weaponised GenAI to synthesise text, code, voice, and video at scale, fuelling a rise in impersonation scams and malware campaigns.
In response to these evolving threats, security teams have also adopted GenAI tools to automate and streamline their cyber defences. This dual use of AI is evident in the data. The Australian Taxation Office has reported that impersonation scam reports are up by more than 300 per cent compared with this time last year, a vivid illustration of how cheaply and convincingly threat actors can now manufacture seemingly legitimate communications. Meanwhile, Australian organisations are rapidly adopting AI, particularly in the security domain. New data from an OpenText–Ponemon study shows that 51 per cent of Australian organisations have already embedded AI into their IT and business strategies, while more than half (54 per cent) report that reducing AI security and legal risks is “very” or “extremely” difficult. This adoption is increasingly viewed as essential to strengthening detection, response, and resilience in the face of rising attack volumes.
Organisations must scale up their security efforts quickly. Reports indicate that many organisations have seen attack volumes double or triple as adversaries integrate AI into their toolkits. At the same time, Australia faces a chronic shortage of skilled cyber professionals, leaving teams without the capacity to handle increasingly complex workloads and threats.
Making AI Work for Your Cybersecurity Team
Security is, at its core, a data problem, even when it is enhanced by AI. Real-time access to high-quality, contextual data, both structured and unstructured, is critical if GenAI is to bolster cybersecurity defences. Visibility into its own data allows an organisation to identify and respond to threats across its attack surface quickly, drawing on accurate, complete proprietary data that provides business-specific context. This reduces false positives, enabling cybersecurity teams to focus on genuine threats and vulnerabilities.
With a unified platform that combines AI with search technology, an organisation can run detection, investigation, and response at scale without having to move or duplicate its data, markedly strengthening its security posture.
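To make the idea of business context reducing false positives concrete, here is a minimal, illustrative Python sketch. The asset inventory, field names, and scoring values are assumptions invented for the example rather than any particular product's API; the point is simply that identical detections rank differently once proprietary context is joined in.

```python
# Illustrative sketch only: enriching raw alerts with business context
# (asset criticality, ownership) before they reach an analyst queue.
# Data shapes and thresholds are hypothetical, not a vendor API.

from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    rule: str
    severity: int  # 1 (low) .. 5 (critical) as reported by the detection rule

# Hypothetical proprietary context: which hosts matter most to the business.
ASSET_CONTEXT = {
    "payroll-db-01": {"criticality": "crown-jewel", "owner": "finance"},
    "kiosk-17":      {"criticality": "low",         "owner": "facilities"},
}

def triage_priority(alert: Alert) -> int:
    """Combine detection severity with business context to rank alerts."""
    context = ASSET_CONTEXT.get(alert.host, {"criticality": "unknown"})
    boost = {"crown-jewel": 3, "high": 2, "unknown": 1, "low": 0}[context["criticality"]]
    return alert.severity + boost

alerts = [
    Alert(host="kiosk-17", rule="suspicious-login", severity=3),
    Alert(host="payroll-db-01", rule="suspicious-login", severity=3),
]

# Identical detections rank differently once business context is applied,
# pushing the likely false positive down the queue.
for a in sorted(alerts, key=triage_priority, reverse=True):
    print(a.host, triage_priority(a))
```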
But technology alone is not enough — even the most advanced AI systems rely on skilled people to interpret outputs, validate alerts, and make critical decisions.
Addressing Talent Gaps and Operational Readiness
Generative AI can handle repetitive tasks, such as triage, drafting investigation notes, or correlating alerts, freeing analysts to focus on more complex threats. But it cannot substitute for human problem-solving or adversarial creativity. With a chronic shortage of cybersecurity talent in ANZ, organisations must continue investing in their teams through rotational programmes, AI-enhanced training exercises, and ongoing upskilling. Teams that combine automation with expertise build resilience; those that don’t risk magnifying their weaknesses.
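As a rough illustration of what "drafting investigation notes" can look like in practice, the sketch below asks an LLM to summarise a handful of correlated alerts into a first-pass note. It assumes an OpenAI-compatible API with a key available in the environment; the model name, prompt, and alert text are placeholders, and the analyst still reviews and owns the final assessment.

```python
# Minimal sketch: using an LLM to draft a first-pass SOC investigation note
# from correlated alerts. Assumes an OpenAI-compatible API and OPENAI_API_KEY
# set in the environment; model name and prompt are illustrative only.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

correlated_alerts = [
    "08:12 phishing email reported by finance user",
    "08:19 same user's account signs in from an unfamiliar network",
    "08:23 mailbox rule created forwarding invoices externally",
]

prompt = (
    "Draft a brief investigation note for a SOC analyst. "
    "Summarise the likely attack chain and list next verification steps.\n\n"
    + "\n".join(correlated_alerts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You draft concise SOC investigation notes."},
        {"role": "user", "content": prompt},
    ],
)

draft_note = response.choices[0].message.content
print(draft_note)  # the analyst reviews, edits, and signs off before any action
```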
Regulation and Collaboration
Regulation and public expectations are moving in tandem. Organisations are increasingly required to make their AI systems explainable, auditable, and transparent. That means analysts can understand why a model raised an alert or suggested a response, AI decisions are traceable for accountability, and data sources, model training processes, and governance policies are well documented. Embedding these practices is critical not only for regulatory compliance but also for building trust with clients, stakeholders, and internal teams.
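One simple way to ground "traceable for accountability" is an append-only audit record for every AI-assisted decision. The sketch below is illustrative only: the field names and storage format are assumptions, not a regulatory standard, but they show the kind of evidence trail auditors and analysts can work from.

```python
# Illustrative sketch: every model recommendation is written to an append-only
# audit log with the model version, a digest of the input, and the analyst's
# final call. Field names and storage are assumptions, not a standard.

import hashlib
import json
import time

def record_ai_decision(log_path: str, model_version: str, alert_payload: dict,
                       recommendation: str, analyst_decision: str) -> None:
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        # Hash rather than store the raw input, so the record is verifiable
        # without duplicating sensitive telemetry.
        "input_sha256": hashlib.sha256(
            json.dumps(alert_payload, sort_keys=True).encode()
        ).hexdigest(),
        "recommendation": recommendation,
        "analyst_decision": analyst_decision,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_ai_decision(
    "ai_decisions.jsonl",
    model_version="triage-model-2024-06",
    alert_payload={"host": "payroll-db-01", "rule": "suspicious-login"},
    recommendation="escalate",
    analyst_decision="escalated to incident response",
)
```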
Collaboration is essential. AI-driven attacks are often repeatable and cross-industry, making the sharing of indicators, behaviours, and defensive playbooks crucial. Platforms that enable fast searching and correlation across telemetry allow teams to operationalise this intelligence in near real time, raising the baseline for everyone (a simple sketch of this appears below).

Procurement practices also need to evolve. Organisations should demand clarity on how AI models are trained, where the data resides, how it is retained, and what checks are in place for bias or error. Boards must understand how AI informs security decisions, how failures are detected, and how vendors are stress-tested. Treating these questions as due diligence separates responsible adopters from those taking unnecessary risks.
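The sketch below illustrates the collaboration point: matching indicators shared by peers against local telemetry. The indicator feed, log fields, and addresses (drawn from documentation ranges) are hypothetical; in practice this correlation would run continuously across far larger data sets.

```python
# Minimal sketch: operationalising shared threat intelligence by matching
# peer-published indicators against local telemetry. Feed format and log
# fields are hypothetical.

shared_indicators = {
    "203.0.113.45",   # documentation-range IPs used as stand-ins
    "198.51.100.7",
}

local_dns_and_proxy_logs = [
    {"ts": "2024-06-01T02:14:00Z", "src": "kiosk-17",   "dest_ip": "198.51.100.7"},
    {"ts": "2024-06-01T02:15:00Z", "src": "laptop-042", "dest_ip": "192.0.2.10"},
]

matches = [event for event in local_dns_and_proxy_logs
           if event["dest_ip"] in shared_indicators]

for hit in matches:
    print(f"Shared indicator observed locally: {hit['src']} -> {hit['dest_ip']} at {hit['ts']}")
```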
In Summary
Generative AI will not make cybersecurity effortless. But if approached with planning and discipline, it can change the balance of power. Organisations that pair automation with rich data, skilled people, and strong governance will detect attacks faster, free analysts for higher-value work, and maintain control even as adversaries scale. Those that adopt AI without structure will be overwhelmed by noise, blind spots, and regulatory challenges. For Australian enterprises, the path is clear. Generative AI will define both attack and defence in the years ahead. Success will go to organisations that treat it as part of a broader strategy — one that values human expertise, prioritises transparency, and builds resilience into every layer of security.