AI Versus AI: Why The Next Cyber Security Battle Will Be Fought By Machines
Posted: Tuesday, Mar 03



Introduction

Artificial intelligence has evolved from being a technical novelty into a powerful business tool with remarkable speed. Companies are embedding generative AI into customer service, software development, and finance operations to unlock efficiency gains and accelerate innovation.

However, as adoption deepens, cyber security leaders are confronting a new reality: AI is not just enhancing productivity but also fundamentally reshaping cyber risk.

Recent research highlights the scale of the exposure. A report[1] produced by Check Point Research found that one in every 54 generative AI prompts from enterprise networks posed a high risk of sensitive data exposure, affecting 91% of organisations that regularly use AI tools.

A further 15% of enterprise AI prompts contained potentially sensitive information such as customer records or proprietary code. For boards and risk committees, those figures underscore a pressing governance challenge: the same tools driving transformation may also be opening new attack pathways.

AI Fighting AI

Security analysts increasingly describe the current phase as an inflection point where “AI fights AI”. Phishing campaigns and deepfake scams are no longer isolated threats but stepping stones toward autonomous, self-optimising systems capable of planning and executing multi-stage attacks with minimal human oversight.

For chief information security officers, this signals not an incremental escalation but a structural shift in the threat landscape. Four emerging vectors illustrate how rapidly that landscape is evolving:

  1. Autonomous AI attacks:
    Autonomous AI-driven attacks are on the rise. Criminal groups are experimenting with machine agents that independently conduct reconnaissance, exploit vulnerabilities and extract data in coordinated sequences. These systems can adapt in real time to defensive measures and share intelligence across thousands of endpoints, functioning like self-learning botnets. Early prototypes demonstrate how AI can seamlessly chain attack stages into a continuous workflow. The risk for security operations centres is clear: swarms of adaptive threats operating at machine speed could overwhelm traditional monitoring models.
  2. Adaptive malware fabrication:
    Underground markets now advertise AI-powered malware generators capable of writing, testing and debugging malicious code automatically. Unlike earlier polymorphic techniques that relied on minor code adjustments, generative models can produce entirely new, functional malware variants in seconds. Each failed attempt becomes training data for the next iteration, compressing development cycles and increasing the diversity of malicious software in circulation.
  3. Synthetic identities:
    The insider threat is being reinvented through synthetic identities. Using stolen employee data, voice samples and internal communications, attackers can construct AI-generated personas that convincingly mimic legitimate staff. These digital impostors can send authentic-looking emails, join video calls with deepfake voices and operate within collaboration platforms using accurate linguistic patterns.

As voice cloning improves, identity verification may shift from recognising a familiar voice to analysing behavioural consistency over time. That change challenges long-established models of trust.

  4. The AI supply chain:
    As enterprises integrate third-party and open-source models, they inherit new systemic risks. Researchers have demonstrated that altering as little as 0.1% of a model’s training data can cause targeted misclassification. In a security context, this could mean an intrusion-detection system mislabelling a malicious payload as benign. Model poisoning and compromised dependencies expand the attack surface in ways that may remain invisible until exploited.
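The targeted-misclassification risk described above can be illustrated with a deliberately simple toy: a 1-nearest-neighbour classifier over synthetic "payload" features, where a single mislabelled training point (0.1% of a 1,000-sample set) opens a blind spot around one chosen input while leaving other predictions intact. This is a minimal sketch, not how poisoning works against production-scale models; all data, feature names and labels here are made up.

```python
# Toy demonstration (synthetic data): one flipped label out of 1,000
# training samples -- 0.1% -- creates a targeted blind spot in a 1-NN model.

def nn_label(x, training_set):
    """Return the label of the nearest training point (1-NN)."""
    nearest = min(training_set,
                  key=lambda item: sum((a - b) ** 2 for a, b in zip(x, item[0])))
    return nearest[1]

# 999 clean samples: feature = (payload_size, entropy); "benign" cluster
# near (1, 1), "malicious" cluster near (5, 5).
clean = [((1 + i * 0.001, 1 + i * 0.002), "benign") for i in range(500)]
clean += [((5 + i * 0.001, 5 + i * 0.002), "malicious") for i in range(499)]

trigger = (5.25, 5.25)  # the payload the attacker wants waved through

print(nn_label(trigger, clean))                # "malicious" -- caught
poisoned = clean + [((5.25, 5.25), "benign")]  # the single flipped label
print(nn_label(trigger, poisoned))             # "benign" -- blind spot opened
print(nn_label((5.1, 5.1), poisoned))          # nearby inputs still flagged
```

The point of the sketch is the asymmetry: the poisoned model behaves normally almost everywhere, which is why this class of tampering can remain invisible until the specific trigger is exploited.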

A Combination of Capabilities

What differentiates AI-driven threats from earlier waves is the combination of autonomy, scale and learning capacity. Machine-made attacks evolve continuously; every blocked exploit strengthens the next attempt. They also lack the “human fingerprint” of predictable time zones, spelling errors or stylistic quirks, which further complicates detection and attribution.

At the same time, AI tools are lowering barriers to entry, enabling less-skilled actors to deploy sophisticated campaigns with automated scanning and exploitation capabilities.

Taking a Strategic Approach

A first priority is selecting security-aware AI platforms and tightly managing data exposure. Sensitive files, credentials and production datasets should be excluded from AI environments wherever possible, with testing conducted on sanitised or synthetic data. Access permissions must be carefully scoped to enforce least-privilege principles.
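One way to operationalise that data-exposure discipline is a screening step that sits between users and the AI service, blocking prompts that contain obvious secrets. The sketch below is illustrative only, not a production data-loss-prevention filter: the regex patterns, the key format and the block-by-default policy are assumptions for the demo.

```python
# Illustrative outbound-prompt screen: block prompts containing obvious
# secrets before they reach an external AI service. Patterns are examples,
# not an exhaustive or production-grade rule set.
import re

SECRET_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
    "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card":    re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str):
    """Return (allowed, findings); block if any secret pattern matches."""
    findings = [name for name, pat in SECRET_PATTERNS.items()
                if pat.search(prompt)]
    return (len(findings) == 0, findings)

print(screen_prompt(
    "Summarise churn for alice@example.com, key sk-abc123abc123abc123"))
# → (False, ['api_key', 'email'])
```

In practice such a screen would be one layer among several (tokenisation, synthetic test data, scoped permissions), but it shows where the least-privilege boundary can be enforced.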

Second, Zero Trust must extend to AI systems. Every API call should be authenticated, AI-to-AI interactions monitored, and AI-generated code subjected to peer review and vulnerability scanning before deployment. Human oversight remains critical to ensure compliance and to detect logic flaws that automated systems may overlook.
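One concrete ingredient of Zero Trust for machine-to-machine traffic is authenticating every call: each AI-to-AI or API request carries a short-lived keyed signature that the receiver verifies before acting. The sketch below uses HMAC-SHA256 with a freshness window; the shared key, token format and 30-second age limit are simplifying assumptions (real deployments would use per-agent keys from a secrets vault and a standard token scheme).

```python
# Sketch: per-request HMAC signatures with a freshness check, so tampered
# or replayed AI-to-AI calls are rejected. Key handling is simplified.
import hashlib
import hmac
import time

SHARED_KEY = b"demo-only-key"   # assumption: real systems use vaulted, per-agent keys
MAX_AGE_SECONDS = 30            # reject stale or replayed requests

def sign(payload: bytes, ts: int) -> str:
    return hmac.new(SHARED_KEY, payload + str(ts).encode(),
                    hashlib.sha256).hexdigest()

def verify(payload: bytes, ts: int, sig: str, now=None) -> bool:
    now = int(time.time()) if now is None else now
    if now - ts > MAX_AGE_SECONDS:                    # freshness check
        return False
    return hmac.compare_digest(sign(payload, ts), sig)  # constant-time compare

ts = int(time.time())
msg = b'{"action": "rotate_logs"}'
sig = sign(msg, ts)
print(verify(msg, ts, sig))                       # True  -- authentic and fresh
print(verify(b'{"action": "drop_db"}', ts, sig))  # False -- tampered payload
```

The same deny-by-default posture extends to AI-generated code: nothing is trusted on origin alone, and every artefact is verified before it can act.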

Third, supply chain discipline is essential. Each third-party library or dependency introduced through AI-assisted development should be validated, reputation-checked and scanned before integration. As AI accelerates coding workflows, the risk of dependency sprawl increases, making rigorous oversight indispensable.
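The validation step above can be as simple as a pinned allowlist: before an AI-suggested package is integrated, its downloaded bytes are hashed and compared against a reviewed pin, with unknown packages denied by default. This is a minimal sketch; the package name and artifact bytes are invented for the demo, and real pins would come from a reviewed lockfile.

```python
# Sketch: deny-by-default dependency vetting against pinned SHA-256 digests.
# "leftpad-lite" and the artifact bytes are hypothetical demo values.
import hashlib

PINNED = {
    "leftpad-lite": hashlib.sha256(b"trusted artifact bytes").hexdigest(),
}

def vet_dependency(name: str, artifact: bytes) -> bool:
    """Allow only known packages whose downloaded bytes match the pinned hash."""
    expected = PINNED.get(name)
    if expected is None:
        return False                     # unknown package: reject by default
    return hashlib.sha256(artifact).hexdigest() == expected

print(vet_dependency("leftpad-lite", b"trusted artifact bytes"))  # True
print(vet_dependency("leftpad-lite", b"tampered bytes"))          # False
print(vet_dependency("totally-new-pkg", b"anything"))             # False
```

Deny-by-default matters here because AI-assisted coding tends to pull in unfamiliar packages faster than humans can review them; an allowlist turns dependency sprawl into an explicit approval queue.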

Cyber security is entering a platform era in which reactive, tool-based defences are insufficient against autonomous adversaries. Enterprises will require integrated, cloud-delivered security architectures capable of predictive analytics, behavioural intelligence and automated remediation.

As AI becomes both a growth engine and a threat vector, the fundamentals of cyber security will become increasingly important. Cyber resilience will depend on embedding secure-by-design and Zero Trust principles across the enterprise.

[1] https://blog.checkpoint.com/security/global-cyber-threats-september-2025-attack-volumes-ease-slightly-but-genai-risks-intensify-as-ransomware-surges-46/

Raymond Schippers
Raymond Schippers is a seasoned cybersecurity executive with over 15 years of experience developing enterprise-wide security programs for global organisations. He has held leadership roles across various organisations, including his current position as Lead Technologist ANZ at Check Point Software Technologies, where he collaborates with strategic customers to develop cybersecurity strategies and operationalise cutting-edge solutions. Previously, as CISO and CTO at Huntabil.IT, Raymond led cybersecurity advisory services, helping organisations transition to intelligence-led strategies and uplift security capabilities. At Canva, he played a pivotal role in building and scaling the Detection & Response group, establishing 24/7 global threat detection, response, hunting, and intelligence teams. He also developed Canva’s initial Cyber Threat Intelligence (CTI) strategy and led incident response activities. Additionally, Raymond has provided strategic guidance as an Advisory Board Member at Sustainabil.IT and worked as a Principal Consultant at Parand.io, helping organisations adopt threat-informed defence postures and operationalise threat intelligence. His extensive experience spans incident response leadership, threat detection, and intelligence-driven security strategies, making him a trusted partner for enhancing organisational security maturity.