Introduction
A quarter of a century after the Agile Manifesto reshaped the software industry, its legacy is no longer in question.
The methodology’s emphasis on speed, adaptability and user-centric design helped redefine how organisations build and deliver technology, enabling faster releases and a closer alignment with customer needs. For many enterprises, Agile has long since moved from disruptive idea to institutional norm.
Yet as the industry marks this milestone, a new wave of disruption is already testing Agile’s limits. The rapid rise of AI-assisted coding and increasingly autonomous software agents is transforming development pipelines at a pace for which few organisations are fully prepared.
What was once a human-centred discipline, built on collaboration and iterative improvement, is now being augmented – and in some cases overtaken – by machine-driven inputs that challenge established ways of working.
This shift is forcing a rethink of how Agile principles are applied in practice. While the focus on individuals and interactions remains central, it is no longer sufficient on its own. In an environment where AI-generated code can introduce both efficiency gains and new vulnerabilities, integrating security into every stage of development has become imperative.
The convergence of Agile with DevSecOps is no longer aspirational but increasingly unavoidable, as organisations seek to balance speed with control.
The pace of change has been particularly acute over the past two years, leaving many businesses – and their developers – struggling to keep up. Nowhere is the strain more evident than in application security teams, which have historically operated with limited resources.
Faced with an AI-augmented threat landscape, these teams must contend with risks that evolve as quickly as the tools designed to mitigate them, rendering long-standing playbooks increasingly obsolete.
As enterprises grapple with this new reality, the question is no longer whether Agile can endure, but how it must evolve to remain relevant in an era defined by intelligent automation and heightened security demands.
The End of the Road for AppSec?
This raises a significant question: even as the Software Development Lifecycle (SDLC) is restructured around the most important elements of the Agile methodology, with careful, DevSecOps-centric security considerations, is this actually the end of the road for AppSec?
The sheer speed at which agentic AI has permeated the traditional SDLC is mind-blowing, and enterprises are adopting and implementing the technology to handle sensitive processes right now. The Agentic Development Lifecycle (ADLC) is here.
In less than three years, it’s highly likely that most developers won’t be writing any code at all. They will be feature-creators, agent-managers, monitoring and finessing processes while their agent networks power through the actual code creation.
This swift, stark change will inevitably lead to a shortage of security-skilled, competent developers who can navigate AI technology productively while also fortifying the software they produce against emerging AI security threats.
Old Solutions Can’t Solve New Security Problems
The rapid ascent of agentic AI has shattered the traditional security paradigm. IT teams are no longer “just” defending static codebases but instead managing fluid, living systems that evolve through continuous use and learning. Current security practices, built on a foundation of deterministic, signature-based rules, are, by design, ill-equipped for this probabilistic reality.
The central challenge confronting enterprises is that AI is advancing faster than the governance frameworks designed to control it. Traditional software development lifecycle controls, built around periodic scans and manual code reviews, are increasingly ill-suited to an environment where AI-generated code and autonomous agents operate at far greater speed and scale.
This dynamic is exposing a growing visibility gap for organisations, many of which lack the real-time oversight required to understand how AI systems interact with sensitive data. In response, the industry is being pushed towards a more proactive security posture centred on continuous verification rather than retrospective checks.
This shift places greater responsibility on developers themselves, who must be equipped to identify and mitigate the subtle, fast-evolving vulnerabilities that AI introduces, even as the tools they rely on become more autonomous.
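To make the idea of continuous verification concrete, the sketch below shows a hypothetical pre-merge check that scans newly added lines of code for risky constructs before they reach the main branch, rather than waiting for a periodic scan. The pattern list is purely illustrative (a real programme would rely on a maintained ruleset and dedicated tooling, not a hand-rolled regex list), and the function names are assumptions, not any particular product's API.

```python
import re

# Illustrative patterns only; a production check would use a maintained
# ruleset from dedicated tooling rather than this tiny hand-rolled list.
RISKY_PATTERNS = {
    "hardcoded secret": re.compile(
        r"(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
    ),
    "shell injection risk": re.compile(
        r"subprocess\.(run|call|Popen)\([^)]*shell\s*=\s*True"
    ),
    "unsafe deserialisation": re.compile(r"pickle\.loads?\("),
}


def verify_diff(added_lines):
    """Flag risky patterns in newly added lines before they are merged.

    Returns a list of (line_number, finding_label, offending_line) tuples,
    intended to run on every commit or pull request rather than as a
    retrospective, scheduled scan.
    """
    findings = []
    for lineno, line in enumerate(added_lines, start=1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label, line.strip()))
    return findings
```

Run per-commit (for example, as a CI step or pre-commit hook), a check like this shifts discovery of AI-introduced flaws to the moment the code appears, which is the essence of continuous verification over retrospective checks.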
Reshaping Application Security
The release of security agents like Claude Code Security signals an undeniable shift in how the industry is reshaping, or soon will reshape, its approach to application security.
The reactive-heavy AppSec strategy of the past is all but dead and buried in this new reality. However, the human security skills gap remains, and closing it is still the most crucial foundation of a thriving DevSecOps culture of shared security responsibility and impact.
There is no acceptable substitute for highly skilled, security-proficient developers and AppSec practitioners who deeply understand their architecture and codebases and can make solid, organisation-centric decisions as part of their security audits.
Maintaining a clear line of sight over the AI coding tools and model control platforms in use, alongside a realistic assessment of the security capabilities of the developers deploying them, is becoming a critical requirement for enterprises. Without that visibility, organisations risk falling behind as AI-driven development rapidly expands the attack surface, often outpacing the reach of existing security programs almost overnight.