Radware’s latest threat intelligence report, The Internet of Agents: The Next Threat Surface, delivers a stark warning for cybersecurity leaders. The rise of agentic AI—autonomous agents capable of reasoning, invoking tools, and interacting via emerging inter-agent protocols—marks a fundamental shift in how digital systems operate. But as organisations embrace these technologies, adversaries are quick to exploit the new pathways they create.
Architecture of Risk: MCP and A2A Protocols
At the heart of the issue are the Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication. These frameworks transform AI agents from isolated tools into interconnected nodes that can share context, invoke APIs, access data, and collaborate on tasks. While this unlocks greater automation and efficiency, it also introduces an array of new trust boundaries and vulnerabilities.
The interconnected nature of agent networks creates what Radware calls transitive access chains. For instance, if Agent A has access to a sensitive resource and Agent B can communicate with Agent A, then Resource X may inadvertently become accessible to Agent B. Traditional identity and access management (IAM) models, designed for static systems, are often ill-equipped to map and control these dynamic, shifting trust paths.
Emerging Threat Vectors
Radware highlights several attack vectors that demand immediate attention. One of the most concerning is indirect prompt injection and so-called zero-click triggers. By embedding malicious instructions in documents, websites, or data streams, attackers can trick autonomous agents into executing harmful actions without any explicit user input.
Another risk lies in tool poisoning. AI agents depend on toolchains that often include third-party services. If these tools are compromised—whether through malware or corrupted logic—the agents themselves can be manipulated into carrying out malicious activities.
The report also notes the accelerating pace of exploit creation. Advanced language models like GPT-4 have demonstrated the ability to convert vulnerability disclosures (CVEs) into reliable proof-of-concept exploits in record time. This drastically shortens defenders’ reaction windows, placing added pressure on security teams already stretched thin.
Operational and Defensive Implications
To counter these risks, Radware stresses that enterprises must rethink their defensive posture. AI agents should be treated as privileged entities, with carefully defined scopes, least-privilege permissions, and auditable boundaries of authority. Traditional one-time access controls are not enough in environments where agents continuously adapt and exchange context.
Continuous monitoring of inter-agent activity is another critical priority. Security teams must log and observe what agents are doing—what tools they invoke, what data they access, and how they interact with each other. This visibility will be key to detecting anomalies before they spiral into major incidents.
Red teaming also needs to evolve. Security exercises should include scenarios that simulate prompt injection, chained agent behaviours, and compromised toolchains to better prepare defenders for these emerging risks.
Finally, Radware emphasises that defensive AI is no longer optional. Static, rule-based protections are already being outpaced by adaptive, AI-driven attacks. Behavioural anomaly detection, sandboxing of agent activity, and AI-specific defences are necessary to keep pace with adversaries who are using the same technologies to their advantage.
A Shifting Threat Landscape
The Internet of Agents report makes clear that agentic AI is not just a productivity tool—it is a double-edged sword. The same protocols that enable breakthrough automation and efficiency also expand the digital attack surface in unpredictable ways. For cybersecurity teams, the challenge will be to harness the benefits of AI agents while building resilient defences that anticipate their vulnerabilities.