ShadowLeak and the Internet of Agents: The Next Frontier in AI Exploits 
The September 2025 disclosure of ShadowLeak by Radware marks a turning point in the way the security community must think about artificial intelligence. This was not just another prompt injection. It was the first service-side, zero-click indirect prompt injection (IPI) against a widely deployed AI system—OpenAI’s ChatGPT. More importantly, it showcased what happens when attackers […]
Posted: Thursday, Sep 25
  • KBI.Media
  • $
  • ShadowLeak and the Internet of Agents: The Next Frontier in AI Exploits 
ShadowLeak and the Internet of Agents: The Next Frontier in AI Exploits 

The September 2025 disclosure of ShadowLeak by Radware marks a turning point in the way the security community must think about artificial intelligence. This was not just another prompt injection. It was the first service-side, zero-click indirect prompt injection (IPI) against a widely deployed AI system—OpenAI’s ChatGPT. More importantly, it showcased what happens when attackers move from tricking people to tricking autonomous agents. 

Beyond Traditional Prompt Injection

Prompt injection attacks are not new. Security researchers have demonstrated how malicious text hidden in documents or websites can manipulate language models into revealing secrets or producing unsafe content. But until now, the assumption was that such attacks required either user interaction or execution within the client environment. ShadowLeak broke that model. 

By hiding instructions inside an email—white-on-white text, tiny fonts, or metadata—attackers could compromise the assistant’s behaviour without the user doing anything more than asking ChatGPT to “summarise my inbox.” The malicious web request originated directly from OpenAI’s servers, not from the user’s device or network. That meant no logs, no alerts, and no way for enterprise defenders to know data had left. 

Why ShadowLeak Matters to Defenders

The attack’s “zero-click” property removes the human decision point. Security awareness training—so central to defending against phishing—becomes irrelevant. You can’t train an AI agent to “hover over the link” or “pause before clicking.” ShadowLeak abuses the very capabilities that make assistants useful: email access, tool use, and autonomous web calls. 

This is not just a new vulnerability; it is a new class of threat surface. Enterprises are already wiring assistants into HR systems, finance workflows, CRMs, and SaaS applications. With protocols like the Model Context Protocol (MCP) and Agent-to-Agent (A2A), these systems are becoming meshes of autonomous actors, capable of delegating tasks across networks of services. In such an environment, a single poisoned instruction can cascade across agents, tools, and APIs, creating chained exploits that look benign in isolation but devastating in sequence.

Defensive Implications

For security teams, ShadowLeak should trigger a rethink. AI assistants must be treated not as chat features but as privileged service accounts with direct access to sensitive systems. Defences must evolve on several fronts: 

  • Input sanitisation: strip or neutralise hidden instructions in HTML and documents before LLM ingestion
  • Agent instrumentation: log every action with who/what/why metadata to enable forensic visibility
  • Segmentation: separate “read” from “act” permissions, applying least privilege to AI tools just as you would to administrators
  • Semantic detection: move beyond regex and pattern matching—identifying malicious intent requires LLM-driven or equivalent semantic analysis
  • Red-teaming with prompt attacks: build and test zero-click IPI playbooks before greenlighting broad agent deployments

The Bigger Picture

ShadowLeak is a symptom of a larger shift: the rise of the Internet of Agents. As enterprises embrace AI autonomy, they inherit the risks of distributed, opaque, and transitive authority. Security professionals must anticipate chained compromises, lateral agent movement, and poisoned toolchains—scenarios that current monitoring is not built to catch. 

The lesson is clear. ShadowLeak was responsibly disclosed and patched. But it will not be the last. Dark-market actors are professionalising, and AI-native exploits are evolving faster than traditional defences. The organisations that stay ahead will be those that treat agents as production systems, instrument them as such, and insist on security as a first-class feature of AI adoption. 

ShadowLeak is not just a warning; it is the blueprint of the next attack landscape. The question for defenders is whether they will adapt before the next wave of agent-based exploits arrives.

Share This