Tenable Research Shows How “Prompt-Injection-Style” Hacks Can Secure the Model Context Protocol (MCP)
Tenable Research has published new findings that flip the script on one of the most discussed AI attack vectors. In the blog “MCP Prompt Injection: Not Just for Evil,” Tenable’s Ben Smith demonstrates how techniques resembling prompt injection can be repurposed to audit, log and even firewall Large Language Model (LLM) tool calls running over the rapidly adopted Model Context Protocol (MCP).
Posted: Monday, May 05
  • KBI.Media
  • $
  • Tenable Research Shows How “Prompt-Injection-Style” Hacks Can Secure the Model Context Protocol (MCP)
Tenable Research Shows How “Prompt-Injection-Style” Hacks Can Secure the Model Context Protocol (MCP)

Tenable Research has published new findings that flip the script on one of the most discussed AI attack vectors. In the blog “MCP Prompt Injection: Not Just for Evil,” Tenable’s Ben Smith demonstrates how techniques resembling prompt injection can be repurposed to audit, log and even firewall Large Language Model (LLM) tool calls running over the rapidly adopted Model Context Protocol (MCP).

The Model Context Protocol (MCP) is a new standard from Anthropic that lets AI chatbots plug into external tools and get real work done independently, so adoption has skyrocketed. That convenience, however, introduces fresh security risks: attackers can slip hidden instructions—a trick called “prompt injection”—or sneak in booby-trapped tools and other “rug-pull” scams to make the AI break its own rules. Tenable’s research breaks down these dangers in plain language and shows how the very same techniques can also be flipped into useful defences that log, inspect and control every tool an AI tries to run.

Why Is This Important to Know?

As enterprises rush to connect LLMs with business-critical tools, understanding both the risks and defensive opportunities in MCP is essential for CISOs, AI engineers and security researchers.

“MCP is a rapidly evolving and immature technology that’s reshaping how we interact with AI,” said Ben Smith, senior staff research engineer at Tenable. “MCP tools are easy to develop and plentiful, but they do not embody the principles of security by design and should be handled with care. So, while these new techniques are useful for building powerful tools, those same methods can be repurposed for nefarious means. Don’t throw caution to the wind; instead, treat MCP servers as an extension of your attack surface.”

Key Research Highlights

  • Cross-model behaviour varies –
    • Claude Sonnet 3.7 and Gemini 2.5 Pro Experimental reliably invoked the logger and exposed slices of the system prompt.
    • GPT-4o also inserted the logger but produced different (sometimes hallucinated) parameter values on each run.
  • Security upside: The same mechanism an attacker might exploit can help defenders audit toolchains, detect malicious or unknown tools, and build guardrails inside MCP hosts.
  • Explicit user approval: MCP already requires explicit user approval before any tool executes; this research underscores the need for strict least-privilege defaults and thorough individual tool review and tool testing.

The full research can be found here.

Share This