Model Context Protocol is the open standard AI agents use to call external tools. Adoption exploded in 2025 and 2026: over 150 million downloads, thousands of community-built servers, integrations in every major AI IDE. The security tooling did not keep up.
In April 2026, OX Security disclosed a systemic RCE vulnerability in the core protocol affecting every implementation. Anthropic's own reference server (mcp-server-git) had three CVEs including command injection and path traversal. Cursor, VS Code, Windsurf, Claude Code, and Gemini-CLI were all found vulnerable to prompt injection through MCP tool descriptions. One exploit (Windsurf, CVE-2026-30615) required zero user interaction.
API scanners don't understand MCP's tool discovery model. WAFs can't validate agent identity. There is no established security scanner for MCP servers.
Built a Python CLI scanner that connects to any MCP server, discovers its tools via the JSON-RPC protocol, and runs modular security checks against them. Each check probes a specific vulnerability category and produces severity-ranked findings with remediation guidance.
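The discovery step is plain JSON-RPC 2.0. A minimal sketch of what that looks like, assuming an HTTP-style transport (function names here are illustrative, not the scanner's actual API):

```python
import itertools

# Monotonic JSON-RPC request ids.
_ids = itertools.count(1)

def build_tools_list_request() -> dict:
    """Build the JSON-RPC 2.0 request that asks an MCP server for its tools."""
    return {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/list",
        "params": {},
    }

def parse_tools(response: dict) -> list[dict]:
    """Pull out the fields the security checks care about:
    name, description, and the declared input schema."""
    return [
        {
            "name": t.get("name"),
            "description": t.get("description", ""),
            "schema": t.get("inputSchema", {}),
        }
        for t in response.get("result", {}).get("tools", [])
    ]
```

Each check then receives the parsed tool list, so new checks can be added without touching the transport code.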
The scanner tests for: unauthenticated access, weak credentials, overly permissive input schemas, dangerous parameter types (raw shell commands, file paths, SQL), tool description poisoning (hidden instructions aimed at AI agents), response injection, secrets leaking in tool metadata, verbose error messages exposing internals, missing TLS, wildcard CORS, and absent rate limiting.
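To make the check model concrete, here is a sketch of what a description-poisoning check could look like: pattern names, severities, and the Finding shape are assumptions for illustration, not the scanner's exact implementation.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    check: str
    severity: str  # "low" | "medium" | "high" | "critical"
    detail: str
    remediation: str

# Phrases that steer the agent instead of describing the tool are a red flag.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |previous )?instructions",
    r"do not (tell|inform) the user",
    r"<\s*(important|secret|system)\s*>",
    r"send .* to http",
]

def check_description_poisoning(tool: dict) -> list[Finding]:
    """Flag tool descriptions containing hidden agent-directed instructions."""
    findings = []
    desc = tool.get("description", "")
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, desc, re.IGNORECASE):
            findings.append(Finding(
                check="description-poisoning",
                severity="critical",
                detail=f"Tool {tool.get('name')!r} description matches {pattern!r}",
                remediation="Pin and review tool descriptions; strip agent-directed text.",
            ))
    return findings
```

Each check follows this shape (tool in, findings out), which is what keeps the checks modular.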
Ships with a deliberately vulnerable MCP server (runs in Docker) whose labeled security flaws map to real CVEs. Same concept as DVWA or OWASP Juice Shop, but for the MCP protocol. The server exposes six tools, each with an intentional flaw:
- run_command accepts raw shell input (CVE-2025-68143).
- read_file has no path validation (CVE-2025-68145).
- query_database takes unsanitized SQL.
- get_user_info leaks API keys in responses.
- internal_tool has a poisoned description with hidden instructions targeting AI agents (CVE-2026-30615).
- echo reflects input for response injection testing.

Complete and functional. Source on GitHub.
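The run_command flaw is the classic pattern: tool input handed straight to a shell. A minimal sketch of the vulnerable shape next to a safer one (function names and the allowlist are illustrative, not the server's actual code):

```python
import shlex
import subprocess

def run_command_vulnerable(cmd: str) -> str:
    # DO NOT ship this: the user-controlled string reaches the shell verbatim,
    # so "ls; curl attacker.example" runs both commands.
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def run_command_safer(cmd: str, allowlist=("echo", "ls")) -> str:
    # Safer variant: tokenize, enforce an allowlist, never invoke a shell.
    argv = shlex.split(cmd)
    if not argv or argv[0] not in allowlist:
        raise PermissionError(f"command {argv[:1]} not allowed")
    return subprocess.run(argv, capture_output=True, text=True).stdout
```

The scanner flags the vulnerable shape by inspecting input schemas: a single free-form string parameter named like a command is treated as a dangerous parameter type.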