I built this after watching my Claude Code setup burn ~60,000 tokens just loading tool definitions from four MCP servers, before I even typed a prompt.

The problem: MCP gives agents access to hundreds of tools, but every tool description consumes context. Redis published data showing the same thing: 167 tools, 42% tool-pick accuracy, 60K tokens of overhead per session. In production setups it can reach 150K+ tokens. The agent spends more time deciding which tool to use than actually solving your problem.

Current solutions fall into two buckets:

– Manual whitelists (mcpwrapped, MCP funnel): you have to know in advance which tools to hide. With 100+ tools across multiple servers, that's a part-time job.
– Commercial platforms (Stacklok, Redis): excellent accuracy (Stacklok reached 94%), but they're closed source or require Redis infrastructure.

I wanted something that:

1. Works out of the box with my existing claude_desktop_config.json
2. Picks the tools I need based on what I'm actually trying to do
3. Runs 100% locally, no API keys, no telemetry
4. Is open source and simple enough to contribute to

So I built shutdown-mcp. It collects tools from all your MCP servers, builds a local embedding index (all-MiniLM-L6-v2, ~80MB), and filters tools by intent before the agent sees them. In my testing the token reduction is ~98%, in line with what Redis and Atlassian report for comparable approaches.

This is v0.1.0, so there's a lot left to build (dynamic re-filtering per message is next). I'd love feedback from anyone else hitting MCP tool sprawl: what's your current solution? Regex filters? A separate agent? Just eating the token cost?

Thanks for checking it out. I'm happy to answer any questions in the comments.
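For anyone curious about the mechanics, here's a minimal sketch of intent-based tool filtering. This is not the project's actual code: a real build would embed descriptions with all-MiniLM-L6-v2, but a toy bag-of-words vector stands in below so the example is dependency-free, and the tool catalog is made up.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts (stand-in for MiniLM)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Illustrative catalog; in practice this comes from your MCP servers.
TOOLS = {
    "git_commit": "commit staged changes to the local git repository",
    "jira_create_issue": "create a new issue in a jira project",
    "slack_post_message": "post a message to a slack channel",
}

def filter_tools(intent: str, top_k: int = 1) -> list[str]:
    """Rank tools by similarity to the user's intent; expose only the top k."""
    q = embed(intent)
    ranked = sorted(TOOLS, key=lambda name: cosine(q, embed(TOOLS[name])), reverse=True)
    return ranked[:top_k]

print(filter_tools("commit my changes to git"))  # → ['git_commit']
```

The agent only ever sees the filtered list, so the other tool definitions never hit the context window; that's where the token savings come from.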
shutdown-mcp: a zero-config MCP proxy that hides 99% of tools