New top story on Hacker News: Show HN: Mcp2cli – One CLI for every API, 96-99% fewer tokens than native MCP

3 by knowsuchagency | 0 comments on Hacker News.
Every MCP server injects its full tool schemas into context on every turn: 30 tools cost ~3,600 tokens/turn whether the model uses them or not. Over 25 turns with 120 tools, that's 362,000 tokens just for schemas.

mcp2cli turns any MCP server or OpenAPI spec into a CLI at runtime. The LLM discovers tools on demand:

```
mcp2cli --mcp https://ift.tt/0uGjm1M --list                        # ~16 tokens/tool
mcp2cli --mcp https://ift.tt/0uGjm1M create-task --help            # ~120 tokens, once
mcp2cli --mcp https://ift.tt/0uGjm1M create-task --title "Fix bug"
```

No codegen, no rebuild when the server changes. It works with any LLM; it's just a CLI the model shells out to. It also handles OpenAPI specs (JSON/YAML, local or remote) with the same interface.

The token savings are real, measured with cl100k_base: 96% for 30 tools over 15 turns, 99% for 120 tools over 25 turns.

It also ships as an installable skill for AI coding agents (Claude Code, Cursor, Codex):

```
npx skills add knowsuchagency/mcp2cli --skill mcp2cli
```

Inspired by Kagan Yilmaz's CLI vs MCP analysis and CLIHub.

https://ift.tt/kRCvey3
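The schema-cost arithmetic above can be sanity-checked with a short sketch. This is not part of mcp2cli; it only models the claim, assuming a flat ~120 tokens per tool schema (implied by the 30-tool, ~3,600 tokens/turn figure):

```python
# Rough token accounting for native MCP tool injection.
# Assumption (not from mcp2cli itself): every tool schema costs a
# flat ~120 tokens, derived from 30 tools ~= 3,600 tokens per turn.
TOKENS_PER_SCHEMA = 120

def native_mcp_schema_tokens(tools: int, turns: int) -> int:
    """Native MCP re-injects every tool schema into context each turn."""
    return tools * TOKENS_PER_SCHEMA * turns

# 30 tools, one turn: matches the ~3,600 tokens/turn figure.
print(native_mcp_schema_tokens(30, 1))

# 120 tools over 25 turns: schemas alone run to hundreds of thousands
# of tokens (the post reports ~362,000 measured with cl100k_base).
print(native_mcp_schema_tokens(120, 25))
```

With on-demand discovery, the model instead pays ~16 tokens per tool once for `--list` and ~120 tokens per `--help` it actually requests, which is where the 96–99% reduction comes from.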
