If you're building with the Model Context Protocol (MCP), you already know the pain.
You write a server. You wire it up to Claude, Cursor, or your own agent. And then... you spend the next 3 hours running curl commands, squinting at raw JSON-RPC payloads, and guessing why your tool schema isn't being picked up.

There had to be a better way. So I built one.
Meet MCPHub — The Postman for MCP
Live: mcp-hub-pi.vercel.app
GitHub: github.com/namanxdev/MCPHub
NPM Agent: @naman_411/mcphub-agent

It's an open-source platform to develop, debug, and deploy MCP servers without losing your sanity. No bloat. No hand-holding. Just the tools you actually need.
The Problem: MCP Debugging Is Still Stuck in 2010
MCP is genuinely the future of how LLMs interact with the world. But the developer experience? It's basically:
- Write a server
- Fire up your AI client
- Hope it works
- If it doesn't, add console.log everywhere and pray
There's zero visibility into the wire protocol. No easy way to test individual tools. No metrics to tell you if your server is slow or just broken.
That friction kills iteration speed. And when you're building AI agents, iteration speed is everything.
What MCPHub Actually Does
🛠️ The Playground — Stop Writing curl Commands
Paste your SSE endpoint or local command. MCPHub auto-generates clean input forms directly from your tool's JSON Schema.
Fill in arguments → Hit Run → See the raw response instantly.
No more hand-crafting JSON-RPC payloads. No more guessing if your schema is malformed.
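For context, this is roughly the payload you'd otherwise be hand-writing: a JSON-RPC 2.0 `tools/call` request, per the MCP spec. The tool name and arguments below are made-up examples, not from any real server.

```typescript
// Sketch of the JSON-RPC 2.0 request an MCP client sends to invoke a tool.
// "get_weather" and its arguments are hypothetical examples.
function buildToolCall(id: number, name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// The generated form in the Playground fills in `args` for you.
const req = buildToolCall(1, "get_weather", { city: "Berlin" });
```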

🕵️‍♂️ Protocol Inspector — See Everything Over the Wire
Every single JSON-RPC message is captured, parsed, and displayed with syntax highlighting. Filter by direction (client → server or vice versa), inspect headers, and spot malformed tool definitions before they hit production.
It's the transparency MCP development has been missing.
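Telling requests, responses, and notifications apart comes down to which JSON-RPC 2.0 fields a message carries. Here's a minimal classifier sketch illustrating that rule (not MCPHub's actual code):

```typescript
// Classify a parsed JSON-RPC 2.0 message by its fields:
// requests carry method + id, notifications carry method but no id,
// responses carry an id with a result or error. Illustrative only.
type Rpc = { method?: string; id?: number | string; result?: unknown; error?: unknown };

function classify(msg: Rpc): "request" | "notification" | "response" | "invalid" {
  if (msg.method !== undefined) {
    return msg.id !== undefined ? "request" : "notification";
  }
  if (msg.id !== undefined && (msg.result !== undefined || msg.error !== undefined)) {
    return "response";
  }
  return "invalid";
}
```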
🖥️ Desktop Agent — Your Localhost, But Cloud-Connected
Here's the catch-22: your deployed playground can't talk to localhost. The @naman_411/mcphub-agent npm package fixes that.
npm install -g @naman_411/mcphub-agent
mcphub-agent start
A WebSocket bridge connects your local MCP servers directly to the MCPHub web app. Green banner pops up. Toggle it on. Done.
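The fiddly part of a bridge like this is framing: stdio MCP servers emit newline-delimited JSON, but stdout delivers it in arbitrary chunks. A sketch of the reassembly step (assumed behavior for illustration, not the agent's actual source):

```typescript
// Reassemble newline-delimited JSON messages from arbitrary stdout chunks —
// the framing a stdio-to-WebSocket bridge handles before forwarding.
class LineBuffer {
  private buf = "";

  // Feed a raw chunk; returns whatever complete lines it finished.
  push(chunk: string): string[] {
    this.buf += chunk;
    const parts = this.buf.split("\n");
    this.buf = parts.pop() ?? ""; // keep the trailing partial line
    return parts.filter((p) => p.length > 0);
  }
}
```

Each complete line can then be JSON-parsed and relayed over the WebSocket to the web app.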
📊 Health Dashboard — Know Before Your Users Do
Real P50 / P95 / P99 latency metrics. Error rate tracking. Uptime monitoring per tool. Not vanity numbers, but actual production signals.
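For reference, P50/P95/P99 are percentiles over recorded call latencies; P95 and P99 expose tail latency that averages hide. A nearest-rank sketch of the computation (MCPHub's exact method may differ):

```typescript
// Nearest-rank percentile over latency samples (in ms).
// Illustrative sketch, not MCPHub's implementation.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // nearest-rank method
  return sorted[Math.max(0, rank - 1)];
}
```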
🌐 Public Registry — Discover & Test Community Servers
Searchable directory of community MCP servers with live status badges. One-click testing. No clone-and-run required.
The Stack (For The Curious)
| Layer | Tech |
|---|---|
| Framework | Next.js 16 (App Router, React 19) |
| Language | TypeScript 5 |
| Styling | Tailwind CSS 4 + shadcn/ui |
| State | Zustand 5 |
| Database | Neon PostgreSQL + Drizzle ORM |
| Auth | NextAuth.js v5 (GitHub + Google) |
| MCP SDK | @modelcontextprotocol/sdk |
| Charts | Recharts |
| Deploy | Vercel |
Why Open Source?
Because MCP itself is an open protocol. The tooling around it should be too.
I'm building this entirely in public. Break it, fork it, tell me what's missing. The roadmap is driven by real pain points, not investor decks.
Try It In 30 Seconds
If you've been wrestling with MCP servers, this is for you. If you haven't started yet, this is your excuse to.
What's the most painful part of MCP development for you right now? Drop it in the comments. I might just build the fix next.