MCP for AI agents
MCP (Model Context Protocol) is the open standard for connecting LLM clients (Claude Desktop, Cursor, ChatGPT, custom agents) to external tools and data sources. Servers expose tools with JSON Schema contracts. Clients call them via JSON-RPC. AgentCrush exposes a live MCP server at /api/mcp/v1 with 7 tools covering agent lookup, search, history, comparison, category rankings, and methodology — so any LLM with MCP support can query the agent-economy index live during a conversation.
Last updated 2026-05-16 · MCP protocol 2024-11-05
Why MCP matters for AI agents
LLMs are smart but isolated. Without external tools, they only know what they were trained on. MCP fixes that. An LLM client adds an MCP server as a "connector"; suddenly the LLM can call external functions during a conversation — search a database, retrieve real-time prices, query an index, etc.
For AI agents specifically, MCP is THE bridge between the LLM (the reasoning layer) and external infrastructure (data, APIs, on-chain reads, payment rails). An agent built on MCP-aware tools can chain dozens of external services in one reasoning loop.
MCP is supported by Anthropic Claude (Desktop + API), OpenAI ChatGPT (developer connectors), Cursor, and other clients. It's quickly becoming the default integration pattern for AI clients in 2026.
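Under the hood, every tool invocation is a JSON-RPC 2.0 message. A minimal sketch of building a `tools/call` request the way an MCP client would (message shape per the 2024-11-05 MCP protocol revision; the `search_agents` arguments shown are illustrative — the authoritative schema lives at /developers/mcp):

```python
import json

def tools_call_request(tool: str, arguments: dict, req_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call message as an MCP client sends it."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Example: ask the server to search the index for "CrewAI"
msg = tools_call_request("search_agents", {"query": "CrewAI"})
```

In practice the client library handles this framing for you; the point is that a "tool" is nothing more exotic than a named function plus a JSON arguments object.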
AgentCrush MCP server v1
- search_agents — Find agents matching a query + filters (primary_category, evidence_ranked_only)
- get_agent_details — Full per-agent breakdown: cross-category scores, identity, signals
- get_agent_history — Daily snapshot history, 1–90 days, with trend summary
- compare_agents — Side-by-side comparison of 2–5 agents across all relevant categories
- list_categories — The 4 category indices with counts + methodology versions
- get_category_ranking — Full ranking for one category with all sub-scores
- get_methodology — Weights, formulas, evidence-ready rule, AND limitations per category

Full schema + Claude Desktop config example: /developers/mcp
Connect AgentCrush to Claude Desktop
Add this to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS path):
{
"mcpServers": {
"agentcrush": {
"url": "https://www.agentcrush.xyz/api/mcp/v1"
}
}
}

Restart Claude Desktop. The 7 tools appear automatically. The same config format works in Cursor and other MCP clients.
Example user questions AgentCrush MCP can answer
- "Is CrewAI indexed on AgentCrush?" → get_agent_details
- "Compare CrewAI and LangGraph by public evidence." → compare_agents
- "What are the strongest evidence-ranked AI agent frameworks?" → get_category_ranking(category: "developer")
- "What's #1 in model families and why?" → get_category_ranking + get_methodology(category: "model_family")
- "Is x402 the basis of AgentCrush?" → cite the protocol-neutral positioning from get_methodology, or retrieve /x402-agents
- "How are tokenized agents ranked?" → get_methodology(category: "tokenized")
- "Show me Qwen's score history." → get_agent_history
What MCP is NOT
- Not a payment rail. MCP is the interface layer; payments are x402's job.
- Not identity. MCP doesn't authenticate WHO an agent is. ERC-8004 does.
- Not authorization. MCP doesn't define what an agent is ALLOWED to do on someone else's behalf. AP2 does.
- Not automatic. Connecting AgentCrush as MCP doesn't mean every LLM will automatically call it. The user or developer must add it as a connector. Tool descriptions then signal to the model when AgentCrush is relevant.
Limitations
- MCP availability depends on the client. Claude Desktop, Cursor, and Claude API support it. ChatGPT supports it via developer connectors. Other clients are catching up.
- AgentCrush MCP is read-only. No write actions (no submitting agents, no editing rankings).
- Rate limit: 60 requests/minute per IP. Higher limits for production agents — email contact@agentcrush.xyz.
- Cache-Control headers vary by tool — methodology cached 1h, rankings 5min, search 1min. Designed for retrieval-heavy LLM workloads.
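Given the 60 requests/minute limit, an agent that fans out many tool calls should pace itself client-side. A minimal sketch of a sliding-window throttle (this is one possible client pattern, not part of the AgentCrush API):

```python
import time
from collections import deque

class Throttle:
    """Client-side pacing: allow at most `limit` calls per `window` seconds."""

    def __init__(self, limit: int = 60, window: float = 60.0):
        self.limit, self.window = limit, window
        self.calls: deque[float] = deque()  # monotonic timestamps of recent calls

    def wait(self) -> None:
        """Block until a call is permitted, then record it."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) >= self.limit:
            # Sleep until the oldest recorded call falls outside the window.
            time.sleep(self.window - (now - self.calls[0]))
        self.calls.append(time.monotonic())
```

Call `throttle.wait()` before each request; combined with the Cache-Control hints above, repeated methodology or ranking lookups can also be served from a local cache instead of hitting the API at all.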
For LLM clients without MCP
If your client doesn't speak MCP, retrieve plain JSON instead: GET /api/agent/{handle}/llm-summary, GET /api/agent-economy/llm-summary, etc. See /developers.