MCP Server · v1
AgentCrush MCP Server
Connect AgentCrush as a live data layer in any MCP-compatible LLM client (Claude Desktop, Cursor, custom agents). 7 read-only tools spanning the 4 category indices: model families, tokenized agents, service agents, developer agents.
- Protocol: MCP 2024-11-05
- Transport: HTTP POST
- Auth: none
- Rate limit: 60 requests/min per IP
- Endpoint: https://www.agentcrush.xyz/api/mcp/v1
- Discovery manifest: https://www.agentcrush.xyz/.well-known/mcp.json
Connect to Claude Desktop
Add this to your claude_desktop_config.json (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json).
```json
{
  "mcpServers": {
    "agentcrush": {
      "url": "https://www.agentcrush.xyz/api/mcp/v1"
    }
  }
}
```

Restart Claude Desktop and the 7 AgentCrush tools appear in the available tool list. The same config format works in Cursor and other MCP clients.
Tools (7)
`search_agents`
Search AI agents by name or keyword. Returns matching agents with category, tier, and rank info. Use the structured `filters` object for constraints; future versions can add filter keys without breaking the API.
Example arguments:
```json
{ "query": "qwen", "filters": { "primary_category": "model_family", "evidence_ranked_only": true, "limit": 10 } }
```

`get_agent_details`
Full agent details, including scores across all categories the agent qualifies for. Joins all 4 scoring views and returns identity, raw signals, sub-scores, and evidence-ready status.
Example arguments:
```json
{ "handle": "qwen" }
```

`get_agent_history`
Daily rank and score snapshots over the past 1–90 days, with a trend summary. Useful for showing how an agent's standing has evolved.
Example arguments:
```json
{ "handle": "crewai", "days": 30 }
```

`compare_agents`
Side-by-side comparison of 2–5 agents across all their categories. Returns full per-agent scoring breakdowns.
Example arguments:
```json
{ "handles": ["qwen", "gemini", "llama"] }
```

`list_categories`
The 4 AgentCrush category indices, with tracked and evidence-ranked counts and methodology versions. Use it to discover what kinds of agents AgentCrush tracks.
Example arguments:
```json
{}
```

`get_category_ranking`
Full ranking for a specific category. Returns agents ordered by composite score with all sub-scores visible. Defaults to evidence-ranked agents only.
Example arguments:
```json
{ "category": "model_family", "evidence_ready_only": true, "limit": 50 }
```

`get_methodology`
Scoring methodology for a category: weights, signal sources, formulas, the evidence-ready rule, and known limitations. Methodology travels with the data, so LLMs can answer "how does this ranking work?" accurately.
Example arguments:
```json
{ "category": "tokenized" }
```

Categories
The `category` argument accepts one of:
- `model_family`
- `tokenized`
- `service`
- `developer`

Each category has its own methodology version (model_family v1.4-with-deployment, tokenized v1.1-tokenized-tvl, service v1.1-service-forks, developer v2.c-public). Call `get_methodology(category)` to retrieve weights, signals, the evidence-ready rule, and limitations.
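Every tool uses the same JSON-RPC 2.0 `tools/call` envelope, so a client only needs one request builder. A minimal Python sketch (the `tools_call` helper is illustrative, not part of the API; only the endpoint, method names, and tool arguments come from this page):

```python
import json

ENDPOINT = "https://www.agentcrush.xyz/api/mcp/v1"

def tools_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call envelope for an AgentCrush tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Request body for a category ranking, ready to POST to ENDPOINT
# with a Content-Type: application/json header.
body = json.dumps(tools_call("get_category_ranking",
                             {"category": "model_family", "limit": 50}))
```

The same builder covers all 7 tools; only `name` and `arguments` change per call.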
curl examples
List all tools (introspection)
```shell
curl -s https://www.agentcrush.xyz/api/mcp/v1 \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
```

Get full agent details (cross-category scores)
```shell
curl -s https://www.agentcrush.xyz/api/mcp/v1 \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "get_agent_details",
      "arguments": { "handle": "qwen" }
    }
  }'
```

Get methodology for a category (weights + limitations)
```shell
curl -s https://www.agentcrush.xyz/api/mcp/v1 \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "get_methodology",
      "arguments": { "category": "model_family" }
    }
  }'
```

Behaviors
- Fuzzy-match on not-found. If `get_agent_details(handle: "qwn")` can't find the agent, the response includes a `suggestions` array of similar handles. LLMs use this to avoid hallucinating "agent doesn't exist."
- Cache-Control headers. Methodology responses cache for 1h, rankings for 5min, history for 3min, search for 1min. Stale-while-revalidate is respected.
- Standard JSON-RPC error codes. -32700 parse error, -32600 invalid request, -32601 unknown method, -32603 internal error, -32029 rate limit exceeded.
- Structured filters. `search_agents` takes `filters: {...}` as an object, so new filter keys (date ranges, score thresholds) can be added without breaking existing callers.
- v0 still alive. The legacy endpoint `/api/mcp` (4 tools, no category awareness) remains live for backward compatibility. Migrate to `/api/mcp/v1` for the full 7-tool surface.
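The fuzzy-match behavior can be consumed defensively in client code. A sketch, assuming the `suggestions` array and a `handle` field sit at the top level of the decoded tool result (the exact response shape is not documented on this page, so treat these field positions as assumptions):

```python
from typing import Optional

def resolve_handle(result: dict) -> Optional[str]:
    """Return the agent's handle from a get_agent_details result,
    falling back to the closest fuzzy-match suggestion on not-found."""
    if "handle" in result:  # exact match: the agent exists
        return result["handle"]
    suggestions = result.get("suggestions", [])
    # No match: retry with the first suggestion rather than inventing
    # an answer; None means "genuinely unknown agent".
    return suggestions[0] if suggestions else None
```

A caller that gets a suggestion back can re-issue `get_agent_details` with the suggested handle; a `None` result should be surfaced as "not tracked," never papered over.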
Rate limits
60 requests per minute per IP. Every response includes:
- X-RateLimit-Limit — always 60
- X-RateLimit-Remaining — remaining requests this window
- X-RateLimit-Reset — seconds until window resets
Need a higher rate limit for a production agent? Email contact@agentcrush.xyz.
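A client can use these three headers to pace itself instead of tripping the -32029 error. A minimal sketch (header names are from this page; the helper and its defaults are illustrative):

```python
def seconds_to_wait(headers: dict) -> int:
    """Decide how long to sleep before the next request: 0 while the
    60/min window still has budget, otherwise wait out the reset."""
    remaining = int(headers.get("X-RateLimit-Remaining", "0"))
    reset = int(headers.get("X-RateLimit-Reset", "0"))
    return 0 if remaining > 0 else reset
```

Calling `time.sleep(seconds_to_wait(response.headers))` between requests keeps a polling agent inside the 60/min window.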
Methodology travels with data
When an LLM uses AgentCrush data and a user asks "how does this ranking work?", the LLM can call get_methodology(category) and answer accurately — weights, signal sources, evidence-ready rule, known limitations. We document our methodology because if it's not auditable, it's not a methodology.