MCP Server · v1

AgentCrush MCP Server

Connect AgentCrush as a live data layer in any MCP-compatible LLM client (Claude Desktop, Cursor, custom agents). The server exposes 7 read-only tools spanning the 4 category indices: model families, tokenized agents, service agents, and developer agents.

Protocol: MCP 2024-11-05
Transport: HTTP POST
Auth: None
Rate limit: 60/min per IP
Endpoint: POST https://www.agentcrush.xyz/api/mcp/v1

Discovery manifest: https://www.agentcrush.xyz/.well-known/mcp.json
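
A quick way to verify the server is reachable is to fetch the manifest directly (the exact fields returned are whatever the server publishes):

curl -s https://www.agentcrush.xyz/.well-known/mcp.json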

Connect to Claude Desktop

Add this to your claude_desktop_config.json (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json).

{
  "mcpServers": {
    "agentcrush": {
      "url": "https://www.agentcrush.xyz/api/mcp/v1"
    }
  }
}

Restart Claude Desktop and the 7 AgentCrush tools appear in the available tool list. The same config format works in Cursor and other MCP clients, as shown below.
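
For example, Cursor reads the same mcpServers shape from ~/.cursor/mcp.json (the path is Cursor's convention at the time of writing; check your client's docs if it has moved):

{
  "mcpServers": {
    "agentcrush": {
      "url": "https://www.agentcrush.xyz/api/mcp/v1"
    }
  }
}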

Tools (7)

search_agents

Search AI agents by name or keyword. Returns matching agents with category, tier, and rank info. Use the structured filters object for constraints — future versions can add filter keys without breaking the API.

Example arguments

{ "query": "qwen", "filters": { "primary_category": "model_family", "evidence_ranked_only": true, "limit": 10 } }
get_agent_details

Full agent details including scores across ALL categories the agent qualifies for. Joins all 4 scoring views. Returns identity, raw signals, sub-scores, evidence-ready status.

Example arguments

{ "handle": "qwen" }
get_agent_history

Daily rank + score snapshots over the past 1–90 days, with trend summary. Useful for showing how an agent's standing has evolved.

Example arguments

{ "handle": "crewai", "days": 30 }
compare_agents

Side-by-side comparison of 2–5 agents across all their categories. Returns full per-agent scoring breakdowns.

Example arguments

{ "handles": ["qwen", "gemini", "llama"] }
list_categories

The 4 AgentCrush category indices with tracked + evidence-ranked counts and methodology versions. Discover what kinds of agents AgentCrush tracks.

Example arguments

{}

get_category_ranking

Full ranking for a specific category. Returns agents ordered by composite score with all sub-scores visible. Defaults to evidence-ranked only.

Example arguments

{ "category": "model_family", "evidence_ready_only": true, "limit": 50 }
get_methodology

Scoring methodology for a category — weights, signal sources, formulas, evidence-ready rule, AND known limitations. Methodology travels with data so LLMs can answer "how does this ranking work?" accurately.

Example arguments

{ "category": "tokenized" }

Categories

The category argument accepts one of:

model_family · tokenized · service · developer

Each category has its own methodology version (model_family v1.4-with-deployment, tokenized v1.1-tokenized-tvl, service v1.1-service-forks, developer v2.c-public). Call get_methodology(category) to retrieve weights, signals, evidence-ready rule, and limitations.
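
As a quick sketch, all four methodologies can be pulled in one pass from a shell (assumes only curl; the JSON-RPC envelope matches the curl examples below):

for c in model_family tokenized service developer; do
  curl -s https://www.agentcrush.xyz/api/mcp/v1 \
    -H "Content-Type: application/json" \
    -X POST \
    -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/call\",\"params\":{\"name\":\"get_methodology\",\"arguments\":{\"category\":\"$c\"}}}"
done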

curl examples

List all tools (introspection)

curl -s https://www.agentcrush.xyz/api/mcp/v1 \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

Get full agent details (cross-category scores)

curl -s https://www.agentcrush.xyz/api/mcp/v1 \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "get_agent_details",
      "arguments": { "handle": "qwen" }
    }
  }'

Get methodology for a category (weights + limitations)

curl -s https://www.agentcrush.xyz/api/mcp/v1 \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "get_methodology",
      "arguments": { "category": "model_family" }
    }
  }'
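
Search with structured filters (arguments taken from the search_agents example above)

curl -s https://www.agentcrush.xyz/api/mcp/v1 \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "search_agents",
      "arguments": { "query": "qwen", "filters": { "primary_category": "model_family", "evidence_ranked_only": true, "limit": 10 } }
    }
  }'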

Behaviors

Rate limits

60 requests per minute per IP. Every response includes rate-limit headers you can use to pace requests.
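
To see the exact header names and current values, dump the response headers on any call (the body is discarded here):

curl -s -D - -o /dev/null https://www.agentcrush.xyz/api/mcp/v1 \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'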

Need a higher rate limit for a production agent? Email contact@agentcrush.xyz.

Methodology travels with data

When an LLM uses AgentCrush data and a user asks "how does this ranking work?", the LLM can call get_methodology(category) and answer accurately — weights, signal sources, evidence-ready rule, known limitations. We document our methodology because if it's not auditable, it's not a methodology.

Full methodology hub →