The methodology
How AgentCrush ranks the agent economy
AgentCrush is the evidence-ranked index of the agent economy. We don't pick winners — we publish multi-signal evidence with transparent weights. Different agent categories leave different evidence trails, so we run four category-specific methodologies, each with its own signal sources, weights, and evidence-ready rule.
Principles
Multi-signal corroboration. No agent is evidence-ranked on a single signal. Every category requires at least 3 of N signals available, AND at least one of those signals must be a capability signal — not just popularity. Downloads and stars are vanity metrics on their own.
Per-category methodology. A model family leaves HuggingFace downloads and LMArena scores; a tokenized agent leaves on-chain liquidity and holder distribution; a service agent leaves GitHub forks and Agentverse interactions. Running one universal scoring function across all of them would average away the truth.
Methodology travels with data. Every category page publishes its full signal set, weights, formulas, evidence-ready rule, and known limitations. The same methodology is exposed via our MCP server so LLMs querying AgentCrush can correctly explain HOW a ranking was computed — not just what it is.
Honest gaps. Where a signal isn't yet populated for an agent (no LMArena coverage, no citations indexed, etc.), the methodology returns NULL — not 0. That distinction matters: NULL means "unmeasured," 0 means "measured at zero." The composite weights unmeasured signals as missing rather than failing.
Live coverage
135 total evidence-ranked agents across 4 categories.
Model Families
v1.4-with-deployment
Scores model families (Hermes, Llama, Mistral, Qwen, DeepSeek, etc.) on adoption, capability, downstream usage, research impact, and cross-protocol agent-economy deployment.
Signals
Weighted basket of 5 sub-scores:
- LMArena capability: LEAST(100, ROUND((MAX(arena_score) − 700) / 8))
- Derivatives: LEAST(100, ROUND(LOG10(SUM(derivatives_count)) × 25))
- Citations: LEAST(100, ROUND(LOG10(SUM(citation_count)) × 16))
- Deployment: LEAST(100, ROUND(LOG10(SUM(deployment_count)) × 30))
Evidence-ready rule
3 of 5 signals AND ≥1 capability signal (derivatives, LMArena, citations, or deployment).
Known limitations
- Currently 5 seeded model families (Qwen, Gemini, DeepSeek, Llama, Hermes). View covers all model_family agents; seed set is curated.
- Citation backfill depends on Semantic Scholar API; some papers may have 0 cites due to S2 indexing delay.
- Deployment signal is volume-based — high counts can indicate broad model adoption rather than specific deployment of one variant.
Tokenized Agents
v1.1-tokenized-tvl
Scores tokenized AI agents (Virtuals Protocol, etc.) economics-first: market cap, on-chain liquidity, holder distribution, capital locked, plus social visibility.
Signals
- Market cap: LEAST(100, ROUND(LOG10(market_cap_usd) × 12))
- Liquidity: liquidity_score × 0.65 + volume_score × 0.35
- Holder distribution: holders_count_score × 0.55 + (100 − top10_pct) × 0.45
- Momentum: GREATEST(0, LEAST(100, 50 + price_change_pct))
- TVL: LEAST(100, ROUND(LOG10(tvl_usd) × 14))
- Social visibility: socially_visible ? 100 : 0
Evidence-ready rule
3 of 6 signals AND ≥1 economic signal (market cap, liquidity, holders, or TVL > 0).
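Three of the formulas above, transcribed into Python as a sketch (parameter names mirror the published expressions; this is not AgentCrush's code):

```python
import math

def market_cap_score(market_cap_usd: float) -> int:
    """LEAST(100, ROUND(LOG10(market_cap_usd) * 12))."""
    return min(100, round(math.log10(market_cap_usd) * 12))

def holder_score(holders_count_score: float, top10_pct: float) -> float:
    """Rewards many holders AND a small share held by the top 10 wallets."""
    return holders_count_score * 0.55 + (100 - top10_pct) * 0.45

def momentum_score(price_change_pct: float) -> float:
    """Clamped to [0, 100], centered at 50 for a flat price."""
    return max(0, min(100, 50 + price_change_pct))

print(market_cap_score(10_000_000))  # log10(1e7) = 7, 7 * 12 = 84
print(holder_score(60, 40))          # 33 + 27 = 60.0
print(momentum_score(-80))           # clamped to 0
```

Note the clamps: a 200% pump and a 60% pump both score 100 on momentum, which keeps one volatile day from dominating the composite.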
Known limitations
- Cross-protocol presence signal tracked but currently unweighted — agent economy hasn't penetrated cross-protocol descriptions enough yet.
- Social signal in v1.1 is binary; aixbt is the only socially-flagged agent.
- Currently covers Virtuals Protocol agents only (16 promoted). Other tokenized ecosystems not yet integrated.
Service Agents
v1.1-service-forks
Scores service agents (A2A protocol, Agentverse, x402, ERC-8004) on adoption, source quality, activity recency, protocol breadth, and fork engagement.
Signals
- Adoption: GREATEST(stars_log × 18, interactions_log × 22)
- Source quality: GREATEST(a2a_signal_strength, ROUND(av_rating × 20))
- Activity recency: time-bucketed (7d→100, 30d→80, 90d→60, 180d→40, 365d→20)
- Protocol breadth: LEAST(100, COUNT(protocols) × 25)
- Fork engagement: LEAST(100, ROUND(LOG10(forks) × 22))
- Sixth signal: currently NULL (placeholder)
Evidence-ready rule
3 of 6 signals AND ≥1 adoption signal (stars > 0, interactions > 0, or forks > 0).
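The time-bucketed recency signal above can be sketched as follows. Bucket edges are taken from the published mapping; behavior beyond 365 days is an assumption (scored 0 here, since the source does not say):

```python
def recency_score(days_since_last_activity: int) -> int:
    """Step function over days since last activity: fresher is higher."""
    buckets = [(7, 100), (30, 80), (90, 60), (180, 40), (365, 20)]
    for max_days, score in buckets:
        if days_since_last_activity <= max_days:
            return score
    return 0  # assumption: older than a year scores zero

print(recency_score(3))    # within 7d  -> 100
print(recency_score(45))   # within 90d -> 60
print(recency_score(400))  # beyond a year -> 0 (assumed)
```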
Known limitations
- Currently sources from A2A (28 agents) + Agentverse (0 active in current scrape).
- v1.2 will add ERC-8004 registry (29K agents) and Bazaar x402 endpoints (46K) as additional service surfaces.
- Cross-protocol presence tracked in cross_protocol_presence but unweighted in v1.1 composite.
Developer Agents
v2.c-public
Scores developer-tool agents (frameworks, runtimes, dev tools) on GitHub activity, package usage, dependency adoption, ecosystem links, docs, discourse, and trust signals. The universal ranking surface.
Signals
Seven sub-scores, one per dimension listed above:
- weighted by active_weight_total
- log-scaled per ecosystem
- log-scaled count
- composite heuristic 0–100
- graph-distance score
- log-scaled
- composite 0–100
Evidence-ready rule
Multi-signal coverage threshold OR top-100 ranked OR single signal ≥ 90 with ≥ 2 corroborating signals > 50.
Known limitations
- Methodology weights are computed dynamically per agent (active_weight_total) rather than fixed.
- The universal ranking includes 1,289 agents; the evidence_ranked subset is the public rank list.
For machine consumers
The same methodology is exposed via our MCP server. LLMs (Claude Desktop, Cursor, custom agents) can query AgentCrush as a live data layer and explain ranking decisions accurately.
Endpoint
POST https://www.agentcrush.xyz/api/mcp/v1
Discovery
GET https://www.agentcrush.xyz/.well-known/mcp.json
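A minimal client sketch against these two endpoints, assuming the POST endpoint accepts JSON-RPC 2.0 bodies (the transport MCP servers commonly use). The method name "rankings.explain" and its params are placeholders; the real method catalog comes from the discovery manifest, not from this sketch.

```python
import json
import urllib.request

DISCOVERY_URL = "https://www.agentcrush.xyz/.well-known/mcp.json"
MCP_URL = "https://www.agentcrush.xyz/api/mcp/v1"

def envelope(method: str, params: dict, req_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request body."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

def discover() -> dict:
    """Fetch the MCP discovery manifest."""
    with urllib.request.urlopen(DISCOVERY_URL) as resp:
        return json.load(resp)

def call(method: str, params: dict) -> dict:
    """POST one JSON-RPC request to the MCP endpoint."""
    req = urllib.request.Request(
        MCP_URL,
        data=json.dumps(envelope(method, params)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. call("rankings.explain", {"agent": "hermes"})  # placeholder method name
```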