TokenMix Research Lab · 2026-04-24
Claude Code Router: Configuration + Troubleshooting 2026
Claude Code Router is a community-maintained proxy tool that lets Claude Code (the terminal AI agent) talk to any OpenAI-compatible LLM — not just Anthropic's. 2,900+ GitHub stars, installs via npm, works with GPT-5.4, Gemini 3.1 Pro, GLM-5.1, DeepSeek V3.2, Qwen3-Max, and any model hosted on TokenMix.ai. Typical use case: cap Claude Code's cost ceiling by routing non-reasoning queries to cheaper models, keeping Opus 4.7 only for the hard tasks. This guide covers config file structure, the 5 most common errors ("claude router connection refused", "model not found", "authentication failed"), and which provider mixes deliver the best cost-quality ratio. Verified as of April 24, 2026.
Table of Contents
- Confirmed vs Speculation
- What Claude Code Router Actually Does
- Installation in 3 Commands
- Config File: The 5 Provider Blocks You Need
- Common Errors 1-5 with Fixes
- Recommended Provider Mixes
- FAQ
Confirmed vs Speculation
| Claim | Status | Source |
|---|---|---|
| Claude Code Router exists as npm package | Confirmed | GitHub musistudio/claude-code-router |
| Works with OpenAI-compatible providers | Confirmed | Repo docs |
| Routes Claude Code terminal agent | Confirmed | Core feature |
| Official Anthropic support | No — community project | — |
| Stable enough for production | Partial — core routing stable, edge-case bugs remain | Issue tracker |
| Replaces paid Claude Code subscription | Yes when BYOK (bring your own keys) | — |
| Supports streaming | Yes | v2.x+ |
| Supports tool use / function calling | Yes | v2.3+ |
Snapshot note (2026-04-24): Provider model names in the config examples below reflect TokenMix.ai's routing slugs for currently-shipped models (Claude Opus 4.7, Sonnet 4.6, Haiku 4.5, GPT-5.4, Gemini 3.1 Pro). GPT-5.5 and DeepSeek V4 launched April 23 — update slugs if you want to route to the newer entries. Pricing claims in the "Recommended Provider Mixes" table are estimates; verify via your aggregator's live pricing page.
What Claude Code Router Actually Does
Claude Code natively only talks to Anthropic's API. The router intercepts that traffic and:
- Accepts requests in Claude Code's format
- Translates them to the target provider's schema
- Routes based on your config (model name, keyword triggers, cost tier)
- Returns responses in Claude-compatible format
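To make the first step concrete, here is a minimal probe you can aim at the router once the daemon is running. The payload follows Anthropic's public Messages API shape, which is what the router accepts; the `curl` line is commented out so the snippet only builds and validates the JSON locally (uncomment it with `ccr start` running).

```shell
# Build a Claude-format request body (Anthropic Messages API shape)
cat > /tmp/ccr-probe.json <<'EOF'
{
  "model": "claude-opus-4-7",
  "max_tokens": 64,
  "messages": [{"role": "user", "content": "ping"}]
}
EOF

# Validate the payload locally before sending anything
python3 -m json.tool /tmp/ccr-probe.json >/dev/null && echo "payload OK"

# With the daemon up, send it through the router (the API key is ignored):
# curl -s http://localhost:3456/v1/messages \
#   -H "content-type: application/json" \
#   -H "x-api-key: any-string" \
#   -d @/tmp/ccr-probe.json
```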
Why developers want this:
- Cost control: route /doc and /explain to GPT-5.4-mini ($0.20/MTok); route /refactor to Opus 4.7 ($5/MTok)
- Provider redundancy: fall back from Anthropic rate limits to GLM-5.1 or GPT-5.4
- Specialized model routing: send coding to Qwen3-Coder-Plus, send reasoning to DeepSeek R1
- Compliance: route sensitive data through on-prem or EU-hosted endpoints
Installation in 3 Commands
# 1. Install globally via npm
npm install -g @musistudio/claude-code-router
# 2. Initialize config
ccr init
# 3. Start router (daemon on port 3456)
ccr start
Then configure Claude Code to point to the router:
export ANTHROPIC_BASE_URL="http://localhost:3456"
export ANTHROPIC_API_KEY="any-string-router-ignores-this"
# Now Claude Code uses the router
claude
Config File: The 5 Provider Blocks You Need
Config lives at ~/.claude-code-router/config.json. Minimal production-grade example:
{
"providers": [
{
"name": "tokenmix",
"api_base_url": "https://api.tokenmix.ai/v1",
"api_key": "$TOKENMIX_KEY",
"models": [
"anthropic/claude-opus-4-7",
"anthropic/claude-sonnet-4-6",
"anthropic/claude-haiku-4-5",
"openai/gpt-5-4",
"openai/gpt-5-4-mini",
"z-ai/glm-5.1",
"deepseek/deepseek-v3.2"
]
}
],
"router": {
"default": "tokenmix,anthropic/claude-opus-4-7",
"background": "tokenmix,openai/gpt-5-4-mini",
"think": "tokenmix,deepseek/deepseek-reasoner",
"longContext": "tokenmix,google/gemini-3.1-pro",
"webSearch": "tokenmix,openai/gpt-5-4"
}
}
The router keys correspond to Claude Code's internal task classifications:
- default — regular interactive queries
- background — auto-titling, summarization (use a cheap model)
- think — extended reasoning mode
- longContext — prompts over 50K tokens
- webSearch — web tool calls
Routing through TokenMix.ai lets you access 300+ models via one provider block instead of configuring each provider separately.
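The providers array accepts more than one block, which is how the provider-redundancy and compliance setups above work: add a second endpoint and point individual router entries at it. A sketch only — the "eu-onprem" name, its base URL, and $ONPREM_KEY are placeholders for your own infrastructure, not real endpoints:

```json
{
  "providers": [
    {
      "name": "tokenmix",
      "api_base_url": "https://api.tokenmix.ai/v1",
      "api_key": "$TOKENMIX_KEY",
      "models": ["anthropic/claude-opus-4-7"]
    },
    {
      "name": "eu-onprem",
      "api_base_url": "https://llm.internal.example.eu/v1",
      "api_key": "$ONPREM_KEY",
      "models": ["z-ai/glm-5.1"]
    }
  ],
  "router": {
    "default": "tokenmix,anthropic/claude-opus-4-7",
    "background": "eu-onprem,z-ai/glm-5.1"
  }
}
```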
Common Errors 1-5 with Fixes
Error 1: Claude Router Connection Refused
Symptom: claude command fails with "connection refused on port 3456".
Fix: router daemon not running. Start with ccr start. Check port isn't occupied: lsof -i :3456.
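A small shell helper that wraps the port check (the function name is ours, not part of ccr; 3456 is the default port from the installation step):

```shell
# Report whether anything is listening on a given port.
# `lsof -i :PORT` exits non-zero when the port is free (or lsof is absent).
port_status() {
  if lsof -i :"$1" >/dev/null 2>&1; then
    echo "in use"
  else
    echo "free"
  fi
}

# "free" means the daemon isn't running: start it with `ccr start`.
# "in use" by something other than ccr means you need to free the port.
echo "port 3456 is $(port_status 3456)"
```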
Error 2: Model not found in provider
Symptom: requests fail with "model 'claude-opus-4-7' not found".
Fix: model name in Claude Code doesn't match provider's exact name. Use anthropic/claude-opus-4-7 not just claude-opus-4-7. Check provider's model list: curl https://api.tokenmix.ai/v1/models | jq '.data[].id'.
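When the mismatch isn't obvious, it helps to list every provider/model pair your router section actually references and diff that against the provider's /v1/models output. A sketch that operates on a sample config written to a temp file, so you can point the open() call at ~/.claude-code-router/config.json instead:

```shell
# Sample config (stand-in for ~/.claude-code-router/config.json)
cat > /tmp/ccr-config.json <<'EOF'
{
  "router": {
    "default": "tokenmix,anthropic/claude-opus-4-7",
    "background": "tokenmix,openai/gpt-5-4-mini"
  }
}
EOF

# Print each task's provider and model slug for manual comparison
python3 - <<'EOF'
import json
cfg = json.load(open("/tmp/ccr-config.json"))
for task, target in cfg["router"].items():
    provider, model = target.split(",", 1)
    print(f"{task}: provider={provider} model={model}")
EOF
```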
Error 3: Authentication Failed
Symptom: 401 response from upstream.
Fix: $TOKENMIX_KEY not exported. Run export TOKENMIX_KEY="sk-..." before ccr start. If the key was recently rotated, grab the new one from your provider dashboard.
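A pre-flight check you can run before ccr start, assuming your config interpolates $TOKENMIX_KEY as in the example earlier (the helper function is ours, not part of ccr):

```shell
# Report whether the key the router will forward upstream is exported.
key_status() {
  if [ -n "${1:-}" ]; then echo "present"; else echo "missing"; fi
}

# Prints "missing" until you export TOKENMIX_KEY in this shell
echo "TOKENMIX_KEY: $(key_status "${TOKENMIX_KEY:-}")"
```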
Error 4: Unsupported tool_use format
Symptom: tool/function calling requests fail on non-Anthropic providers.
Fix: some providers use different tool schemas. Claude Code Router v2.3+ handles the translation; make sure you're on the latest version: npm update -g @musistudio/claude-code-router.
Error 5: Streaming buffer desync
Symptom: responses appear in bursts instead of smooth streaming.
Fix: specific to a handful of providers. Disable streaming for those by adding "stream": false under the affected router entry.
Recommended Provider Mixes
Tested configurations from community:
| Use case | Default | Background | Think | Long context | Monthly cost (100M tokens) |
|---|---|---|---|---|---|
| Cost-optimized | GPT-5.4-mini | GPT-5.4-mini | GPT-OSS-120B | Gemini 3.1 Flash | ~ |