brcc = Claude Code + any model + no rate limits + pay-per-use USDC
The goal: let any developer run Claude Code with any LLM, selecting and switching models freely and paying only for what they use.
User runs: brcc start --model openai/gpt-5.4
brcc proxy (localhost:8402)
├── Receives Anthropic-format requests from Claude Code
├── Overrides model name to user's choice
├── Signs x402 payment with user's wallet
└── Forwards to blockrun.ai/api/v1/messages
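The rewrite-and-forward step above can be sketched in TypeScript. This is illustrative, not brcc's actual code: the request shape is a minimal subset of the Anthropic Messages format, and `signX402` and the `x-payment` header name are placeholders for the real x402 wallet-signing flow.

```typescript
// Minimal subset of an Anthropic-format request from Claude Code.
interface AnthropicRequest {
  model: string;
  max_tokens: number;
  messages: { role: string; content: string }[];
}

// Override whatever model Claude Code asked for with the user's --model choice.
function rewriteModel(req: AnthropicRequest, userModel: string): AnthropicRequest {
  return { ...req, model: userModel };
}

// Stand-in for x402 signing — the real flow signs a payment payload
// with the user's wallet key.
function signX402(payload: string): string {
  return `x402 signed(${payload.length} bytes)`;
}

// Forward the rewritten request to BlockRun with the payment attached.
async function forward(req: AnthropicRequest, userModel: string): Promise<Response> {
  const body = JSON.stringify(rewriteModel(req, userModel));
  return fetch("https://blockrun.ai/api/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-payment": signX402(body), // hypothetical header name
    },
    body,
  });
}
```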
Working:
- `brcc setup base|solana` — create wallet
- `brcc start --model <model>` — start proxy + launch Claude Code
- `brcc models` — list 50+ models with pricing
- `brcc balance` — check USDC balance
- Dual-chain support (Base + Solana)
- Free model (nvidia/nemotron-ultra-253b) tested end-to-end
Limitations:
- Cannot switch models inside Claude Code — must restart with a different `--model`
- Claude Code's `/model` picker only shows Anthropic models
- Auth conflict warning when the user has an existing claude.ai login
Let users switch to any BlockRun model inside Claude Code without restarting.
From tweakcc:
- `allowCustomAgentModels` patch — removes Claude Code's model name validation
- Enables arbitrary model names in Claude Code's `/model` picker
- Patches Claude Code's minified JS directly
From claude-code-router:
- Dynamic `/model provider,model_name` syntax inside Claude Code
- Smart routing based on task type (simple/complex/reasoning/long-context)
- Transformer chain for format conversion between providers
- Per-subagent model override via tags
brcc patch        # Patch Claude Code to allow custom model names
brcc patch --undo # Restore original Claude Code

Key patches to apply:
- `allowCustomAgentModels` — unlock arbitrary model names
- `contextLimit` override — support models with different context windows
- `subagentModels` — set different models for plan/explore/general-purpose agents
Reference: /Users/vickyfu/tmp/tweakcc/src/patches/
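The tweakcc-style patching idea can be sketched as a find/replace over Claude Code's minified `cli.js`, with a backup for `brcc patch --undo`. The function and the regex used in the usage note are invented for illustration; the real patch definitions live in tweakcc's patches directory.

```typescript
import { copyFileSync, readFileSync, writeFileSync } from "node:fs";

// Apply a single patch to a minified JS bundle:
// - bail out if the pattern isn't found (already patched, or the bundle changed)
// - keep a .bak copy so the patch can be undone later
function applyPatch(cliPath: string, find: RegExp, replace: string): boolean {
  const src = readFileSync(cliPath, "utf8");
  if (!find.test(src)) return false;
  copyFileSync(cliPath, cliPath + ".bak"); // used by `brcc patch --undo`
  writeFileSync(cliPath, src.replace(find, replace));
  return true;
}
```

For example, an `allowCustomAgentModels`-style patch would locate the model-name validation check and replace it with an expression that always passes, e.g. `applyPatch(path, /VALID\.includes\(m\)/, "true")` against a hypothetical validation snippet.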
Borrow from ClawRouter's 14-dimension classifier:
Claude Code request → brcc proxy analyzes request →
Simple task (definitions, math) → cheapest model (DeepSeek, NVIDIA free)
Code task (editing, debugging) → code-optimized model (Claude Sonnet, GPT-5)
Reasoning task (proofs, planning) → reasoning model (o3, Grok reasoning)
Long context (large files) → large context model (GPT-5.4 1M, Gemini)
User can override with explicit /model selection.
Reference: /Users/vickyfu/Documents/blockrun-web/ClawRouter/
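The routing rules above can be sketched as a tiny classifier. This is a much-simplified stand-in for ClawRouter's 14-dimension classifier: the regexes, thresholds, and tier-to-model table are all illustrative.

```typescript
type Tier = "free" | "code" | "reasoning" | "long-context";

// Hypothetical tier → model table mirroring the routing rules above.
const MODELS: Record<Tier, string> = {
  free: "nvidia/nemotron-ultra-253b",
  code: "anthropic/claude-sonnet-4.6",
  reasoning: "openai/o3",
  "long-context": "openai/gpt-5.4",
};

// Toy classifier: token count and a few keyword heuristics.
function classify(prompt: string, tokenCount: number): Tier {
  if (tokenCount > 100_000) return "long-context";
  if (/\bprove\b|\bplan\b|step by step/i.test(prompt)) return "reasoning";
  if (/```|\bfunction\b|\bdebug\b|\brefactor\b/.test(prompt)) return "code";
  return "free"; // simple tasks go to the cheapest model
}

// An explicit /model selection always overrides the classifier.
function route(prompt: string, tokenCount: number, override?: string): string {
  return override ?? MODELS[classify(prompt, tokenCount)];
}
```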
brcc start sets these env vars so Claude Code's built-in /model picker maps to BlockRun models:
ANTHROPIC_DEFAULT_SONNET_MODEL=anthropic/claude-sonnet-4.6
ANTHROPIC_DEFAULT_OPUS_MODEL=anthropic/claude-opus-4.6
ANTHROPIC_DEFAULT_HAIKU_MODEL=deepseek/deepseek-chat # cheap alternative
CLAUDE_CODE_SUBAGENT_MODEL=anthropic/claude-haiku-4.5

User can customize these in ~/.blockrun/brcc-config.json.
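A sketch of how `brcc start` could assemble this environment and launch Claude Code. The defaults match the values above; the override-merge from the config file is illustrative, not brcc's actual code.

```typescript
import { spawn } from "node:child_process";

// Build the env Claude Code is launched with: proxy endpoint, dummy key,
// and the model mappings, with user config overrides applied last.
function claudeEnv(
  userOverrides: Record<string, string> = {},
): Record<string, string | undefined> {
  return {
    ...process.env,
    ANTHROPIC_BASE_URL: "http://localhost:8402",
    ANTHROPIC_API_KEY: "brcc-dummy-key", // proxy handles the real auth/payment
    ANTHROPIC_DEFAULT_SONNET_MODEL: "anthropic/claude-sonnet-4.6",
    ANTHROPIC_DEFAULT_OPUS_MODEL: "anthropic/claude-opus-4.6",
    ANTHROPIC_DEFAULT_HAIKU_MODEL: "deepseek/deepseek-chat",
    CLAUDE_CODE_SUBAGENT_MODEL: "anthropic/claude-haiku-4.5",
    ...userOverrides, // values from ~/.blockrun/brcc-config.json win
  };
}

// Launch Claude Code attached to the current terminal.
function launchClaude(): void {
  spawn("claude", [], { env: claudeEnv(), stdio: "inherit" });
}
```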
For non-Anthropic models, the proxy needs format conversion:
Claude Code (Anthropic format)
→ brcc proxy
→ If Anthropic model: pass through to /v1/messages
→ If OpenAI model: convert Anthropic→OpenAI format, call /v1/chat/completions
→ Convert response back to Anthropic format
→ Claude Code
BlockRun's /v1/messages endpoint already handles this server-side, so the proxy just forwards. But for edge cases (streaming, tool calling), client-side transformation may be needed.
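The client-side fallback conversion can be sketched for the simplest case: mapping an Anthropic `/v1/messages` body to an OpenAI `/v1/chat/completions` body. The interfaces cover only the common fields; as noted above, streaming and tool calls need more work.

```typescript
// Minimal subsets of the two request formats.
interface AnthropicReq {
  model: string;
  system?: string;
  max_tokens: number;
  messages: { role: "user" | "assistant"; content: string }[];
}

interface OpenAIReq {
  model: string;
  max_tokens: number;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
}

// Anthropic carries the system prompt as a top-level field;
// OpenAI expects it as the first message.
function anthropicToOpenAI(req: AnthropicReq): OpenAIReq {
  const messages: OpenAIReq["messages"] = [];
  if (req.system) messages.push({ role: "system", content: req.system });
  messages.push(...req.messages);
  return { model: req.model, max_tokens: req.max_tokens, messages };
}
```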
One-line install that sets up everything.
# Install brcc + Claude Code + patch in one command
curl -fsSL https://brcc.blockrun.ai/install.sh | bash

The install script:
- Installs Node.js if missing
- Installs Claude Code: `curl -fsSL https://claude.ai/install.sh | bash`
- Installs brcc: `npm install -g @blockrun/cc`
- Creates the wallet: `brcc setup base`
- Patches Claude Code: `brcc patch`
- Shows the wallet address + funding instructions
brcc patch --undo # Restore Claude Code
npm uninstall -g @blockrun/cc

Zero-config experience — brcc auto-picks the best model for each task.
brcc start --smart # Auto-route every request to the optimal model

Uses ClawRouter's classifier to analyze each request:
- Token count, code presence, reasoning markers, creative markers
- Routes to cheapest capable model
- User sets a budget: `brcc start --smart --budget 0.01` (max $0.01 per request)
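One way the `--budget` cap could work is a pre-flight cost estimate with a fallback to a cheaper model. This is a sketch under assumptions: the price table values are placeholders (not BlockRun's real pricing), and brcc's actual enforcement logic may differ.

```typescript
// Illustrative $/1M input tokens — not real BlockRun prices.
const PRICE_PER_1M_INPUT: Record<string, number> = {
  "anthropic/claude-sonnet-4.6": 3.0,
  "deepseek/deepseek-chat": 0.27,
  "nvidia/nemotron-ultra-253b": 0.0,
};

// Rough pre-flight estimate from the input token count alone.
function estimateCost(model: string, inputTokens: number): number {
  return ((PRICE_PER_1M_INPUT[model] ?? 0) * inputTokens) / 1_000_000;
}

// Keep the requested model if it fits the per-request budget,
// otherwise fall back to the cheapest model in the table.
function withinBudget(model: string, inputTokens: number, budgetUsd: number): string {
  if (estimateCost(model, inputTokens) <= budgetUsd) return model;
  const cheapest = Object.entries(PRICE_PER_1M_INPUT).sort((a, b) => a[1] - b[1])[0][0];
  return cheapest;
}
```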
Teams share a wallet and track per-developer usage.
brcc team create "My Team"
brcc team add dev@example.com
brcc team budget 100 # $100 monthly budget
brcc team usage # Per-developer breakdown

| Repo | What to borrow |
|---|---|
| tweakcc | JS patching engine, allowCustomAgentModels, subagentModels, prompt customization |
| claude-code-router | Dynamic /model switching, transformer chain, smart routing by task type |
| ClawRouter | 14-dimension request classifier, cost-optimized model selection |
| OpenRouter | ANTHROPIC_DEFAULT_*_MODEL env vars, ANTHROPIC_AUTH_TOKEN pattern |
ANTHROPIC_BASE_URL # API endpoint (brcc sets to localhost proxy)
ANTHROPIC_API_KEY # API key (brcc sets dummy key)
ANTHROPIC_MODEL # Default model
ANTHROPIC_DEFAULT_OPUS_MODEL # What /model opus resolves to
ANTHROPIC_DEFAULT_SONNET_MODEL # What /model sonnet resolves to
ANTHROPIC_DEFAULT_HAIKU_MODEL # What /model haiku resolves to
CLAUDE_CODE_SUBAGENT_MODEL # Model for subagents
ANTHROPIC_CUSTOM_MODEL_OPTION # Custom model in /model picker (no validation)

brcc start
│
├── Patch Claude Code (tweakcc engine)
│ └── Unlock custom model names
│
├── Start proxy (localhost:8402)
│ ├── Request classifier (ClawRouter)
│ ├── Model router (CCR pattern)
│ ├── Transformer chain (Anthropic↔OpenAI)
│ ├── x402 payment signing
│ └── Response streaming
│
└── Launch Claude Code
├── ANTHROPIC_BASE_URL → proxy
├── Model env vars → BlockRun models
└── /model picker → all 50+ models