Commit 3be5175 (1 parent: e313b83)
Author: 1bcMax

docs: add supported chains, smart routing, X/Twitter, search, image editing sections

1 file changed: README.md (139 additions, 5 deletions)
@@ -5,12 +5,16 @@
 [![npm](https://img.shields.io/npm/v/@blockrun/llm.svg)](https://www.npmjs.com/package/@blockrun/llm)
 [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)
 
-**Networks:**
-- **Base Mainnet:** Chain ID 8453 - Production with real USDC
-- **Base Sepolia (Testnet):** Chain ID 84532 - Developer testing with testnet USDC
-- **Solana Mainnet** - Production with real USDC
+## Supported Chains
+
+| Chain | Network | Payment | Status |
+|-------|---------|---------|--------|
+| **Base** | Base Mainnet (Chain ID: 8453) | USDC | Primary |
+| **Base Testnet** | Base Sepolia (Chain ID: 84532) | Testnet USDC | Development |
+| **Solana** | Solana Mainnet | USDC (SPL) | New |
+
+> **XRPL (RLUSD):** Use [@blockrun/llm-xrpl](https://www.npmjs.com/package/@blockrun/llm-xrpl) for XRPL payments
 
-**Payment:** USDC
 **Protocol:** x402 v2 (CDP Facilitator)
 
 ## Installation
@@ -87,6 +91,64 @@ const tweet = await client.chat('xai/grok-3-mini', 'What is trending on X?', { s
 
 **Your private key never leaves your machine** - it's only used for local signing.
 
+## Smart Routing (ClawRouter)
+
+Let the SDK automatically pick the cheapest capable model for each request:
+
+```typescript
+import { LLMClient } from '@blockrun/llm';
+
+const client = new LLMClient();
+
+// Auto-routes to the cheapest capable model
+const result = await client.smartChat('What is 2+2?');
+console.log(result.response); // '4'
+console.log(result.model); // 'nvidia/kimi-k2.5' (cheap, fast)
+console.log(`Saved ${(result.routing.savings * 100).toFixed(0)}%`); // 'Saved 78%'
+
+// Complex reasoning task -> routes to a reasoning model
+const complex = await client.smartChat('Prove the Riemann hypothesis step by step');
+console.log(complex.model); // 'xai/grok-4-1-fast-reasoning'
+```
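As a rough mental model for the `savings` figure, one can compare the routed model's per-request price against a pricier baseline. This sketch is illustrative only; the SDK's actual savings formula is not documented here, and the prices below are invented:

```typescript
// Hypothetical: deriving a "Saved NN%" figure from two per-request prices.
// Not the SDK's documented formula; price values are made up for illustration.
function savingsVsBaseline(chosenPrice: number, baselinePrice: number): number {
  return 1 - chosenPrice / baselinePrice;
}

// e.g. routed model at $0.0011/request vs a $0.005/request baseline
const saved = savingsVsBaseline(0.0011, 0.005);
console.log(`Saved ${(saved * 100).toFixed(0)}%`); // 'Saved 78%'
```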
+
+### Routing Profiles
+
+| Profile | Description | Best For |
+|---------|-------------|----------|
+| `free` | nvidia/gpt-oss-120b only (FREE) | Testing, development |
+| `eco` | Cheapest models per tier (DeepSeek, xAI) | Cost-sensitive production |
+| `auto` | Best balance of cost/quality (default) | General use |
+| `premium` | Top-tier models (OpenAI, Anthropic) | Quality-critical tasks |
+
+```typescript
+// Use premium models for complex tasks
+const result = await client.smartChat(
+  'Write production-grade async TypeScript code',
+  { routingProfile: 'premium' }
+);
+console.log(result.model); // 'anthropic/claude-opus-4.5'
+```
+
+### How ClawRouter Works
+
+ClawRouter uses a 14-dimension rule-based classifier to analyze each request. Dimensions include:
+
+- **Token count** - short vs. long prompts
+- **Code presence** - programming keywords
+- **Reasoning markers** - "prove", "step by step", etc.
+- **Technical terms** - architecture, optimization, etc.
+- **Creative markers** - story, poem, brainstorm, etc.
+- **Agentic patterns** - multi-step, tool-use indicators
+
+The classifier runs in under 1 ms, 100% locally, and routes each request to one of four tiers:
+
+| Tier | Example Tasks | Auto Profile Model |
+|------|---------------|--------------------|
+| SIMPLE | "What is 2+2?", definitions | nvidia/kimi-k2.5 |
+| MEDIUM | Code snippets, explanations | xai/grok-code-fast-1 |
+| COMPLEX | Architecture, long documents | google/gemini-3.1-pro |
+| REASONING | Proofs, multi-step reasoning | xai/grok-4-1-fast-reasoning |
+
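To make the tiering concrete, here is a toy classifier covering three of the dimensions above (token count, code presence, reasoning markers). The marker lists and thresholds are invented for this sketch; they are not ClawRouter's actual rules:

```typescript
// Toy tier classifier, illustrative only (not ClawRouter's real rules).
type Tier = 'SIMPLE' | 'MEDIUM' | 'COMPLEX' | 'REASONING';

const REASONING_MARKERS = ['prove', 'step by step', 'derive', 'show that'];
const CODE_MARKERS = ['typescript', 'function', 'async', 'class ', 'import '];

function classifyTier(prompt: string): Tier {
  const p = prompt.toLowerCase();
  const tokens = p.split(/\s+/).length; // crude token count

  // Reasoning markers dominate: proofs and multi-step tasks
  if (REASONING_MARKERS.some((m) => p.includes(m))) return 'REASONING';
  // Very long prompts (e.g. whole documents) go to the COMPLEX tier
  if (tokens > 400) return 'COMPLEX';
  // Programming keywords bump the request to MEDIUM
  if (CODE_MARKERS.some((m) => p.includes(m))) return 'MEDIUM';
  return 'SIMPLE';
}

console.log(classifyTier('What is 2+2?')); // 'SIMPLE'
console.log(classifyTier('Prove the Riemann hypothesis step by step')); // 'REASONING'
```

A real classifier would weight all fourteen dimensions rather than short-circuiting on the first match, but the routing idea is the same: cheap string checks, no model call, so classification stays fast and local.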
 ## Available Models
 
 ### OpenAI GPT-5 Family
@@ -203,6 +265,78 @@ All models below have been tested end-to-end via the TypeScript SDK (Feb 2026):
 
 *Testnet models use flat pricing (no token counting) for simplicity.*
 
+## X/Twitter Data (Powered by AttentionVC)
+
+Access X/Twitter user profiles, followers, and followings via the [AttentionVC](https://attentionvc.ai) partner API. No API keys are needed; you pay per request via x402.
+
+```typescript
+import { LLMClient } from '@blockrun/llm';
+
+const client = new LLMClient();
+
+// Look up user profiles ($0.002/user, min $0.02)
+const users = await client.xUserLookup(['elonmusk', 'blockaborr']);
+for (const user of users.users) {
+  console.log(`@${user.userName}: ${user.followers} followers`);
+}
+
+// Get followers ($0.05/page, ~200 accounts)
+let result = await client.xFollowers('blockaborr');
+for (const f of result.followers) {
+  console.log(` @${f.screen_name}`);
+}
+
+// Paginate through all followers
+while (result.has_next_page) {
+  result = await client.xFollowers('blockaborr', result.next_cursor);
+}
+
+// Get followings ($0.05/page)
+const followings = await client.xFollowings('blockaborr');
+```
+
+Works on both `LLMClient` (Base) and `SolanaLLMClient`.
+
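For budgeting, the prices quoted in the comments above ($0.002 per user with a $0.02 minimum; $0.05 per page of roughly 200 followers) translate into simple cost estimates. These helpers are illustrative and not part of the SDK:

```typescript
// Illustrative cost estimators based on the pricing quoted above.
// These are not SDK functions; the SDK bills per request via x402.

// xUserLookup: $0.002 per user, $0.02 minimum per call
function userLookupCost(userCount: number): number {
  return Math.max(0.02, 0.002 * userCount);
}

// xFollowers: $0.05 per page, at roughly 200 accounts per page
function followerScanCost(followerCount: number, accountsPerPage = 200): number {
  return Math.ceil(followerCount / accountsPerPage) * 0.05;
}

console.log(userLookupCost(5)); // 0.02 (minimum applies)
console.log(followerScanCost(1000)); // 0.25 (5 pages)
```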
+## Standalone Search
+
+Search the web, X/Twitter, and news without using a chat model:
+
+```typescript
+import { LLMClient } from '@blockrun/llm';
+
+const client = new LLMClient();
+
+const result = await client.search('latest AI agent frameworks 2026');
+console.log(result.summary);
+for (const cite of result.citations ?? []) {
+  console.log(` - ${cite}`);
+}
+
+// Filter by source type and date range
+const filtered = await client.search('BlockRun x402', {
+  sources: ['web', 'x'],
+  fromDate: '2026-01-01',
+  maxResults: 5,
+});
+```
+
+## Image Editing (img2img)
+
+Edit existing images with text prompts:
+
+```typescript
+import { LLMClient } from '@blockrun/llm';
+
+const client = new LLMClient();
+
+const result = await client.imageEdit(
+  'Make the sky purple and add northern lights',
+  'data:image/png;base64,...', // base64 or URL
+  { model: 'openai/gpt-image-1' }
+);
+console.log(result.data[0].url);
+```
+
 ## Testnet Usage
 
 For development and testing without real USDC, use the testnet:
