A peer-to-peer network for sharing spare inference capacity. Think BitTorrent for intelligence, or SETI@home for inference.
┌──────────────────────────────────────────────────────────────────────────┐
│                             TIM MESH NETWORK                             │
├──────────────────────────────────────────────────────────────────────────┤
│                                                                          │
│   ┌──────────┐         ┌──────────┐         ┌──────────┐                 │
│   │ Consumer │◄───────►│  Relay   │◄───────►│ Consumer │                 │
│   │  Node A  │         │  Node R  │         │  Node B  │                 │
│   └────┬─────┘         └────┬─────┘         └────┬─────┘                 │
│        │                    │                    │                       │
│        │ GossipSub          │ GossipSub          │                       │
│        │ (capacity)         │ (capacity)         │                       │
│        ▼                    ▼                    ▼                       │
│   ┌────────────────────────────────────────────────────┐                 │
│   │              Kademlia DHT (Discovery)              │                 │
│   │       "Find nodes offering GPT-4o capacity"        │                 │
│   └────────────────────────────────────────────────────┘                 │
│        │                    │                    │                       │
│        ▼                    ▼                    ▼                       │
│   ┌──────────┐         ┌──────────┐         ┌──────────┐                 │
│   │ Provider │         │ Provider │         │ Provider │                 │
│   │ (Ollama) │         │(API Key) │         │ (Claude) │                 │
│   └──────────┘         └──────────┘         └──────────┘                 │
│                                                                          │
│   Transport: libp2p + QUIC (NAT traversal via DCUtR hole punching)       │
│                                                                          │
└──────────────────────────────────────────────────────────────────────────┘
TIM routes inference requests to available nodes across a decentralized mesh:
- Local models (Ollama, LMStudio, llama.cpp)
- API keys (OpenAI, Anthropic, Google)
- Subscription headroom (Claude Max, ChatGPT Pro)
If it can answer a Completions API request, it's a valid source.
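As a sketch of what "valid source" means in practice, the unit of work could be modeled as a single Completions-style request/response pair. Field names follow the OpenAI Chat Completions schema; `InferenceSource` is an illustrative trait name, not part of any spec yet, and serialization is omitted to keep the sketch dependency-free:

```rust
#[derive(Debug, Clone)]
pub struct ChatMessage {
    pub role: String, // "system" | "user" | "assistant"
    pub content: String,
}

#[derive(Debug, Clone)]
pub struct CompletionRequest {
    pub model: String, // e.g. "gpt-4o" or "llama3:8b"
    pub messages: Vec<ChatMessage>,
    pub max_tokens: Option<u32>,
}

#[derive(Debug, Clone)]
pub struct CompletionResponse {
    pub model: String,
    pub content: String,
}

/// Any backend that can answer a completion request is a valid source,
/// whether it fronts a local model, an API key, or subscription headroom.
pub trait InferenceSource {
    /// Which models this source currently offers.
    fn models(&self) -> Vec<String>;
    /// Answer one request; the network never sees multi-turn state.
    fn complete(&self, req: &CompletionRequest) -> Result<CompletionResponse, String>;
}
```

Ollama, an OpenAI API key, and Claude subscription headroom would all sit behind the same trait; the network layer never needs to know which one answered.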
We route inference requests. That's it.
- Client owns context, orchestration, tool execution, sandboxing
- Network owns routing, discovery, reputation
- Unit of work: single request/response pair
- Stateless at the network layer
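A minimal sketch of what statelessness implies at the routing layer, assuming capacity is advertised as a flat list of models per peer. `NodeAd` and `Router` are illustrative names only; real peer selection would presumably also weigh latency and load, not just reputation:

```rust
#[derive(Debug, Clone)]
pub struct NodeAd {
    pub peer_id: String,     // libp2p peer ID in the real system
    pub models: Vec<String>, // capacity advertised via GossipSub
    pub reputation: f64,     // maintained by the network layer
}

pub struct Router {
    pub peers: Vec<NodeAd>,
}

impl Router {
    /// Pick the highest-reputation peer offering the requested model.
    /// No session state is kept: every request/response pair routes fresh.
    pub fn route(&self, model: &str) -> Option<&NodeAd> {
        self.peers
            .iter()
            .filter(|p| p.models.iter().any(|m| m == model))
            .max_by(|a, b| a.reputation.partial_cmp(&b.reputation).unwrap())
    }
}
```

Because each request routes independently, two consecutive requests from the same client may land on different providers; any conversation state lives entirely in the client-owned context.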
Specification phase. No implementation yet.
See specs/tim-overview.md for the full specification.
- Language: Rust
- Transport: libp2p + QUIC
- Discovery: Kademlia DHT + GossipSub
- API: OpenAI Completions API compatible
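To illustrate the discovery side: Kademlia locates providers by XOR distance between a key (here, a hash of the model name) and peer IDs. A stdlib-only sketch with 64-bit keys; the real system would use libp2p's kad implementation with full-width keys, so these function names and sizes are assumptions for illustration:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Map a model name to a DHT key (toy 64-bit version).
fn dht_key(model: &str) -> u64 {
    let mut h = DefaultHasher::new();
    model.hash(&mut h);
    h.finish()
}

/// XOR distance: the Kademlia closeness metric.
fn distance(a: u64, b: u64) -> u64 {
    a ^ b
}

/// Return peer IDs sorted closest-first to the key for `model`.
fn closest_peers(model: &str, peers: &[u64]) -> Vec<u64> {
    let key = dht_key(model);
    let mut sorted: Vec<u64> = peers.to_vec();
    sorted.sort_by_key(|p| distance(*p, key));
    sorted
}
```

A consumer asking "find nodes offering GPT-4o capacity" would walk the DHT toward `dht_key("gpt-4o")`, while GossipSub keeps the capacity advertisements at those peers fresh.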
