Lasso is a smart proxy/router that turns your node infrastructure and RPC providers into a fast, observable, configurable, and resilient multi-chain JSON-RPC layer.
It proxies Ethereum JSON-RPC over HTTP + WebSocket and gives you a single RPC API with expressive routing control (strategies, provider overrides, and profiles).
Route every request to the best available provider for that request, while configuring providers to match your application's needs. Leverage deep redundancy, expressive routing, and built-in observability to improve UX while keeping your application code simple.
🐛 Docs 🐛 Report Bug 💡 Request Feature
- Why Lasso
- Features
- Endpoints
- Quick Start
- Try It
- Configuration
- How it works
- Built with Elixir/OTP
- Documentation
- Contributing
- Security
- Troubleshooting
- License
## Why Lasso

Choosing a single RPC provider has real UX and reliability consequences, but the tradeoffs (latency, uptime, quotas, features, cost, rate limits) are opaque and shift over time. Performance varies by region, method, and hour, and API inconsistencies make a "one URL" setup brittle.
Lasso makes the RPC layer programmable and resilient. It's designed to run as a geo-distributed proxy where RPC requests are routed to the closest Lasso node. Each node independently measures real latencies and health, routing each call to the best provider for that region, chain, method, and transport. You get redundancy without brittle application code, and you can scale throughput by adding providers instead of replatforming.
For multi-region deployments, Lasso nodes form a cluster that aggregates observability data across all regions—giving you unified visibility into provider performance and health without adding latency to the routing hot path.
Different providers excel at different workloads (hot reads vs archival queries vs subscriptions). Lasso lets you express those preferences and enforce them automatically.
## Features

- Multi-provider, multi-chain Ethereum JSON-RPC proxy for HTTP + WebSocket
- Routing strategies: `fastest`, `load-balanced`, `latency-weighted`, plus provider override routes
- Method-aware benchmarking: latency tracked per provider × method × transport
- Resilience: circuit breakers, retries, and transport-aware failover
- WebSocket subscriptions: multiplexing with optional gap-filling via HTTP on upstream failure
- Profiles: isolated configs/state/metrics (dev/staging/prod, multi-tenant, experiments)
- Cluster aggregation: optional BEAM clustering aggregates metrics across geo-distributed nodes with regional drill-down
- LiveView dashboard: provider status, routing decisions, latency metrics, and cluster-wide observability
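To make the strategy names concrete, here is a minimal sketch of a latency-weighted pick. It is written in Python (not Lasso's Elixir), and the function name, provider IDs, and inverse-latency weighting are illustrative assumptions rather than Lasso's actual implementation:

```python
import random

def pick_latency_weighted(providers, latencies_ms):
    """Pick a provider with probability inversely proportional to its
    measured latency. Illustrative sketch only -- Lasso's real weighting
    and benchmark data live in its Elixir internals."""
    weights = [1.0 / max(latencies_ms[p], 1.0) for p in providers]
    return random.choices(providers, weights=weights, k=1)[0]

# Hypothetical measured latencies (ms) for two providers
latencies = {"llamarpc": 40.0, "publicnode": 120.0}
choice = pick_latency_weighted(["llamarpc", "publicnode"], latencies)
```

With these numbers the 40 ms provider is picked roughly three times as often as the 120 ms one, so fast providers absorb most traffic without starving the rest.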
## Endpoints

HTTP (POST):

```
/rpc/:chain                              (default strategy)
/rpc/fastest/:chain
/rpc/load-balanced/:chain
/rpc/latency-weighted/:chain
/rpc/provider/:provider_id/:chain        (provider override)
```

WebSocket:

```
/ws/rpc/:chain
/ws/rpc/:strategy/:chain
/ws/rpc/provider/:provider_id/:chain
```

Profiles (namespaced routing configs):

- HTTP: `/rpc/profile/:profile/...`
- WS: `/ws/rpc/profile/:profile/...`
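As an illustration of how the route patterns compose, here is a small hypothetical client helper (Python) for building endpoint URLs. The helper, its parameters, and the base URL are assumptions for demonstration only, and the exact shape of the `...` segment under profile routes is an assumption as well:

```python
BASE = "http://localhost:4000"  # assumed local Lasso instance

def rpc_url(chain, strategy=None, provider=None, profile=None):
    """Compose a Lasso HTTP endpoint path from the route patterns above.
    Hypothetical convenience helper -- not part of Lasso itself."""
    parts = ["rpc"]
    if profile:
        parts += ["profile", profile]
    if provider:
        parts += ["provider", provider]  # provider override route
    elif strategy:
        parts.append(strategy)          # strategy route
    parts.append(chain)
    return f"{BASE}/" + "/".join(parts)
```

For example, `rpc_url("ethereum", strategy="fastest")` yields `http://localhost:4000/rpc/fastest/ethereum`, matching the strategy route above.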
## Quick Start

Prerequisites:

- Elixir: 1.17+ (check with `elixir --version`)
- Erlang/OTP: 26+ (check with `erl -version`)
- Node.js: 18+ (for asset compilation)
```shell
# Clone the repository
git clone https://github.com/jaxernst/lasso-rpc
cd lasso-rpc

# Install dependencies
mix deps.get

# Start the Phoenix server
mix phx.server
```

The application will be available at http://localhost:4000 and the dashboard at http://localhost:4000/dashboard.
Note: The default profile includes free public providers (no API keys required), so you can start using it immediately.
```shell
# Run with Docker
./run-docker.sh
```

The application will be available at http://localhost:4000.
For production deployments, see the Dockerfile for customization options.
For geo-distributed deployments with aggregated observability:
```shell
# Node 1 (us-east)
export LASSO_NODE_ID=us-east
export CLUSTER_DNS_QUERY="lasso.internal"
mix phx.server

# Node 2 (eu-west)
export LASSO_NODE_ID=eu-west
export CLUSTER_DNS_QUERY="lasso.internal"
mix phx.server
```

With clustering enabled:

- Nodes discover each other via DNS
- The dashboard aggregates metrics across all Lasso nodes (multi-region)
- You can drill down by node/region to compare provider performance
- Each node makes independent routing decisions based on local latency
Note: Clustering is optional. A single node works great standalone.
## Try It

```shell
curl -sS -X POST http://localhost:4000/rpc/ethereum \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```

WebSocket subscription:

```shell
wscat -c ws://localhost:4000/ws/rpc/ethereum
> {"jsonrpc":"2.0","method":"eth_subscribe","params":["newHeads"],"id":1}
```

Return routing metadata with your request:

```shell
curl -sS -X POST 'http://localhost:4000/rpc/ethereum?include_meta=headers' \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' -i | sed -n '1,15p'
```

## Configuration

Profiles live in `config/profiles/*.yml`. Each profile defines chains, providers, routing policy, and limits. `${ENV_VAR}` substitution is supported (and unresolved placeholders will fail startup).
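To illustrate the `${ENV_VAR}` behavior, here is a minimal sketch in Python of substitution that fails loudly on unresolved placeholders. Lasso's actual substitution is internal to its Elixir codebase and may differ in detail:

```python
import os
import re

# Matches ${SOME_VAR} placeholders (uppercase names assumed for illustration)
_PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def substitute_env(text):
    """Replace each ${VAR} with os.environ[VAR]; raise if any VAR is
    unset, mirroring the fail-on-unresolved startup behavior described
    above. Sketch only -- not Lasso's implementation."""
    def repl(match):
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"unresolved placeholder: ${{{name}}}")
        return os.environ[name]
    return _PLACEHOLDER.sub(repl, text)
```

Failing at startup rather than silently passing `${ALCHEMY_KEY}` through as a literal string surfaces misconfiguration immediately instead of as mysterious 401s from providers.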
For the full configuration reference (all supported options + tuning notes), see `config/profiles/default.yml`.

The included `default.yml` profile is configured with free public providers (no API keys required). Start `mix phx.server` and you have a working multi-provider RPC proxy.
Good for:
- Getting started without setting up API keys
- Local development with instant redundancy
- Production fallback when combined with your own nodes
- Testing Lasso's routing before configuring custom providers
Minimal example:
```yaml
# config/profiles/default.yml
---
name: "Default"
slug: "default"
type: "standard"
default_rps_limit: 100
default_burst_limit: 500
---
chains:
  ethereum:
    chain_id: 1
    providers:
      - id: "ethereum_llamarpc"
        name: "LlamaRPC"
        url: "https://eth.llamarpc.com"
        ws_url: "wss://eth.llamarpc.com"
      - id: "ethereum_publicnode"
        name: "PublicNode Ethereum"
        url: "https://ethereum-rpc.publicnode.com"
        ws_url: "wss://ethereum-rpc.publicnode.com"
```

Multiple profiles:
```yaml
# config/profiles/production.yml
name: "Production"
slug: "production"
default_rps_limit: 1000
default_burst_limit: 5000
chains:
  ethereum:
    providers:
      - id: "your_erigon"
        url: "http://your-erigon-node:8545"
        priority: 1
      - id: "alchemy_fallback"
        url: "https://..."
        priority: 2
```

Access it via:

```
POST /rpc/profile/production/ethereum
ws://localhost:4000/ws/rpc/profile/production/ethereum
```
## How it works

Request pipeline:
- Build candidates (method + transport aware)
- Filter unhealthy channels (breakers / health)
- Execute request
- On failure: retry/failover
- Record benchmarking + telemetry
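The steps above can be sketched roughly as follows. This is a Python illustration, not Lasso's Elixir code; `is_healthy`, `send`, and `record` are hypothetical stand-ins for Lasso's breaker/health checks, transport layer, and benchmarking:

```python
def execute_with_failover(request, candidates, is_healthy, send, record):
    """Walk the pipeline: filter out unhealthy channels, try each
    remaining provider in order, record the outcome, and fail over on
    error. Sketch under assumed interfaces, not Lasso's implementation."""
    healthy = [c for c in candidates if is_healthy(c)]  # breaker/health filter
    last_error = None
    for provider in healthy:
        try:
            response = send(provider, request)   # execute request upstream
            record(provider, ok=True)            # benchmarking + telemetry
            return response
        except Exception as err:                 # failure: try next provider
            record(provider, ok=False)
            last_error = err
    raise RuntimeError(f"all providers failed: {last_error}")
```

Recording both successes and failures keeps the latency/health data fresh, which is what lets subsequent routing decisions avoid the provider that just failed.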
For deeper implementation details (supervision tree, BenchmarkStore, capabilities system, streaming internals), start with ARCHITECTURE.md.
## Built with Elixir/OTP

Lasso runs on the BEAM (Erlang VM) to take advantage of its strengths for high-concurrency, failure-prone networking systems.
- Massive concurrency: lightweight processes and message passing make it natural to model per-request, per-provider, and per-connection workflows without shared-memory complexity.
- Fault isolation + self-healing: OTP supervision trees keep failures contained and allow fast restarts, which is ideal when upstream providers are flaky or rate-limited.
- Distributed by design: the runtime supports clustering and remote messaging, making it straightforward to scale Lasso horizontally and keep components decoupled.
- Fast in-memory state: ETS provides efficient shared state for hot-path lookups (routing decisions, benchmarks, breaker state) without turning every read into a bottleneck.
## Documentation

- CONFIGURATION.md - Profile YAML reference, strategies, provider capabilities
- API_REFERENCE.md - HTTP/WebSocket endpoints, headers, errors
- DEPLOYMENT.md - Production deployment, clustering, env vars
- ARCHITECTURE.md - System design + components
- OBSERVABILITY.md - Logging/metrics
- RPC_STANDARDS.md - RPC compliance details
- TESTING.md - Dev workflow
- FUTURE_FEATURES.md - Roadmap
- CHANGELOG.md - Version history
## Contributing

Contributions welcome! Please see the Contributing Guide for details on:
- Development setup
- Code style and quality standards
- Testing requirements
- Pull request process
Before contributing, please:
- Read CONTRIBUTING.md
- Check existing issues and pull requests
- For major changes, open an issue first to discuss your approach
## Security

For security concerns, please review our Security Policy.
## License

You can:

- self-host it freely to make your RPCs better
- modify it to make Lasso better

⚠️ If you run a modified version as a service, you must publish your modifications.

See LICENSE.md for full terms.
Built by jaxer.eth
