A high-performance data access gateway that unifies heterogeneous backends behind a single API
Inspired by Netflix's Data Gateway, built for extreme performance and developer experience.
graph TB
APP[Your Application<br/>Python/Go/Rust] --> PROXY[Prism Gateway<br/>Unified gRPC API]
PROXY --> BACKENDS[Pluggable Backends<br/>Kafka • NATS • Redis • PostgreSQL • SQLite]
click PROXY "https://jrepp.github.io/prism-data-layer/docs/intro" "Introduction"
click BACKENDS "https://jrepp.github.io/prism-data-layer/adr" "Architecture Decisions"
style PROXY fill:#f96,stroke:#333,stroke-width:2px
style BACKENDS fill:#9f6,stroke:#333,stroke-width:2px
One API. Any backend. Zero application changes.
📖 Documentation • User Guide • Introduction • ADRs • RFCs
Rust vs Netflix's Java/Spring Boot: Delivers 16x better P50 latency (5ms → 0.3ms), 10x throughput (20k → 200k RPS), and 25x less memory (500MB → 20MB). No GC pauses means predictable tail latency for production workloads.
Client-originated configuration: Declare capacity needs in protobuf annotations; Prism auto-provisions backends. Reduces provisioning from days/weeks to minutes while eliminating coordination bottlenecks between app and infra teams.
Local-first testing: Run Kafka, PostgreSQL, NATS locally in Docker. Same test suite runs on laptop and CI. Fast feedback loop (sub-second tests) without cloud dependencies or expensive mock maintenance.
Start the local stack and interact with the KeyValue pattern:
# Start Prism with Redis, PostgreSQL, NATS backends
task test:infra-up
# Set a value using grpcurl
grpcurl -plaintext \
-d '{"namespace": "demo", "key": "greeting", "value": "Hello Prism"}' \
localhost:50051 prism.v1.KeyValueService/Set
# Get the value back
grpcurl -plaintext \
-d '{"namespace": "demo", "key": "greeting"}' \
localhost:50051 prism.v1.KeyValueService/Get
Output:
{
"value": "Hello Prism",
"version": "1",
"metadata": {
"backend": "redis",
"latency_ms": 2
}
}
Stop infrastructure: task test:infra-down
See BUILDING.md for complete setup and development workflow.
Pattern acceptance tests run as matrix jobs in the CI workflow:
- KeyValue: MemStore, Redis, PostgreSQL backends
- Producer: NATS (stateless/stateful variants)
- Consumer: NATS (stateless/stateful/DLQ variants)
- ClaimCheck: NATS + MinIO integration
- Unified: Producer/Consumer integration tests
Note: GitHub Actions badges query overall workflow status. See latest CI runs for individual pattern test results.
Prism sits between applications and data backends, providing:
- Unified API: Single gRPC/HTTP interface to multiple backends (Kafka, NATS, Redis, PostgreSQL, SQLite)
- Client Configuration: Declare data access patterns; Prism handles provisioning and optimization
- Zero-Downtime Migrations: Shadow traffic and declarative configuration enable seamless backend changes
- Built-in Observability: Structured logging, metrics, and distributed tracing out of the box
- Type Safety: Protobuf-first design with code generation for all components
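To make the unified API concrete, here is a minimal sketch of a Go client issuing the same Set/Get calls as the quick start above. The import path and generated stub names (prismv1, NewKeyValueServiceClient, SetRequest, GetRequest) are assumptions inferred from the grpcurl example, not the project's actual generated code:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Hypothetical import path for stubs generated from proto/.
	prismv1 "github.com/jrepp/prism-data-layer/proto/gen/prism/v1"
)

func main() {
	// Connect to the local Prism gateway started by `task test:infra-up`.
	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	kv := prismv1.NewKeyValueServiceClient(conn) // assumed generated stub

	// Same calls as the grpcurl quick start; the backend behind the
	// namespace is invisible to the client.
	ctx := context.Background()
	if _, err := kv.Set(ctx, &prismv1.SetRequest{
		Namespace: "demo", Key: "greeting", Value: "Hello Prism",
	}); err != nil {
		log.Fatal(err)
	}

	resp, err := kv.Get(ctx, &prismv1.GetRequest{Namespace: "demo", Key: "greeting"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("got %q", resp.Value)
}
```

Swapping Redis for PostgreSQL behind the "demo" namespace would leave this client unchanged.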
Prism takes Netflix's proven architecture and enhances it for modern cloud-native environments:
| Aspect | Netflix Data Gateway | Prism | Impact |
|---|---|---|---|
| Proxy Runtime | Java/Spring Boot | Rust | 16x better latency, no GC pauses |
| Provisioning | Manual with infra team | Client-originated config | Days → minutes |
| Testing | Cloud-based integration tests | Local Docker + real backends | Instant feedback, zero cloud cost |
| Plugin System | Java plugins | Go plugins with WebAssembly roadmap | Language flexibility |
| Configuration | Declarative YAML | Protobuf annotations | Type-safe, code-generated |
Netflix proved the architecture at scale. Prism makes it accessible to everyone.
graph TB
subgraph "Client Applications"
APP1[App 1<br/>Python Client]
APP2[App 2<br/>Go Client]
APP3[App 3<br/>Rust Client]
end
subgraph "Prism Gateway (Rust)"
PROXY[Prism Proxy<br/>gRPC/HTTP Server]
AUTH[Authentication<br/>OIDC/mTLS]
ROUTER[Pattern Router<br/>Request Routing]
CACHE[Response Cache<br/>Optional]
end
subgraph "Pattern Plugins (Go)"
MEMSTORE[MemStore<br/>KeyValue]
REDIS[Redis<br/>KeyValue]
NATS[NATS<br/>PubSub]
KAFKA[Kafka<br/>Streaming]
POSTGRES[PostgreSQL<br/>Relational]
end
subgraph "Backend Infrastructure"
REDIS_BE[(Redis)]
NATS_BE[(NATS)]
KAFKA_BE[(Kafka)]
PG_BE[(PostgreSQL)]
end
APP1 & APP2 & APP3 --> PROXY
PROXY --> AUTH
AUTH --> ROUTER
ROUTER --> CACHE
CACHE --> MEMSTORE & REDIS & NATS & KAFKA & POSTGRES
REDIS --> REDIS_BE
NATS --> NATS_BE
KAFKA --> KAFKA_BE
POSTGRES --> PG_BE
style PROXY fill:#f96,stroke:#333,stroke-width:2px
style MEMSTORE fill:#9f6,stroke:#333,stroke-width:2px
style REDIS fill:#9f6,stroke:#333,stroke-width:2px
style NATS fill:#9f6,stroke:#333,stroke-width:2px
style KAFKA fill:#9f6,stroke:#333,stroke-width:2px
style POSTGRES fill:#9f6,stroke:#333,stroke-width:2px
prism/
├── Taskfile.yml # Task build system (run 'task --list')
├── testing/
│ └── Taskfile.yml # Test infrastructure (task test:help)
├── BUILDING.md # Build and test documentation
├── CLAUDE.md # Project philosophy and guidelines
├── proxy/ # Rust gateway (core of Prism)
├── patterns/ # Go backend patterns (pluggable)
│ ├── core/ # Shared pattern SDK
│ ├── memstore/ # In-memory key-value pattern
│ └── ...
├── proto/ # Protobuf definitions (source of truth)
├── tooling/ # Python utilities (validation, deployment)
├── docs-cms/ # Documentation source (ADRs, RFCs, memos)
├── docusaurus/ # Documentation site configuration
└── docs/ # Built documentation (GitHub Pages)
- Rust 1.70+ (for proxy)
- Go 1.21+ (for patterns)
- Python 3.10+ with uv (for tooling)
- Protocol Buffers compiler (protoc)
- Node.js 18+ (for documentation)
- Task build system
Install development tools: task install-tools
# Build everything (default target)
task
# Build in debug mode (faster)
task dev
# Build specific components
task proxy
task build-cmds
task patterns
# Run all unit tests (fast, no infrastructure)
task test:unit-all
# Run integration tests (with infrastructure)
task test:test-integration-all
# Run acceptance tests (patterns against real backends)
task test:test-acceptance-all
# Run everything
task test:all
# Start/stop test infrastructure manually
task test:infra-up
task test:infra-down
task test:infra-status
See testing/README.md for comprehensive test documentation.
Prism uses comprehensive parallel linting for maximum speed and code quality:
# Run all linters in parallel (fastest!)
task lint-parallel
# Run critical linters only (fast feedback)
task lint-go-fast
# Auto-fix issues
task lint-fix
# List all available tasks
task --list
45+ Go linters across 10 categories run in parallel (3-4s vs 45+ min sequential). See MEMO-021 for details.
See BUILDING.md for complete documentation on building, testing, and development workflow.
Instead of manual provisioning:
message UserEvents {
option (prism.access_pattern) = "append_heavy";
option (prism.estimated_rps) = "10000";
option (prism.retention_days) = "90";
}
Prism automatically:
- Selects optimal backend (Kafka for append-heavy)
- Provisions capacity for 10k RPS
- Configures retention policies
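As a sketch of what "automatically" could look like under the hood, the snippet below reads those message options via Go's protoreflect API and derives a provisioning plan. The prismpb package and its E_AccessPattern/E_EstimatedRps/E_RetentionDays extension descriptors are hypothetical stand-ins for whatever the codegen in proto/ actually emits:

```go
package provisioner

import (
	"strconv"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/types/descriptorpb"

	// Hypothetical generated package holding the prism.* option extensions.
	prismpb "github.com/jrepp/prism-data-layer/proto/gen/prism"
)

// Plan is what Prism would provision for an annotated message type.
type Plan struct {
	Backend       string
	TargetRPS     int
	RetentionDays int
}

// PlanFor inspects a message descriptor's options and picks a backend.
func PlanFor(md protoreflect.MessageDescriptor) Plan {
	opts := md.Options().(*descriptorpb.MessageOptions)

	// proto.GetExtension surfaces the option values declared in the schema.
	pattern := proto.GetExtension(opts, prismpb.E_AccessPattern).(string)
	rps, _ := strconv.Atoi(proto.GetExtension(opts, prismpb.E_EstimatedRps).(string))
	days, _ := strconv.Atoi(proto.GetExtension(opts, prismpb.E_RetentionDays).(string))

	plan := Plan{TargetRPS: rps, RetentionDays: days}
	switch pattern {
	case "append_heavy":
		plan.Backend = "kafka" // append-heavy workloads map to a log-structured backend
	default:
		plan.Backend = "postgres"
	}
	return plan
}
```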
- mTLS by default: All inter-service communication encrypted
- PII tagging: Automatic handling of sensitive data
- Audit logging: Track all data access
- Fine-grained AuthZ: Per-namespace policies
message User {
string id = 1;
string email = 2 [(prism.pii) = "email"]; // Auto-encrypted
string name = 3 [(prism.pii) = "name"]; // Auto-masked in logs
}
Each backend pattern is a self-contained Go module:
patterns/
├── core/ # Shared pattern SDK
├── memstore/ # In-memory key-value (testing)
├── redis/ # Redis backend
├── nats/ # Lightweight messaging
├── kafka/ # Event streaming
├── postgres/ # Relational data
└── ... # More backends coming
Adding a new backend? Implement the pattern interfaces and register with the SDK.
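For a feel of the shape involved, here is a hedged sketch of a minimal in-memory pattern, similar in spirit to memstore. The KeyValuePattern interface below is an assumed stand-in for whatever patterns/core actually defines; the real SDK interfaces and registration hooks live in that module:

```go
package mybackend

import (
	"context"
	"sync"
)

// KeyValuePattern is what we assume the patterns/core SDK expects of a
// key-value backend (assumption, not the actual interface).
type KeyValuePattern interface {
	Set(ctx context.Context, namespace, key string, value []byte) error
	Get(ctx context.Context, namespace, key string) ([]byte, error)
}

// Store is a trivial in-process implementation guarded by a RWMutex.
type Store struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func New() *Store {
	return &Store{data: make(map[string][]byte)}
}

func (s *Store) Set(ctx context.Context, namespace, key string, value []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[namespace+"/"+key] = value
	return nil
}

func (s *Store) Get(ctx context.Context, namespace, key string) ([]byte, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.data[namespace+"/"+key], nil
}
```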
- BUILDING.md: Build, test, and development workflow
- testing/README.md: Comprehensive testing guide
- CLAUDE.md: Project philosophy and guidelines
- Architecture Decision Records: Design decisions
- RFCs: Technical proposals
- GitHub Pages: Live documentation site
# Using Task (recommended)
task docs-validate
# Or directly with uv
uv run tooling/validate_docs.py
This validates frontmatter, links, and MDX syntax. See CLAUDE.md for details.
Push a version tag to automatically build and publish a release:
git tag -a v1.0.0 -m "Release v1.0.0"
git push origin v1.0.0
This automatically:
- ✅ Builds binaries for Linux, macOS, Windows (amd64/arm64)
- ✅ Builds container images (scratch, distroless, alpine variants)
- ✅ Creates GitHub Release with all artifacts
- ✅ Pushes images to GitHub Container Registry
See .github/workflows/QUICKSTART-RELEASE.md for the complete release guide.
Prism CI/CD workflows can send status notifications via ntfy.sh, a simple, open-source notification service that requires no account creation.
Setup (3 steps, ~2 minutes):
1. Pick a unique topic name (keep it secret!):
   # Use something random and hard to guess
   # Example: prism-ci-x7k9m2p4q8
2. Subscribe to your topic:
   - Mobile: Install the ntfy app (iOS/Android) and subscribe to your topic
   - Desktop: Visit https://ntfy.sh/your-topic-name in your browser
   - CLI: ntfy subscribe your-topic-name
3. Add a variable to the GitHub repository:
   - Go to Settings → Secrets and variables → Actions → Variables tab
   - Click "New repository variable"
   - Add repository variable:
     - Name: NTFY_TOPIC
     - Value: your-topic-name (from step 1)
That's it! The CI workflow will now send notifications for:
- ✅ CI pipeline status (pass/fail with job breakdown)
- 📚 Documentation deployments
- 🔔 Clickable links to workflow runs and deployed docs
Features:
- High priority alerts for failures
- Emoji indicators (✅/❌)
- Links open directly in the notification
- Works on mobile, desktop, and CLI
- Self-hostable (optional)
Note: Notifications are optional. If NTFY_TOPIC is not configured, workflows run normally without sending notifications.
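The same topic can also be pushed to from any HTTP client, since ntfy.sh accepts a plain POST body with optional Title/Priority/Tags headers. A minimal Go sketch (the topic name is a placeholder for the one from step 1):

```go
package main

import (
	"log"
	"net/http"
	"strings"
)

func main() {
	// Send a manual notification to an ntfy.sh topic.
	req, err := http.NewRequest(http.MethodPost,
		"https://ntfy.sh/your-topic-name",
		strings.NewReader("CI pipeline passed"))
	if err != nil {
		log.Fatal(err)
	}
	// Optional metadata headers supported by ntfy.sh.
	req.Header.Set("Title", "Prism CI")
	req.Header.Set("Priority", "high")
	req.Header.Set("Tags", "white_check_mark")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
}
```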
- Rust proxy skeleton with gRPC server
- SQLite backend (simplest for testing)
- Basic KeyValue abstraction
- Protobuf codegen pipeline
- Local testing framework
- Kafka backend with producer/consumer
- NATS backend
- PostgreSQL backend
- Shadow traffic for migrations
- Admin UI basics
- Neptune (AWS) graph backend
- Client-originated configuration
- Auto-scaling and capacity planning
- Comprehensive observability
- Production deployment tooling
- Netflix Data Gateway
- Netflix KV Data Abstraction Layer
- Envoy Proxy
- Linkerd service mesh
[To be determined]
See CLAUDE.md for contribution guidelines and architectural principles.
