A guard dog that makes AI agents follow the rules.
A policy enforcement layer for AI agents, yielding better performance, reliability, and security without consuming model context.
- Deterministic rule-following for your agents.
- Better performance by moving rules out of context and into guarantees.
- Trigger alerts when agents repeatedly violate rules.
- LLM-as-a-judge for more dynamic governance.
Cupcake intercepts agent tool calls and evaluates them against user-defined rules written in Open Policy Agent (OPA) Rego. Agent actions can be blocked or auto-corrected. Additional benefits include reactive automation for tasks you don't need to rely on the agent to perform (like linting after a file edit).
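To give a flavor of what a rule looks like, here is a minimal Rego sketch. The package name and input fields (`tool_name`, `tool_input`) are illustrative assumptions modeled on Claude Code's hook payload, not a canonical Cupcake schema; see the Writing Policies guide for the actual conventions.

```rego
package cupcake.policies.claude

import rego.v1

# Illustrative rule: block a destructive shell command before it runs.
# The input fields below are assumptions, not Cupcake's fixed schema.
deny contains msg if {
    input.tool_name == "Bash"
    contains(input.tool_input.command, "rm -rf")
    msg := "Blocked: destructive rm -rf commands are not allowed"
}
```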
Cupcake provides native integrations for multiple AI coding agents:
| Harness | Status | Integration Guide |
|---|---|---|
| Claude Code | ✅ Fully Supported | Setup Guide |
| Cursor | ✅ Fully Supported | Setup Guide |
| Factory AI | ✅ Fully Supported | Setup Guide |
| OpenCode | ✅ Fully Supported | Setup Guide |
| Gemini CLI | Coming soon | Awaiting PR |
Each harness uses native event formats—no normalization layer. Policies are physically separated by harness (policies/claude/, policies/cursor/, policies/factory/, policies/opencode/) to ensure clarity and full access to harness-specific capabilities.
Cupcake can be embedded in Python or JavaScript agent applications through native bindings. This enables integration with web-based agent frameworks like LangChain, Google ADK, NVIDIA NIM, Vercel AI SDK, and more.
| Language | Binding |
|---|---|
| Python | ./cupcake-py |
| TypeScript | ./cupcake-ts |
Modern agents are powerful but inconsistent at following operational and security rules, especially as context grows. Cupcake turns the rules you already maintain (e.g., CLAUDE.md, AGENT.md, .cursor/rules) into enforceable guardrails that run before actions execute.
- Multi-harness support with first‑class integrations for Claude Code, Cursor, Factory AI, and OpenCode.
- Governance‑as‑code using OPA/Rego compiled to WebAssembly for fast, sandboxed evaluation.
- Enterprise‑ready controls: allow/deny/review, enriched audit trails for AI SOCs, and proactive warnings.
- Granular Tool Control: Prevent specific tools or arguments (e.g., blocking `rm -rf /`).
- MCP Support: Native governance for Model Context Protocol tools (e.g., `mcp__memory__*`, `mcp__github__*`); a sketch follows this list.
- LLM‑as‑Judge: Use a secondary LLM or agent to evaluate actions for more dynamic oversight.
- Guardrail Libraries: First‑class integrations with NeMo and Invariant for content and safety checks.
- Observability: All inputs, signals, and decisions generate structured logs and evaluation traces for debugging.
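As a sketch of the MCP support mentioned above, the rule below routes writes to a shared memory server to human review. The `mcp__<server>__*` naming follows the convention in the list; the `require_review` decision verb is an assumption, not confirmed Cupcake vocabulary.

```rego
package cupcake.policies.claude

import rego.v1

# Illustrative MCP rule: pause memory-server operations for approval.
# `require_review` is an assumed decision name; check the policy docs.
require_review contains msg if {
    startswith(input.tool_name, "mcp__memory__")
    msg := "MCP memory operations require human approval"
}
```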
Cupcake acts as an enforcement layer between your coding agents and their runtime environment via hooks directly in the agent action path.
Agent → (proposed action) → Cupcake → (policy decision) → Agent runtime
- Interception: The agent prepares to execute an action/tool call (e.g., `git push`, `fs_write`).
- Enrichment: Cupcake gathers real-time Signals (facts from the environment such as the current Git branch, CI status, or database metadata).
- Evaluation: The action and signals are packaged into a JSON input and evaluated against your Wasm policies in milliseconds; a signal-aware example follows this list.
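The sketch below shows what a signal-aware rule might look like, under the same caveats as the earlier example: the `input.signals` layout (`git_branch`, `ci_status`) is an assumption standing in for whatever your signal configuration exposes.

```rego
package cupcake.policies.claude

import rego.v1

# Illustrative signal-aware rule: block `git push` while CI is failing.
# The input.signals fields are assumptions defined by your own signals.
deny contains msg if {
    input.tool_name == "Bash"
    contains(input.tool_input.command, "git push")
    input.signals.ci_status != "passing"
    msg := "Tests must pass before pushing"
}
```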
Cupcake supports two evaluation models:
- Deterministic Policies: Policies are written in OPA/Rego and compiled to WebAssembly (Wasm) for fast, sandboxed evaluation. See the Writing Policies guide for implementation details.
- LLM‑as‑Judge: For more dynamic oversight, Cupcake can defer to a secondary LLM or agent to evaluate how an action should proceed. See the Cupcake Watchdog guide for implementation details.
Based on the evaluation, Cupcake returns one of four decisions to the agent runtime, along with a human-readable message:
- Allow: The action proceeds. Optionally, Cupcake can inject Context (e.g., "Remember: you're on the main branch") to guide subsequent behavior without blocking; a sketch follows this list. Note: Context injection is currently supported in Claude Code but not Cursor.
- Block: The action is stopped. Cupcake sends Feedback explaining why it was blocked (e.g., "Tests must pass before pushing"), allowing the agent to self-correct.
- Warn: The action proceeds, but a warning is logged or displayed.
- Require Review: The action pauses until a human approves it.
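As a sketch of the context injection described in the Allow decision: the rule name `add_context` and the signal layout are assumptions for illustration; the actual decision vocabulary is documented in the Writing Policies guide.

```rego
package cupcake.policies.claude

import rego.v1

# Illustrative context injection: allow the action but remind the agent
# which branch it is on. `add_context` is an assumed decision name.
add_context contains msg if {
    input.signals.git_branch == "main"
    msg := "Remember: you're on the main branch"
}
```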
- Sandboxed evaluation of untrusted inputs.
- Allow‑by‑default or deny‑by‑default modes configurable per project.
- No secret ingestion by default; policies can only read what signals expose.
- Auditability through logs and optional review workflows.
See the full Security Model.
Does Cupcake consume prompt/context tokens? No. Policies run outside the model and return structured decisions.
Is Cupcake tied to a specific model? No. Cupcake supports multiple AI coding agents with harness-specific integrations.
How fast is evaluation? Sub‑millisecond for cached policies in typical setups.
We welcome contributions! See CONTRIBUTING.md for guidelines.
Cupcake is developed by EQTYLab, with agentic safety research support by Trail of Bits.
