Your AI tools don't share a brain. Tribal Memory gives them one.
One memory store, many agents. Teach Claude Code something — Codex already knows it. That's not just persistence — it's cross-agent intelligence.
Claude Code stores architecture decisions → Codex recalls them instantly
Every AI coding assistant starts fresh. Claude Code doesn't know what you told Codex. Codex doesn't know what you told Claude. You repeat yourself constantly.
Tribal Memory is a shared memory server that any AI agent can connect to via MCP. Store a memory from one agent, recall it from another. It just works.
```bash
pip install tribalmemory  # or: uv tool install tribalmemory
```

Zero cloud. Zero API keys. Everything runs locally.
```bash
pip install tribalmemory
tribalmemory init
tribalmemory serve
```

That's it. No config editing required.
On first run, FastEmbed downloads a ~130MB ONNX model. After that, embeddings are instant and fully offline.
Server runs on http://localhost:18790.
Keep the server running in the background with automatic restarts:
```bash
# Install and start the service (systemd on Linux, launchd on macOS)
tribalmemory service install

# Check status
tribalmemory service status

# View logs
tribalmemory service logs

# Stop and remove
tribalmemory service uninstall
```

User-level services only; no root/sudo required.
Tribal Memory connects to AI agents via MCP (Model Context Protocol). Set up one or more of these:
```bash
# Auto-configure (recommended)
tribalmemory init --claude-code
```

Or manually, add to `~/.claude.json`:
```json
{
  "mcpServers": {
    "tribal-memory": {
      "command": "tribalmemory-mcp"
    }
  }
}
```

By default, Claude Code has the MCP tools available but won't use them unless you ask. Add `--auto-capture` to make Claude Code proactively store and recall memories:
```bash
tribalmemory init --claude-code --auto-capture
```

This appends instructions to `~/.claude/CLAUDE.md` that tell Claude Code to:
- Auto-recall relevant memories at the start of each conversation
- Auto-store important decisions, architecture choices, and key facts
- Use `tribal_remember` and `tribal_recall` without being explicitly asked
Without `--auto-capture`, you can still use memory manually by saying "remember that..." or "what do you know about...".
Now Claude Code has persistent memory across sessions:
```
You: Remember that the auth service uses JWT with RS256
Claude: ✅ Stored.

--- next session ---

You: How does the auth service work?
Claude: Based on my memory, the auth service uses JWT with RS256...
```
```bash
# Auto-configure (recommended — resolves the full binary path automatically)
tribalmemory init --claude-desktop
```

Claude Desktop doesn't inherit your shell PATH, so the bare command `tribalmemory-mcp` won't work. The init flag resolves the absolute path and writes it to `claude_desktop_config.json` for you.
```bash
# Configure Claude Code CLI and Claude Desktop together
tribalmemory init --claude-code --claude-desktop
```

The Codex CLI and desktop app share the same config file (`~/.codex/config.toml`), so one command sets up both:
```bash
# Auto-configure (recommended — works for both CLI and desktop app)
tribalmemory init --codex
```

Or manually, add to `~/.codex/config.toml`:
```toml
[mcp_servers.tribal-memory]
command = "tribalmemory-mcp"
```

Note: The init flag resolves the full binary path automatically, so the desktop app finds the command even if it doesn't inherit your shell PATH.
That's it. Codex now shares the same memory store as Claude Code. Memories stored by one are instantly available to the other.
Auto-capture works for Codex too; it writes instructions to `~/.codex/AGENTS.md`:

```bash
tribalmemory init --codex --auto-capture
```

You can also set up everything at once:

```bash
# Configure all agents + auto-capture + background service
tribalmemory init --claude-code --codex --auto-capture --service
```

One command: MCP configured for both agents, auto-capture enabled, server running as a service.
Tribal Memory includes a plugin for OpenClaw:
```bash
openclaw plugins install ./extensions/memory-tribal
openclaw config set plugins.slots.memory=memory-tribal
```

How memories are saved:
- Automatically — Memories are captured when the agent responds (preferences, decisions, key facts)
- On demand — Use `/remember <thing to remember>` for immediate storage

```
/remember Joe's birthday is March 15
/remember Always use TypeScript for new projects
```
A hosted Tribal Memory service for teams is planned: no server management, automatic syncing across machines. Star the repo for updates.
Run the interactive demo to see Tribal Memory in action:
```bash
./demo.sh
```

See `docs/demo-output.md` for sample output.
Generated by `tribalmemory init`. Lives at `~/.tribal-memory/config.yaml`:

```yaml
instance_id: my-agent
embedding:
  provider: fastembed
  model: BAAI/bge-small-en-v1.5
  dimensions: 384
db:
  provider: lancedb
  path: ~/.tribal-memory/lancedb
server:
  host: 127.0.0.1
  port: 18790
search:
  lazy_spacy: true  # 70x faster ingest (default: true)
```

For better recall on personal conversations (finding people, places, dates), install spaCy:
```bash
pip install "tribalmemory[spacy]"
python -m spacy download en_core_web_sm
```

With lazy spaCy (default), entity extraction is blazing fast:
- Ingest: Uses fast regex patterns (~2-3 seconds per conversation)
- Recall: Runs spaCy NER once on your query for accurate entity matching
This gives you the best of both worlds — fast ingestion AND accurate retrieval for personal conversations.
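The two-phase pattern above can be sketched roughly like this. Everything here is illustrative: the function names, regex patterns, and fallback behavior are assumptions, not Tribal Memory's actual internals.

```python
import re

# Ingest path: cheap regex patterns stand in for full NER at write time.
# These patterns are illustrative, not the library's actual rules.
CAPITALIZED = re.compile(r"\b[A-Z][a-z]+(?:\s[A-Z][a-z]+)*\b")
DATE = re.compile(r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\s\d{1,2}\b")

def fast_ingest_entities(text: str) -> set[str]:
    """Cheap, regex-only entity guesses used at ingest time."""
    return set(CAPITALIZED.findall(text)) | set(DATE.findall(text))

def recall_entities(query: str) -> set[str]:
    """At recall time, run real NER once on the (short) query."""
    try:
        import spacy
        nlp = spacy.load("en_core_web_sm")
        return {ent.text for ent in nlp(query).ents}
    except (ImportError, OSError):
        # spaCy or its model not installed: fall back to the fast path
        return fast_ingest_entities(query)

print(fast_ingest_entities("Joe's birthday is March 15"))
```

Running expensive NER only on the short query (not on every ingested conversation) is what makes the trade-off cheap.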
| Variable | Description |
|---|---|
| `TRIBAL_MEMORY_CONFIG` | Path to config file (default: `~/.tribal-memory/config.yaml`) |
| `TRIBAL_MEMORY_INSTANCE_ID` | Override instance ID |
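A minimal sketch of the usual precedence for such an override — the environment variable wins over the default path. The resolution logic here is an assumption for illustration, not Tribal Memory's actual code:

```python
import os
from pathlib import Path

DEFAULT_CONFIG = Path("~/.tribal-memory/config.yaml")

def resolve_config_path() -> Path:
    """TRIBAL_MEMORY_CONFIG wins if set; otherwise fall back to the default."""
    override = os.environ.get("TRIBAL_MEMORY_CONFIG")
    if override:
        return Path(override).expanduser()
    return DEFAULT_CONFIG.expanduser()

os.environ["TRIBAL_MEMORY_CONFIG"] = "/tmp/custom.yaml"
print(resolve_config_path())  # /tmp/custom.yaml
```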
```bash
docker compose up -d
```

Mount a custom `config.yaml` to change the embedding model or dimensions. See `docker-compose.yml` for all options.
All endpoints are under the /v1 prefix.
```bash
# Store a memory
curl -X POST http://localhost:18790/v1/remember \
  -H "Content-Type: application/json" \
  -d '{"content": "The database uses Postgres 16", "tags": ["infra"]}'

# Batch store (up to 1000 memories)
curl -X POST http://localhost:18790/v1/remember/batch \
  -H "Content-Type: application/json" \
  -d '{"memories": [
    {"content": "Auth uses JWT with RS256"},
    {"content": "Database is Postgres 16", "tags": ["infra"]}
  ]}'

# Search memories (auto-parses dates from query)
curl -X POST http://localhost:18790/v1/recall \
  -H "Content-Type: application/json" \
  -d '{"query": "what did we discuss last week?", "limit": 5}'

# Search with explicit temporal filter
curl -X POST http://localhost:18790/v1/recall \
  -H "Content-Type: application/json" \
  -d '{"query": "database decisions", "after": "2026-01-01", "limit": 5}'

# Health check
curl http://localhost:18790/v1/health

# Get stats
curl http://localhost:18790/v1/stats
```

When connected via MCP, your AI gets these tools:
| Tool | Description |
|---|---|
| `tribal_store` | Store a new memory with deduplication |
| `tribal_recall` | Search memories (vector + graph expansion) |
| `tribal_recall_entity` | Query by entity name with hop traversal |
| `tribal_entity_graph` | Explore entity relationships |
| `tribal_correct` | Update/correct an existing memory |
| `tribal_forget` | Delete a memory |
| `tribal_stats` | Get memory statistics |
| `tribal_export` | Export memories to portable JSON |
| `tribal_import` | Import memories from a bundle |
| `tribal_sessions_ingest` | Index conversation transcripts |
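The `/v1` HTTP API shown earlier can also be called from any HTTP client. A minimal stdlib sketch — it only builds the requests; sending them assumes a server from `tribalmemory serve` is running on the default port:

```python
import json
import urllib.request

BASE = "http://localhost:18790/v1"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build a POST request against the Tribal Memory HTTP API."""
    return urllib.request.Request(
        f"{BASE}/{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

remember = build_request("remember", {"content": "The database uses Postgres 16", "tags": ["infra"]})
recall = build_request("recall", {"query": "which database?", "limit": 5})

# With the server running, send them like this:
# with urllib.request.urlopen(remember) as resp:
#     print(json.load(resp))
print(remember.full_url)  # http://localhost:18790/v1/remember
```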
You can also use the library directly from Python:

```python
from tribalmemory.services import create_memory_service

# FastEmbed uses BAAI/bge-small-en-v1.5 (384 dims) by default
service = create_memory_service(
    instance_id="my-agent",
    db_path="./memories",
)

# Store
result = await service.remember(
    "User prefers TypeScript for web projects",
    tags=["preference", "coding"]
)

# Recall
results = await service.recall("What language for web?")
for r in results:
    print(f"{r.similarity_score:.2f}: {r.memory.content}")

# Correct
await service.correct(
    original_id=result.memory_id,
    corrected_content="User prefers TypeScript for web, Python for scripts"
)
```

```
┌─────────────┐
│ Claude Code │──── MCP ────┐
└─────────────┘             │
┌─────────────┐             ▼
│ Codex CLI   │──── MCP ───► Tribal Memory Server
└─────────────┘             ▲   (localhost:18790)
┌─────────────┐             │
│ OpenClaw    │── plugin ───┘
└─────────────┘
```
The server is the single source of truth. Each agent connects as an instance. Memories are tagged with `source_instance` so you can see who learned what.
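A toy illustration of that tagging. The `Memory` class and its field names here are assumptions for illustration, not Tribal Memory's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    source_instance: str  # which agent stored this memory

store = [
    Memory("Auth uses JWT with RS256", source_instance="claude-code"),
    Memory("Database is Postgres 16", source_instance="codex"),
]

# Group memories by the agent that learned them
by_agent: dict[str, list[str]] = {}
for m in store:
    by_agent.setdefault(m.source_instance, []).append(m.content)

print(by_agent)
# {'claude-code': ['Auth uses JWT with RS256'], 'codex': ['Database is Postgres 16']}
```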
- Semantic search — Find memories by meaning, not keywords
- Cross-agent sharing — Memories from one agent are available to all
- Episode memories — Multi-session narratives with auto-summarization
- Graph search — Entity extraction + relationship traversal
- Graph visualization — Built-in web UI to explore your knowledge graph at `/graph`
- Hybrid retrieval — Vector + BM25 keyword search combined
- Zero cloud — Local ONNX embeddings via FastEmbed, no API keys needed
- Batch ingestion — Store up to 1000 memories in a single request
- Auto-temporal queries — "What happened last week?" auto-parses dates
- Session indexing — Index conversation transcripts for search
- Automatic deduplication — Won't store the same thing twice
- Memory corrections — Update outdated information with audit trail
- Temporal reasoning — Date extraction and time-based filtering
- Import/export — Portable JSON bundles with embedding metadata
- Token budgets — Smart context management to avoid LLM overload
- MCP server — Native integration with Claude Code, Codex, and more
- Benchmark tested — 100% accuracy on LoCoMo (1986 questions, all categories)
Episodes group related memories spanning multiple sessions into cohesive narratives with auto-generated summaries.
- Narrative continuity: Projects unfold across days/weeks — episodes capture the full story
- Automatic detection: Write-time hybrid detection (embedding similarity + LLM classification)
- Progressive summarization: Summaries update incrementally as you work
- Surfaces in recall: Episode summaries are indexed memories that show up in standard `recall()` queries
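The write-time detection can be sketched as a similarity gate. The threshold, function names, and control flow here are assumptions for illustration; per the description above, the real system combines this embedding check with an LLM classifier:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Illustrative threshold, not Tribal Memory's actual value
EPISODE_THRESHOLD = 0.8

def belongs_to_episode(new_embedding: list[float], episode_centroid: list[float]) -> bool:
    """Cheap embedding gate; a hybrid detector would consult an LLM near the boundary."""
    return cosine(new_embedding, episode_centroid) >= EPISODE_THRESHOLD

print(belongs_to_episode([1.0, 0.0], [0.9, 0.1]))  # True (vectors nearly aligned)
```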
Add to `~/.tribal-memory/config.yaml`:

```yaml
episodes:
  enabled: true
  summarizer_model: gpt-4o-mini
  summarizer_provider: openai
```

Set your API key in `~/.tribal-memory/.env`:
```bash
OPENAI_API_KEY=sk-...           # For OpenAI
ANTHROPIC_API_KEY=sk-ant-...    # For Anthropic
OLLAMA_BASE_URL=http://...      # For Ollama (local, no key needed)
```

Restart the server:

```bash
tribalmemory serve
```

Full documentation: `docs/episode-memories.md` — includes MCP tools, HTTP API, configuration reference, and troubleshooting.
Zero data leaves your machine:
- Embeddings computed locally (FastEmbed + ONNX runtime)
- Memories stored locally in LanceDB
- No API keys, no cloud services, no telemetry
```bash
git clone https://github.com/abbudjoe/TribalMemory.git
cd TribalMemory
pip install -e ".[dev]"

# Run tests
PYTHONPATH=src pytest

# Run linting
ruff check .
black --check .
```

Business Source License 1.1 — see LICENSE