# Tribal Memory

Your AI tools don't share a brain. Tribal Memory gives them one.

One memory store, many agents. Teach Claude Code something — Codex already knows it. That's not just persistence — it's cross-agent intelligence.

*One brain, two agents: Claude Code stores architecture decisions → Codex recalls them instantly.*



## Why

Every AI coding assistant starts fresh. Claude Code doesn't know what you told Codex. Codex doesn't know what you told Claude. You repeat yourself constantly.

Tribal Memory is a shared memory server that any AI agent can connect to via MCP. Store a memory from one agent, recall it from another. It just works.


## Install

```bash
pip install tribalmemory    # or: uv tool install tribalmemory
```

## Quick Start

Zero cloud. Zero API keys. Everything runs locally.

```bash
pip install tribalmemory
tribalmemory init
tribalmemory serve
```

That's it. No config editing required.

First run: FastEmbed downloads a ~130 MB ONNX model. After that, embeddings are computed instantly and fully offline.

The server runs on `http://localhost:18790`.

### Running as a Service (Optional)

Keep the server running in the background with automatic restarts:

```bash
# Install and start the service (systemd on Linux, launchd on macOS)
tribalmemory service install

# Check status
tribalmemory service status

# View logs
tribalmemory service logs

# Stop and remove
tribalmemory service uninstall
```

User-level services only — no root/sudo required.


## Integrations

Tribal Memory connects to AI agents via MCP (Model Context Protocol). Set up one or more of these:

### Claude Code (CLI)

```bash
# Auto-configure (recommended)
tribalmemory init --claude-code
```

Or manually — add to `~/.claude.json`:

```json
{
  "mcpServers": {
    "tribal-memory": {
      "command": "tribalmemory-mcp"
    }
  }
}
```

#### Auto-Capture

By default, Claude Code has the MCP tools available but won't use them unless you ask. Add `--auto-capture` to make Claude Code proactively store and recall memories:

```bash
tribalmemory init --claude-code --auto-capture
```

This appends instructions to `~/.claude/CLAUDE.md` that tell Claude Code to:

- Auto-recall relevant memories at the start of each conversation
- Auto-store important decisions, architecture choices, and key facts
- Use `tribal_remember` and `tribal_recall` without being explicitly asked

Without `--auto-capture`, you can still use memory manually by saying "remember that..." or "what do you know about...".

Now Claude Code has persistent memory across sessions:

```
You: Remember that the auth service uses JWT with RS256
Claude: ✅ Stored.

--- next session ---

You: How does the auth service work?
Claude: Based on my memory, the auth service uses JWT with RS256...
```

### Claude Desktop

```bash
# Auto-configure (recommended — resolves the full binary path automatically)
tribalmemory init --claude-desktop
```

Claude Desktop doesn't inherit your shell PATH, so the bare command `tribalmemory-mcp` won't work. The init flag resolves the absolute path and writes it to `claude_desktop_config.json` for you.
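If you'd rather write the config by hand, the idea reduces to a PATH lookup. A minimal sketch of producing an entry with an absolute path (illustrative only — not the actual `init` implementation):

```python
# Sketch: resolve the MCP binary to an absolute path so Claude Desktop
# can launch it without a shell PATH. (Not the real init code.)
import json
import shutil

# shutil.which returns None if the binary isn't on PATH yet;
# the fallback path below is a placeholder, not a guaranteed location.
path = shutil.which("tribalmemory-mcp") or "/usr/local/bin/tribalmemory-mcp"

config = {"mcpServers": {"tribal-memory": {"command": path}}}
print(json.dumps(config, indent=2))
```

Paste the resulting object into `claude_desktop_config.json`, merging with any existing `mcpServers` entries.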

### Both Claude Apps

```bash
# Configure Claude Code CLI and Claude Desktop together
tribalmemory init --claude-code --claude-desktop
```

### Codex (CLI & Desktop)

The Codex CLI and desktop app share the same config file (`~/.codex/config.toml`), so one command sets up both:

```bash
# Auto-configure (recommended — works for both CLI and desktop app)
tribalmemory init --codex
```

Or manually — add to `~/.codex/config.toml`:

```toml
[mcp_servers.tribal-memory]
command = "tribalmemory-mcp"
```

Note: The init flag resolves the full binary path automatically, so the desktop app finds the command even if it doesn't inherit your shell PATH.

That's it. Codex now shares the same memory store as Claude Code. Memories stored by one are instantly available to the other.

Auto-capture works for Codex too — it writes instructions to `~/.codex/AGENTS.md`:

```bash
tribalmemory init --codex --auto-capture
```

### Set Up Everything at Once

```bash
# Configure all agents + auto-capture + background service
tribalmemory init --claude-code --codex --auto-capture --service
```

One command: MCP configured for both agents, auto-capture enabled, server running as a service.

### OpenClaw

Tribal Memory includes a plugin for OpenClaw:

```bash
openclaw plugins install ./extensions/memory-tribal
openclaw config set plugins.slots.memory=memory-tribal
```

How memories are saved:

- **Automatically** — memories are captured when the agent responds (preferences, decisions, key facts)
- **On demand** — use `/remember <thing to remember>` for immediate storage:

```
/remember Joe's birthday is March 15
/remember Always use TypeScript for new projects
```

### Cloud Setup (Coming Soon)

A hosted Tribal Memory service for teams — no server management, automatic syncing across machines. Star the repo for updates.


## Demo

Run the interactive demo to see Tribal Memory in action:

```bash
./demo.sh
```

See `docs/demo-output.md` for sample output.


## Self-Hosted Setup

### Configuration

Generated by `tribalmemory init`. Lives at `~/.tribal-memory/config.yaml`:

```yaml
instance_id: my-agent

embedding:
  provider: fastembed
  model: BAAI/bge-small-en-v1.5
  dimensions: 384

db:
  provider: lancedb
  path: ~/.tribal-memory/lancedb

server:
  host: 127.0.0.1
  port: 18790

search:
  lazy_spacy: true    # 70x faster ingest (default: true)
```

### Entity Extraction (Optional)

For better recall on personal conversations (finding people, places, dates), install spaCy:

```bash
pip install tribalmemory[spacy]
python -m spacy download en_core_web_sm
```

With lazy spaCy (the default), entity extraction stays fast on both paths:

- **Ingest** — uses fast regex patterns (~2–3 seconds per conversation)
- **Recall** — runs spaCy NER once on your query for accurate entity matching

This gives you the best of both worlds — fast ingestion and accurate retrieval for personal conversations.

### Environment Variables

| Variable | Description |
|----------|-------------|
| `TRIBAL_MEMORY_CONFIG` | Path to config file (default: `~/.tribal-memory/config.yaml`) |
| `TRIBAL_MEMORY_INSTANCE_ID` | Override instance ID |
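For illustration, variables like these usually imply an env-first resolution order. A sketch of that lookup (not the server's actual startup code):

```python
# Sketch of typical env-var resolution for the config path
# (illustrative — not the actual tribalmemory startup code).
import os

config_path = os.environ.get(
    "TRIBAL_MEMORY_CONFIG",
    os.path.expanduser("~/.tribal-memory/config.yaml"),  # default location
)
instance_id = os.environ.get("TRIBAL_MEMORY_INSTANCE_ID")  # None → use config value
print(config_path)
```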

### Docker

```bash
docker compose up -d
```

Mount a custom `config.yaml` to change the embedding model or dimensions. See `docker-compose.yml` for all options.


## Tribal API

### HTTP Endpoints

All endpoints are under the `/v1` prefix.

```bash
# Store a memory
curl -X POST http://localhost:18790/v1/remember \
  -H "Content-Type: application/json" \
  -d '{"content": "The database uses Postgres 16", "tags": ["infra"]}'

# Batch store (up to 1000 memories)
curl -X POST http://localhost:18790/v1/remember/batch \
  -H "Content-Type: application/json" \
  -d '{"memories": [
    {"content": "Auth uses JWT with RS256"},
    {"content": "Database is Postgres 16", "tags": ["infra"]}
  ]}'

# Search memories (auto-parses dates from query)
curl -X POST http://localhost:18790/v1/recall \
  -H "Content-Type: application/json" \
  -d '{"query": "what did we discuss last week?", "limit": 5}'

# Search with explicit temporal filter
curl -X POST http://localhost:18790/v1/recall \
  -H "Content-Type: application/json" \
  -d '{"query": "database decisions", "after": "2026-01-01", "limit": 5}'

# Health check
curl http://localhost:18790/v1/health

# Get stats
curl http://localhost:18790/v1/stats
```
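The same endpoints can be called from Python with only the standard library. A minimal client sketch (assumes the server from `tribalmemory serve` is listening on the default port):

```python
# Minimal stdlib client for the HTTP API (sketch; requires a running server).
import json
import urllib.request

BASE = "http://localhost:18790/v1"

def post(endpoint: str, payload: dict) -> dict:
    """POST a JSON payload to /v1/<endpoint> and decode the JSON response."""
    req = urllib.request.Request(
        f"{BASE}/{endpoint}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    post("remember", {"content": "The database uses Postgres 16", "tags": ["infra"]})
    print(post("recall", {"query": "which database do we use?", "limit": 5}))
```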

### MCP Tools

When connected via MCP, your AI gets these tools:

| Tool | Description |
|------|-------------|
| `tribal_store` | Store a new memory with deduplication |
| `tribal_recall` | Search memories (vector + graph expansion) |
| `tribal_recall_entity` | Query by entity name with hop traversal |
| `tribal_entity_graph` | Explore entity relationships |
| `tribal_correct` | Update/correct an existing memory |
| `tribal_forget` | Delete a memory |
| `tribal_stats` | Get memory statistics |
| `tribal_export` | Export memories to portable JSON |
| `tribal_import` | Import memories from a bundle |
| `tribal_sessions_ingest` | Index conversation transcripts |

### Python API

```python
from tribalmemory.services import create_memory_service

# FastEmbed uses BAAI/bge-small-en-v1.5 (384 dims) by default
service = create_memory_service(
    instance_id="my-agent",
    db_path="./memories",
)

# Store (the service methods are coroutines — call them from async code,
# e.g. inside a function driven by asyncio.run)
result = await service.remember(
    "User prefers TypeScript for web projects",
    tags=["preference", "coding"]
)

# Recall
results = await service.recall("What language for web?")
for r in results:
    print(f"{r.similarity_score:.2f}: {r.memory.content}")

# Correct
await service.correct(
    original_id=result.memory_id,
    corrected_content="User prefers TypeScript for web, Python for scripts"
)
```

## Architecture

```
┌──────────────┐
│  Claude Code │──── MCP ────┐
└──────────────┘             │
┌──────────────┐             ▼
│  Codex CLI   │──── MCP ───► Tribal Memory Server
└──────────────┘             ▲  (localhost:18790)
┌──────────────┐             │
│  OpenClaw    │── plugin ───┘
└──────────────┘
```

The server is the single source of truth. Each agent connects as an instance. Memories are tagged with `source_instance` so you can see who learned what.


## Features

- **Semantic search** — Find memories by meaning, not keywords
- **Cross-agent sharing** — Memories from one agent are available to all
- **Episode memories** — Multi-session narratives with auto-summarization
- **Graph search** — Entity extraction + relationship traversal
- **Graph visualization** — Built-in web UI to explore your knowledge graph at `/graph`
- **Hybrid retrieval** — Vector + BM25 keyword search combined
- **Zero cloud** — Local ONNX embeddings via FastEmbed, no API keys needed
- **Batch ingestion** — Store up to 1000 memories in a single request
- **Auto-temporal queries** — "What happened last week?" auto-parses dates
- **Session indexing** — Index conversation transcripts for search
- **Automatic deduplication** — Won't store the same thing twice
- **Memory corrections** — Update outdated information with an audit trail
- **Temporal reasoning** — Date extraction and time-based filtering
- **Import/export** — Portable JSON bundles with embedding metadata
- **Token budgets** — Smart context management to avoid LLM overload
- **MCP server** — Native integration with Claude Code, Codex, and more
- **Benchmark tested** — 100% accuracy on LoCoMo (1986 questions, all categories)

## Episode Memories

Episodes group related memories spanning multiple sessions into cohesive narratives with auto-generated summaries.

### Why Episodes?

- **Narrative continuity** — Projects unfold across days and weeks; episodes capture the full story
- **Automatic detection** — Write-time hybrid detection (embedding similarity + LLM classification)
- **Progressive summarization** — Summaries update incrementally as you work
- **Surfaces in recall** — Episode summaries are indexed memories that show up in standard `recall()` queries

### Quick Start

Add to `~/.tribal-memory/config.yaml`:

```yaml
episodes:
  enabled: true
  summarizer_model: gpt-4o-mini
  summarizer_provider: openai
```

Set your API key in `~/.tribal-memory/.env`:

```bash
OPENAI_API_KEY=sk-...        # For OpenAI
ANTHROPIC_API_KEY=sk-ant-... # For Anthropic
OLLAMA_BASE_URL=http://...   # For Ollama (local, no key needed)
```

Restart the server:

```bash
tribalmemory serve
```

Full documentation: `docs/episode-memories.md` — includes MCP tools, HTTP API, configuration reference, and troubleshooting.


## Privacy

Zero data leaves your machine:

- Embeddings computed locally (FastEmbed + ONNX runtime)
- Memories stored locally in LanceDB
- No API keys, no cloud services, no telemetry

## Development

```bash
git clone https://github.com/abbudjoe/TribalMemory.git
cd TribalMemory
pip install -e ".[dev]"

# Run tests
PYTHONPATH=src pytest

# Run linting
ruff check .
black --check .
```

## License

Business Source License 1.1 — see `LICENSE`.
