# Recall

> **Note:** This is a personal learning project. I'm using it to learn C# and .NET by building something I actually use daily. Works on Linux, macOS, and Windows. PRs welcome, but expect rough edges.

A personal diary MCP server with persistent memory, semantic search, health tracking, and grocery integration.

Recall gives Claude (or any MCP-compatible AI) access to your past conversations. Every new conversation starts with relevant context retrieved automatically, so the AI always knows what you've discussed before.

## How it works

Recall is an MCP server that stores diary entries in SQLite with vector embeddings for semantic search (all-MiniLM-L6-v2 via ONNX Runtime). Tools use an action-parameter pattern to keep the tool count low:

| Tool | Actions | Purpose |
|---|---|---|
| `diary` | write, update, get, pin | Create, edit, fetch, or pin diary entries |
| `diary_search` | context, query, list | Find entries: conversation-start context, keyword search, recent list |
| `diary_day` | view, plan, summarize | Day-level view, set plans, store daily summaries |
| `diary_time` | (none) | Current date/time (so the AI knows when it is) |
| `health` | recent, query, log_migraine, log_period | Health/fitness data, migraine and cycle tracking |
| `rohlik_search` | search, last_minute | Search Rohlik.cz grocery products |
| `rohlik_cart` | view, add, remove, update, check | Shopping cart management |
| `rohlik_orders` | history, detail, upcoming | Order history and tracking |
| `rohlik_delivery` | info, slots, reserve | Delivery timeslots |
| `rohlik_account` | premium, announcements, bags, checkout, shopping_list | Account info |
| `rohlik_checkout` | submit, pay | Order submission and payment (TOTP-protected) |
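
In practice the action-parameter pattern means the client sends one MCP `tools/call` request per operation, selecting the operation via an `action` argument rather than calling a separate tool. A hypothetical request for a diary write (the field values are illustrative, not taken from the server):

```python
import json

# Hypothetical MCP "tools/call" request showing the action-parameter
# pattern: one registered tool ("diary"), with the operation chosen by
# the "action" argument. Values here are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "diary",
        "arguments": {
            "action": "write",
            "content": "Started reading about vector embeddings today.",
        },
    },
}

print(json.dumps(request, indent=2))
```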

## Quick start

### Prerequisites

### Install and connect to Claude Code

```bash
git clone https://github.com/anicka-net/recall.git
cd recall
dotnet build

# Register with Claude Code
claude mcp add --transport stdio --scope user recall -- \
    dotnet run --project /path/to/recall/src/Recall.Server/Recall.Server.csproj
```

The diary tools are now available in your Claude Code conversations.

## Configuration

Config file: `~/.recall/config.json`

```json
{
  "databasePath": "~/.recall/recall.db",
  "systemPrompt": "Custom instructions for the AI",
  "promptFile": "~/.recall/prompt.txt",
  "autoContextLimit": 5,
  "searchResultLimit": 10,
  "tools": ["diary", "health"],
  "rohlikUsername": "user@example.com",
  "rohlikPassword": "password",
  "rohlikBaseUrl": "https://www.rohlik.cz"
}
```

All fields are optional. Defaults work out of the box.

The `tools` array controls which tool modules are registered. Valid modules: `diary`, `health`, `rohlik`. When omitted or empty, all modules are enabled. Rohlik additionally requires credentials to be present.

## Data storage

All data is stored locally at `~/.recall/recall.db` (SQLite). Nothing leaves your machine unless you configure a remote transport.

## Semantic search model

Recall uses all-MiniLM-L6-v2 for semantic search. Download the model files:

```bash
mkdir -p ~/.recall/models/all-MiniLM-L6-v2
curl -L -o ~/.recall/models/all-MiniLM-L6-v2/model.onnx \
    https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/resolve/main/onnx/model.onnx
curl -L -o ~/.recall/models/all-MiniLM-L6-v2/vocab.txt \
    https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/resolve/main/vocab.txt
```

Without the model files, Recall falls back to substring (LIKE) search.
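
When the model files are present, queries and entries are compared by similarity of their embedding vectors. A toy sketch of that ranking idea in pure Python (the four-dimensional vectors are made up; real all-MiniLM-L6-v2 embeddings are 384-dimensional, and this is not the server's actual search code):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings" standing in for real 384-dim model output.
query = [0.1, 0.9, 0.0, 0.2]
entries = {
    "migraine after storm": [0.1, 0.8, 0.1, 0.3],
    "grocery order placed": [0.9, 0.1, 0.2, 0.0],
}

# Rank diary entries by similarity to the query embedding.
ranked = sorted(entries, key=lambda k: cosine(query, entries[k]), reverse=True)
print(ranked[0])  # the semantically closest entry
```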

## Authentication and access control

Recall has two layers of auth: transport-level (who can connect) and tool-level (what they can see).

### Transport: local vs. remote

**Local (stdio):** Claude Code connects via stdio; no transport auth needed.

**Remote (HTTP):** OAuth 2.1 with PKCE or API keys control who can connect at all.

```bash
# OAuth setup (for claude.ai)
dotnet run -- oauth setup
# Prompts for a passphrase and your server's public URL

# API keys (for other clients)
dotnet run -- key create "my-client"    # Generate a key (shown once)
dotnet run -- key list                  # List all keys
dotnet run -- key revoke 3              # Revoke by ID
```

Both methods can be used simultaneously. If neither is configured, the server runs open.

### Tool-level: four-tier access

Every diary/health tool call (except `diary_time`) requires a `secret` parameter. The server hashes it and compares it against the configured hashes to determine the access level. When no auth hashes are configured at all, the server defaults to privileged access (for local-only use). Rohlik tools have no access control of their own; they're gated by whether credentials are configured.

| Level | Diary | Health | Pin/foundational | Write restricted |
|---|---|---|---|---|
| Privileged | all unscoped entries, all tiers | yes | yes | yes |
| Coding | unrestricted + unscoped only | no | no | no |
| Scoped | own scope only | no | no | no |
| None (no/bad secret) | rejected | rejected | rejected | rejected |

Configure in `~/.recall/config.json`:

```json
{
  "guardianSecretHash": "<sha256 of privileged passphrase>",
  "codingSecretHash": "<sha256 of coding passphrase>"
}
```

Generate hashes with: `echo -n "your-passphrase" | sha256sum`
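
The same hash can be produced without coreutils (e.g. on Windows); a Python equivalent:

```python
import hashlib

# Equivalent of: echo -n "your-passphrase" | sha256sum
passphrase = "your-passphrase"
secret_hash = hashlib.sha256(passphrase.encode("utf-8")).hexdigest()
print(secret_hash)  # 64 hex characters, paste into config.json
```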

**Injecting the secret into Claude Code:** use a `PreToolUse` hook that adds the coding secret to every Recall tool call. Example `~/.claude/hooks/recall-secret.sh`:

```bash
#!/bin/bash
INPUT=$(cat)
TOOL_NAME=$(echo "$INPUT" | jq -r '.tool_name')

# diary_time needs no secret
if [ "$TOOL_NAME" = "mcp__claude_ai_Recall__diary_time" ]; then
    exit 0
fi

TOOL_INPUT=$(echo "$INPUT" | jq -r '.tool_input')
UPDATED=$(echo "$TOOL_INPUT" | jq '. + {"secret": "YOUR_CODING_SECRET"}')

jq -n --argjson updated "$UPDATED" '{
  "hookSpecificOutput": {
    "hookEventName": "PreToolUse",
    "permissionDecision": "allow",
    "updatedInput": $updated
  }
}'
```

Register it in `~/.claude/settings.json`:

```json
{
  "hooks": {
    "PreToolUse": [{
      "matcher": "mcp__claude_ai_Recall__.*",
      "hooks": [{"type": "command", "command": "/path/to/recall-secret.sh"}]
    }]
  }
}
```

For claude.ai (privileged), include the privileged secret in the system prompt with instructions to pass it as the `secret` parameter on every tool call.

### Scoped users (isolated projects)

Multiple users or projects can share a single Recall instance with full isolation. Each scoped user gets their own diary space — they can only see and write entries tagged with their scope. The admin (privileged user) sees global entries by default and can opt into viewing any scope.

| Level | Sees | Writes | Health |
|---|---|---|---|
| Privileged | global entries (all scopes on request) | any scope | yes |
| Coding | global unrestricted only | global unrestricted | no |
| Scoped | only their scope | only their scope | no |

Setting up a scope:

```bash
# Generate credentials
PASSPHRASE=$(openssl rand -base64 24)
HASH=$(echo -n "$PASSPHRASE" | sha256sum | cut -d' ' -f1)
echo "Passphrase: $PASSPHRASE"
echo "Hash: $HASH"
```

Add to `~/.recall/config.json` on the server:

```json
{
  "Scopes": [
    { "Name": "project-name", "SecretHash": "<hash from above>" }
  ]
}
```

Restart the service. Give the passphrase to the user — see ONBOARDING.md for their setup steps.

How scoped isolation works:

- Entries have a `scope` column (NULL = global, `"project-name"` = scoped)
- Scoped users' writes are auto-tagged with their scope
- Queries filter by scope at the SQL level; ONNX vector search only scans matching entries, not the entire database
- Existing entries (scope = NULL) remain visible to Privileged and Coding, invisible to scoped users
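
The SQL-level filtering can be sketched with a simplified `entries` table (the real schema has more columns; this is an illustration, not the server's code):

```python
import sqlite3

# Simplified stand-in for Recall's entries table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, scope TEXT, content TEXT)")
db.executemany(
    "INSERT INTO entries (scope, content) VALUES (?, ?)",
    [
        (None, "global entry"),           # NULL scope = global
        ("project-a", "project-a entry"),
        ("project-b", "project-b entry"),
    ],
)

def visible_entries(scope):
    """Scoped users see only rows matching their scope; None = global view."""
    if scope is None:
        return db.execute("SELECT content FROM entries WHERE scope IS NULL").fetchall()
    return db.execute("SELECT content FROM entries WHERE scope = ?", (scope,)).fetchall()

print(visible_entries("project-a"))
```

Because the filter is applied in the query itself, any vector scan downstream only ever sees rows the caller is allowed to read.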

**Important:** the `PreToolUse` hook's `updatedInput` replaces the tool input, it does not merge into it. The hook script must read the original `tool_input` from stdin, merge the secret into it with `jq`, and return the full combined object. If you only return `{"secret": "..."}`, all other parameters (like `content`) are lost and the tool call fails. See ONBOARDING.md for the correct hook script.

## Tiered aging

As entries accumulate, older ones are automatically deprioritized so recent context stays relevant. Entries move through three tiers:

| Tier | Name | Age | `diary_search` context | `diary_search` query | `diary_search` list |
|---|---|---|---|---|---|
| 0 | hot | < 7 days | recent + search | yes | yes |
| 1 | warm | 7–90 days | search only | yes | no |
| 2 | cold | > 90 days | no | yes | no |

Aging runs lazily at the start of each `diary_search action=context` call. Thresholds are configurable:

```json
{
  "tierHotDays": 7,
  "tierWarmDays": 90
}
```
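
The tier assignment mirrors the table; a sketch in Python using the default thresholds (an illustration, not the actual C# implementation):

```python
# Tier assignment for unpinned entries, using the default thresholds
# (tierHotDays=7, tierWarmDays=90). Pinned entries are exempt from aging.
def tier(age_days, hot_days=7, warm_days=90):
    if age_days < hot_days:
        return 0  # hot: context recents + search + list
    if age_days <= warm_days:
        return 1  # warm: context search and query only
    return 2      # cold: query only
```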

## Pinning and foundational entries

Pinned entries (`diary action=pin id=42`) are exempt from auto-aging: they stay at their current tier indefinitely.

Foundational entries (`diary action=pin id=42 foundational=true`) are pinned and additionally always shown at the top of `diary_search action=context` for privileged users. They appear as a compact index (ID + first line), with `diary action=get` available for full content. Useful for reference material that should always be in context.

Both features are privileged-only.

## Calendar

The calendar provides a day-level view of your diary: plans, summaries, and linked entries.

### How it works

Each date can have a calendar entry with two fields:

- **Plans**: what you intend to do (can be set for future dates)
- **Summary**: a condensed record of what happened (written after the fact)

Diary entries are linked to calendar days automatically by their creation date. Call `diary_day action=view` with a date to see everything in one view.

### Access control

Calendar entries follow the same access model as diary entries:

| Level | Sees | Can create |
|---|---|---|
| Privileged | all calendar entries (any scope, restricted or not) | restricted and unrestricted |
| Coding | unscoped, unrestricted only | unrestricted only |
| Scoped | own scope only | own scope only |

A single date can have multiple calendar entries with different access levels. For example, a privileged user might write both a restricted summary (containing private details) and an unrestricted summary (safe for coding sessions) for the same day. The `restricted` flag on calendar entries controls visibility the same way it does for diary entries.
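
The visibility rules in the table can be written as a small predicate. This is an illustrative model, not the server's actual code, and how `restricted` interacts with scoped users is an assumption here:

```python
# Illustrative model of the calendar visibility table above.
def can_see(level, user_scope, entry_scope, restricted):
    """Return True if a caller at `level` may read a calendar entry."""
    if level == "privileged":
        return True                                    # any scope, restricted or not
    if level == "coding":
        return entry_scope is None and not restricted  # unscoped, unrestricted only
    if level == "scoped":
        return entry_scope == user_scope               # own scope only (assumption:
                                                       # restricted flag not checked)
    return False                                       # no/bad secret: rejected
```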

### Generating summaries

Summaries are stored text, not auto-generated. The intended workflow:

1. Call `diary_day action=view` for a date to review the entries
2. Write a concise summary
3. Call `diary_day action=summarize` to store it

This can be done manually or scripted as a batch operation across historical dates.

## Health data integration

Recall can store daily health summaries from Fitbit (sleep, heart rate, activity, SpO2), weather data, menstrual cycle tracking, and migraine tracking. The `tools/` directory contains:

| Script | Purpose |
|---|---|
| `tools/fitbit-sync.py` | Fetch Fitbit data via API, enrich with weather (Open-Meteo), write to recall.db |
| `tools/fitbit-cron.sh` | Hourly cron job: sync + push to remote + migraine risk prediction |
| `tools/cycle.py` | Menstrual cycle tracking with predictions |
| `tools/migraine.py` | Migraine logging, history with weather/cycle context, 7-day risk prediction |

Health data appears through the `health` MCP tool (actions: `recent`, `query`). Migraines and period starts can be logged directly via the `health` tool (actions: `log_migraine`, `log_period`) or via the CLI scripts.

### Migraine prediction

`migraine.py predict` fetches a 7-day weather forecast and scores each day for migraine risk based on:

- Barometric pressure drops (>5 hPa = moderate, >10 hPa = high risk)
- Menstrual cycle phase (menstrual = high, ovulatory = moderate)
- Temperature swings (>12°C daily range)
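
The listed factors can be combined into a simple additive score. The thresholds below come from the bullets above, but the weights and their combination are assumptions for illustration, not the actual `migraine.py` logic:

```python
# Illustrative migraine risk score from the three listed factors.
def risk_score(pressure_drop_hpa, cycle_phase, temp_range_c):
    score = 0
    if pressure_drop_hpa > 10:
        score += 2                     # large pressure drop: high risk
    elif pressure_drop_hpa > 5:
        score += 1                     # moderate pressure drop
    if cycle_phase == "menstrual":
        score += 2                     # menstrual phase: high risk
    elif cycle_phase == "ovulatory":
        score += 1                     # ovulatory phase: moderate
    if temp_range_c > 12:
        score += 1                     # big daily temperature swing
    return score

print(risk_score(11, "menstrual", 13))  # a hypothetical high-risk day
```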

Risk predictions are written to the calendar table so they appear in diary_day views.

## Deployment

### Local (stdio)

Just register with Claude Code as shown in Quick Start. No server process needed.

### Remote (HTTP)

Publish a self-contained binary and run it as a service. Framework-dependent builds won't work unless the exact .NET runtime is installed on the server.

```bash
# Build (self-contained for Linux x64)
dotnet publish src/Recall.Server/Recall.Server.csproj \
    -c Release --self-contained -r linux-x64 -o publish/

# Copy to server
rsync -av publish/ server:~/recall-server/

# Run
./publish/Recall.Server --http --port 3000
```

systemd service (`/etc/systemd/system/recall.service`):

```ini
[Unit]
Description=Recall MCP Server
After=network.target

[Service]
Type=exec
User=recall
WorkingDirectory=/opt/recall
ExecStart=/opt/recall/Recall.Server --http --port 3000
Restart=on-failure
RestartSec=5
Environment=DOTNET_ENVIRONMENT=Production

[Install]
WantedBy=multi-user.target
```

Enable and start it:

```bash
sudo systemctl enable --now recall
```

Put a reverse proxy (Apache/nginx/Caddy) in front for TLS. Claude.ai requires HTTPS.
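
A hypothetical nginx snippet for the TLS-terminating proxy (the `/recall/` prefix, the backend port, and the header set are assumptions based on the examples elsewhere in this README; adapt to your deployment and add your certificate configuration):

```nginx
# Proxy /recall/ to the Recall server on localhost:3000.
location /recall/ {
    proxy_pass http://127.0.0.1:3000/;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_buffering off;  # keep SSE / streaming responses flowing
}
```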

### Updating a running server

```bash
dotnet publish src/Recall.Server/Recall.Server.csproj \
    -c Release --self-contained -r linux-x64 -o publish/
rsync -av publish/ server:~/recall-server/
# On the server:
sudo systemctl restart recall
```

The SQLite database is preserved across restarts. Schema migrations run automatically on startup.

## Proxying external MCP servers

Recall can act as an OAuth gateway for other MCP servers. Any stdio-based MCP server wrapped in supergateway can be proxied through Recall's existing auth — no additional OAuth setup needed. (Note: Rohlik grocery tools are now integrated directly into the recall server binary, not proxied.)

How it works:

```
claude.ai ──HTTPS──▶ reverse proxy ──▶ Recall ──▶ supergateway ──▶ stdio MCP server
                                    (OAuth gate)   (Streamable HTTP)
```

### Step 1: Set up the external MCP server with supergateway

```bash
# Install on the server
mkdir -p ~/my-mcp && cd ~/my-mcp
npm init -y
npm install supergateway @some/mcp-server

# Create credentials file (if needed)
cat > ~/.config/my-mcp.env << 'EOF'
MY_USERNAME=user@example.com
MY_PASSWORD=secret
EOF
chmod 600 ~/.config/my-mcp.env
```

### Step 2: Create a systemd user unit

```ini
# ~/.config/systemd/user/my-mcp.service
[Unit]
Description=My MCP Server (supergateway)
After=network.target

[Service]
Type=exec
EnvironmentFile=%h/.config/my-mcp.env
ExecStart=/usr/bin/npx --prefix %h/my-mcp supergateway \
    --stdio "node %h/my-mcp/node_modules/@some/mcp-server/dist/index.js" \
    --port 8385 \
    --outputTransport streamableHttp \
    --streamableHttpPath /mcp \
    --logLevel info
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

```bash
systemctl --user daemon-reload
systemctl --user enable --now my-mcp
loginctl enable-linger  # persist after logout
```

### Step 3: Add the proxy to Recall's config

In `~/.recall/config.json`:

```json
{
  "mcpProxies": [
    { "prefix": "my-mcp", "target": "http://127.0.0.1:8385" }
  ]
}
```

Restart Recall. Tools are now available at `https://your-server/recall/my-mcp/mcp` using the same OAuth token.

### Step 4: Add a claude.ai connector

In claude.ai settings, add an MCP connector with URL:

```
https://your-server/recall/my-mcp/mcp
```

It uses the same OAuth passphrase as Recall — no separate auth setup.

**Important notes:**

- Use `--outputTransport streamableHttp` in supergateway; claude.ai prefers Streamable HTTP over SSE
- No proxy config is needed in Apache/nginx; the existing `/recall/` proxy rule covers all subpaths
- When `mcpProxies` is absent or empty, no proxy code runs, so there is no impact on Recall
- Credentials for external services go in env files, never in Recall's config or repo

## Architecture

```
┌─────────────────────────────────────────────┐
│             Recall MCP Server               │
│  SQLite + vector search + OAuth 2.1         │
│                                             │
│  /sse, /message  →  diary, health, rohlik   │
│  /{prefix}/*     →  proxy to backend (opt)  │
└──────┬──────────────────┬───────────────────┘
       │                  │
  MCP protocol      reverse proxy (opt)
       │                  │
  ┌────┴────────┐   ┌─────┴──────────┐
  │ Claude Code │   │  supergateway  │
  │  claude.ai  │   │  (external     │
  │  any client │   │   MCP servers) │
  └─────────────┘   └────────────────┘
```

## Building

```bash
dotnet build                       # Build
dotnet test src/Recall.Tests       # Run tests
```

## License

MIT
