README.md — 25 additions, 22 deletions

@@ -9,19 +9,10 @@
A LanceDB-backed OpenClaw memory plugin that stores preferences, decisions, and project context, then auto-recalls them in future sessions.

[![OpenClaw Plugin](https://img.shields.io/badge/OpenClaw-Plugin-blue)](https://github.com/openclaw/openclaw)
[![OpenClaw 2026.3+](https://img.shields.io/badge/OpenClaw-2026.3%2B-brightgreen)](https://github.com/openclaw/openclaw)
[![npm version](https://img.shields.io/npm/v/memory-lancedb-pro)](https://www.npmjs.com/package/memory-lancedb-pro)
[![LanceDB](https://img.shields.io/badge/LanceDB-Vectorstore-orange)](https://lancedb.com)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)

<h2>⚡ <a href="https://github.com/CortexReach/memory-lancedb-pro/releases/tag/v1.1.0-beta.10">v1.1.0-beta.10 — OpenClaw 2026.3+ Hook Adaptation</a></h2>

<p>
✅ Fully adapted for OpenClaw 2026.3+ new plugin architecture<br>
🔄 Uses <code>before_prompt_build</code> hooks (replacing deprecated <code>before_agent_start</code>)<br>
🩺 Run <code>openclaw doctor --fix</code> after upgrading
</p>

[English](README.md) | [简体中文](README_CN.md) | [繁體中文](README_TW.md) | [日本語](README_JA.md) | [한국어](README_KO.md) | [Français](README_FR.md) | [Español](README_ES.md) | [Deutsch](README_DE.md) | [Italiano](README_IT.md) | [Русский](README_RU.md) | [Português (Brasil)](README_PT-BR.md)

</div>
@@ -129,6 +120,31 @@

Add to your `openclaw.json`:
- `extractMinMessages: 2` → extraction triggers in normal two-turn chats
- `sessionMemory.enabled: false` → avoids polluting retrieval with session summaries on day one
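A minimal sketch of those two day-one settings in `openclaw.json`, assuming the plugin is registered under `plugins.entries` (matching the config shape used elsewhere in this README) and that `sessionMemory.enabled` nests as an object:

```json
{
  "plugins": {
    "entries": {
      "memory-lancedb-pro": {
        "config": {
          "extractMinMessages": 2,
          "sessionMemory": { "enabled": false }
        }
      }
    }
  }
}
```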

---

## ⚠️ Dual-Memory Architecture (Important)

When `memory-lancedb-pro` is active, your system has **two independent memory layers** that do **not** auto-sync:

| Memory Layer | Storage | What it's for | Recallable? |
|---|---|---|---|
| **Plugin Memory** | LanceDB (vector store) | Semantic recall via `memory_recall` / auto-recall | ✅ Yes |
| **Markdown Memory** | `MEMORY.md`, `memory/YYYY-MM-DD.md` | Startup context, human-readable journal | ❌ Not auto-recalled |

**Key principle:**
> A fact written into `memory/YYYY-MM-DD.md` is visible in startup context, but `memory_recall` **will not find it** unless it was also written via `memory_store` (or auto-captured by the plugin).

**What this means for you:**
- Need semantic recall? → Use `memory_store` or let auto-capture do it
- `memory/YYYY-MM-DD.md` → treat as a **daily journal / log**, not a recall source
- `MEMORY.md` → curated human-readable reference, not a recall source
- Plugin memory → **primary recall source** for `memory_recall` and auto-recall

**If you want your Markdown memories to be recallable**, use the import command:
```bash
npx memory-lancedb-pro memory-pro import-markdown
```
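To see what the importer will and won't pick up, here is a small TypeScript sketch of its two filters, mirroring the logic in this PR's `cli.ts` (a sketch for illustration, not the plugin API): only `memory/` files named with a `YYYY-MM-DD` prefix are scanned, and only top-level `- ` bullet lines with at least 5 characters of content become entries.

```typescript
// A memory/ file is scanned only if it is Markdown named with a date
// prefix, e.g. memory/2026-03-14.md.
function isDailyMemoryFile(name: string): boolean {
  return name.endsWith(".md") && /^\d{4}-\d{2}-\d{2}/.test(name);
}

// Only top-level "- " bullets with at least 5 characters of content
// become memory entries; headings, prose, and nested bullets are skipped.
function extractMemoryLines(markdown: string): string[] {
  const entries: string[] = [];
  for (const line of markdown.split("\n")) {
    if (!line.startsWith("- ")) continue;
    const text = line.slice(2).trim();
    if (text.length >= 5) entries.push(text);
  }
  return entries;
}

console.log(isDailyMemoryFile("2026-03-14.md")); // → true
console.log(isDailyMemoryFile("notes.md"));      // → false
console.log(extractMemoryLines("# Notes\n- Prefers strict TypeScript\n  - nested detail\n- ok"));
// → ["Prefers strict TypeScript"]
```

In particular, nested bullets and short fragments like `- ok` are silently skipped, so a `--dry-run` pass is worth doing first to see what would actually land in the store.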

Validate & restart:

```bash
@@ -620,19 +636,6 @@

Sometimes the model may echo the injected `<relevant-memories>` block.

</details>

<details>
<summary><strong>Auto-recall timeout tuning</strong></summary>

Auto-recall has a configurable timeout (default 5s) to prevent stalling agent startup. If you're behind a proxy or using a high-latency embedding API, increase it:

```json
{ "plugins": { "entries": { "memory-lancedb-pro": { "config": { "autoRecallTimeoutMs": 8000 } } } } }
```

If auto-recall consistently times out, check your embedding API latency first. The timeout only affects the automatic injection path — manual `memory_recall` tool calls are not affected.

</details>
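The timeout described above can be thought of as a race between the recall call and a timer. A hypothetical sketch of that pattern — not the plugin's actual implementation — where the fallback is simply "no memories injected":

```typescript
// Hypothetical sketch: cap an async recall at `timeoutMs`, resolving to a
// fallback value instead of stalling agent startup.
async function withTimeout<T>(task: Promise<T>, timeoutMs: number, fallback: T): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), timeoutMs);
  });
  try {
    // Whichever settles first wins; the loser is ignored.
    return await Promise.race([task, timeout]);
  } finally {
    clearTimeout(timer); // avoid leaking the timer when the task wins
  }
}
```

Usage under this assumption would look like `await withTimeout(recallPromise, autoRecallTimeoutMs, [])` — which is why a timed-out auto-recall degrades to an empty injection rather than an error.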

<details>
<summary><strong>Session Memory</strong></summary>

cli.ts — 125 additions, 0 deletions

@@ -1036,6 +1036,131 @@ export function registerMemoryCLI(program: Command, context: CLIContext): void {
}
});

/**
* import-markdown: Import memories from Markdown memory files into the plugin store.
* Targets MEMORY.md and memory/YYYY-MM-DD.md files found in OpenClaw workspaces.
*/
memory
.command("import-markdown [workspace-glob]")
.description("Import memories from Markdown files (MEMORY.md, memory/YYYY-MM-DD.md) into the plugin store")
.option("--dry-run", "Show what would be imported without importing")
.option("--scope <scope>", "Import into specific scope (default: global)")
.option(
"--openclaw-home <path>",
"OpenClaw home directory (default: ~/.openclaw)",
)
.action(async (workspaceGlob, options) => {
const openclawHome = options.openclawHome
? path.resolve(options.openclawHome)
: path.join(homedir(), ".openclaw");

const workspaceDir = path.join(openclawHome, "workspace");
let imported = 0;
let skipped = 0;
let foundFiles = 0;

if (!context.embedder) {
console.error(
"import-markdown requires an embedder. Use via plugin CLI or ensure embedder is configured.",
);
process.exit(1);
}

// Scan workspace directories (readdir with withFileTypes yields Dirent[], not string[])
const fsp = await import("node:fs/promises");
let workspaceEntries: import("node:fs").Dirent[];
try {
workspaceEntries = await fsp.readdir(workspaceDir, { withFileTypes: true });
} catch {
console.error(`Failed to read workspace directory: ${workspaceDir}`);
process.exit(1);
}

// Collect all markdown files to scan
const mdFiles: Array<{ filePath: string; scope: string }> = [];

for (const entry of workspaceEntries) {
if (!entry.isDirectory()) continue;
// Note: despite the argument's name, this is a substring filter, not a full glob
if (workspaceGlob && !entry.name.includes(workspaceGlob)) continue;

const workspacePath = path.join(workspaceDir, entry.name);

// Hoist a single fs/promises import instead of re-importing per check
const { stat, readdir } = await import("node:fs/promises");

// MEMORY.md
const memoryMd = path.join(workspacePath, "MEMORY.md");
try {
await stat(memoryMd);
mdFiles.push({ filePath: memoryMd, scope: entry.name });
} catch { /* not found */ }

// memory/ directory of dated journal files
const memoryDir = path.join(workspacePath, "memory");
try {
const stats = await stat(memoryDir);
if (stats.isDirectory()) {
const files = await readdir(memoryDir);
for (const f of files) {
if (f.endsWith(".md") && /^\d{4}-\d{2}-\d{2}/.test(f)) {
mdFiles.push({ filePath: path.join(memoryDir, f), scope: entry.name });
}
}
}
} catch { /* not found */ }
}

if (mdFiles.length === 0) {
console.log("No Markdown memory files found.");
return;
}

const targetScope = options.scope || "global";

// Parse each file for memory entries (lines starting with "- ")
const { readFile } = await import("node:fs/promises");
for (const { filePath, scope } of mdFiles) {
foundFiles++;
const content = await readFile(filePath, "utf-8");
const lines = content.split("\n");

for (const line of lines) {
// Skip non-memory lines
if (!line.startsWith("- ")) continue;
const text = line.slice(2).trim();
if (text.length < 5) { skipped++; continue; }

if (options.dryRun) {
const preview = text.length > 80 ? `${text.slice(0, 80)}…` : text;
console.log(`  [dry-run] would import: ${preview}`);
imported++;
continue;
}

try {
const vector = await context.embedder!.embedQuery(text);
await context.store.store({
text,
vector,
importance: 0.7,
category: "other",
scope: targetScope,
metadata: { importedFrom: filePath, sourceScope: scope },
});
imported++;
} catch (err) {
console.warn(` Failed to import: ${text.slice(0, 60)}... — ${err}`);
skipped++;
}
}
}

if (options.dryRun) {
console.log(`\nDRY RUN — found ${foundFiles} files, ${imported} entries would be imported, ${skipped} skipped`);
} else {
console.log(`\nImport complete: ${imported} imported, ${skipped} skipped (scanned ${foundFiles} files)`);
}
});

// Re-embed an existing LanceDB into the current target DB (A/B testing)
memory
.command("reembed")