+
+# 🧠 memory-lancedb-pro · 🦞OpenClaw Plugin
+
+**Asistente de Memoria IA para Agentes [OpenClaw](https://github.com/openclaw/openclaw)**
+
+*Dale a tu agente de IA un cerebro que realmente recuerda — entre sesiones, entre agentes, a lo largo del tiempo.*
+
+Un plugin de memoria para OpenClaw respaldado por LanceDB que almacena preferencias, decisiones y contexto de proyectos, y los recupera automáticamente en sesiones futuras.
+
+[OpenClaw](https://github.com/openclaw/openclaw) · [npm](https://www.npmjs.com/package/memory-lancedb-pro) · [LanceDB](https://lancedb.com) · [Licencia MIT](LICENSE)
+
+[English](README.md) | [简体中文](README_CN.md) | [繁體中文](README_TW.md) | [日本語](README_JA.md) | [한국어](README_KO.md) | [Français](README_FR.md) | [Español](README_ES.md) | [Deutsch](README_DE.md) | [Italiano](README_IT.md) | [Русский](README_RU.md) | [Português (Brasil)](README_PT-BR.md)
+
+
+
+---
+
+## ¿Por qué memory-lancedb-pro?
+
+La mayoría de los agentes de IA tienen amnesia. Olvidan todo en el momento en que inicias un nuevo chat.
+
+**memory-lancedb-pro** es un plugin de memoria a largo plazo de nivel productivo para OpenClaw que convierte a tu agente en un **Asistente de Memoria IA** — captura automáticamente lo que importa, deja que el ruido se desvanezca naturalmente y recupera el recuerdo correcto en el momento adecuado. Sin etiquetado manual, sin complicaciones de configuración.
+
+### Tu Asistente de Memoria IA en acción
+
+**Sin memoria — cada sesión comienza desde cero:**
+
+> **Tú:** "Usa tabulaciones para la indentación, siempre agrega manejo de errores."
+> *(siguiente sesión)*
+> **Tú:** "¡Ya te lo dije — tabulaciones, no espacios!" 😤
+> *(siguiente sesión)*
+> **Tú:** "...en serio, tabulaciones. Y manejo de errores. Otra vez."
+
+**Con memory-lancedb-pro — tu agente aprende y recuerda:**
+
+> **Tú:** "Usa tabulaciones para la indentación, siempre agrega manejo de errores."
+> *(siguiente sesión — el agente recupera automáticamente tus preferencias)*
+> **Agente:** *(aplica silenciosamente tabulaciones + manejo de errores)* ✅
+> **Tú:** "¿Por qué elegimos PostgreSQL en lugar de MongoDB el mes pasado?"
+> **Agente:** "Basándome en nuestra discusión del 12 de febrero, las razones principales fueron..." ✅
+
+Esa es la diferencia que hace un **Asistente de Memoria IA** — aprende tu estilo, recuerda decisiones pasadas y entrega respuestas personalizadas sin que tengas que repetirte.
+
+### ¿Qué más puede hacer?
+
+| | Lo que obtienes |
+|---|---|
+| **Auto-Capture** | Tu agente aprende de cada conversación — sin necesidad de `memory_store` manual |
+| **Smart Extraction** | Clasificación de 6 categorías impulsada por LLM: perfiles, preferencias, entidades, eventos, casos, patrones |
+| **Olvido Inteligente** | Modelo de decaimiento Weibull — los recuerdos importantes permanecen, el ruido se desvanece naturalmente |
+| **Recuperación Híbrida** | Búsqueda vectorial + BM25 de texto completo, fusionada con reranking por cross-encoder |
+| **Inyección de Contexto** | Los recuerdos relevantes aparecen automáticamente antes de cada respuesta |
+| **Aislamiento Multi-Scope** | Límites de memoria por agente, por usuario, por proyecto |
+| **Cualquier Proveedor** | OpenAI, Jina, Gemini, Ollama, o cualquier API compatible con OpenAI |
+| **Kit Completo de Herramientas** | CLI, respaldo, migración, actualización, exportar/importar — listo para producción |
+
+---
+
+## Inicio Rápido
+
+### Opción A: Script de instalación con un clic (Recomendado)
+
+El **[script de instalación](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)** mantenido por la comunidad gestiona la instalación, actualización y reparación en un solo comando:
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/CortexReach/toolbox/main/memory-lancedb-pro-setup/setup-memory.sh -o setup-memory.sh
+bash setup-memory.sh
+```
+
+> Consulta [Ecosistema](#ecosistema) más abajo para ver la lista completa de escenarios que cubre el script y otras herramientas de la comunidad.
+
+### Opción B: Instalación Manual
+
+**Mediante la CLI de OpenClaw (recomendado):**
+```bash
+openclaw plugins install memory-lancedb-pro@beta
+```
+
+**O mediante npm:**
+```bash
+npm i memory-lancedb-pro@beta
+```
+> Si usas npm, también necesitarás agregar el directorio de instalación del plugin como una ruta **absoluta** en `plugins.load.paths` en tu `openclaw.json`. Este es el problema de configuración más común.
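
Un ejemplo mínimo de esa entrada (la ruta es ilustrativa — sustitúyela por la ruta absoluta real donde npm instaló el paquete):

```json
{
  "plugins": {
    "load": {
      "paths": ["/home/usuario/proyecto/node_modules/memory-lancedb-pro"]
    }
  }
}
```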
+
+Agrega a tu `openclaw.json`:
+
+```json
+{
+ "plugins": {
+ "slots": { "memory": "memory-lancedb-pro" },
+ "entries": {
+ "memory-lancedb-pro": {
+ "enabled": true,
+ "config": {
+ "embedding": {
+ "provider": "openai-compatible",
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "text-embedding-3-small"
+ },
+ "autoCapture": true,
+ "autoRecall": true,
+ "smartExtraction": true,
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000,
+ "sessionMemory": { "enabled": false }
+ }
+ }
+ }
+ }
+}
+```
+
+**¿Por qué estos valores predeterminados?**
+- `autoCapture` + `smartExtraction` → tu agente aprende de cada conversación automáticamente
+- `autoRecall` → los recuerdos relevantes se inyectan antes de cada respuesta
+- `extractMinMessages: 2` → la extracción se activa en chats normales de dos turnos
+- `sessionMemory.enabled: false` → evita contaminar la recuperación con resúmenes de sesión desde el primer día
+
+Valida y reinicia:
+
+```bash
+openclaw config validate
+openclaw gateway restart
+openclaw logs --follow --plain | grep "memory-lancedb-pro"
+```
+
+Deberías ver:
+- `memory-lancedb-pro: smart extraction enabled`
+- `memory-lancedb-pro@...: plugin registered`
+
+¡Listo! Tu agente ahora tiene memoria a largo plazo.
+
+
+**Más rutas de instalación (usuarios existentes, actualizaciones)**
+
+**¿Ya usas OpenClaw?**
+
+1. Agrega el plugin con una entrada **absoluta** en `plugins.load.paths`
+2. Vincula el slot de memoria: `plugins.slots.memory = "memory-lancedb-pro"`
+3. Verifica: `openclaw plugins info memory-lancedb-pro && openclaw memory-pro stats`
+
+**¿Actualizando desde una versión anterior a v1.1.0?**
+
+```bash
+# 1) Respaldo
+openclaw memory-pro export --scope global --output memories-backup.json
+# 2) Ejecución de prueba
+openclaw memory-pro upgrade --dry-run
+# 3) Ejecutar actualización
+openclaw memory-pro upgrade
+# 4) Verificar
+openclaw memory-pro stats
+```
+
+Consulta `CHANGELOG-v1.1.0.md` para los cambios de comportamiento y la justificación de la actualización.
+
+
+
+
+**Importación rápida para Bot de Telegram**
+
+Si usas la integración de Telegram de OpenClaw, la forma más fácil es enviar un comando de importación directamente al Bot principal en lugar de editar la configuración manualmente.
+
+Envía este mensaje (en inglés, ya que es un prompt para el bot):
+
+```text
+Help me connect this memory plugin with the most user-friendly configuration: https://github.com/CortexReach/memory-lancedb-pro
+
+Requirements:
+1. Set it as the only active memory plugin
+2. Use Jina for embedding
+3. Use Jina for reranker
+4. Use gpt-4o-mini for the smart-extraction LLM
+5. Enable autoCapture, autoRecall, smartExtraction
+6. extractMinMessages=2
+7. sessionMemory.enabled=false
+8. captureAssistant=false
+9. retrieval mode=hybrid, vectorWeight=0.7, bm25Weight=0.3
+10. rerank=cross-encoder, candidatePoolSize=12, minScore=0.6, hardMinScore=0.62
+11. Generate the final openclaw.json config directly, not just an explanation
+```
+
+
+
+---
+
+## Ecosistema
+
+memory-lancedb-pro es el plugin principal. La comunidad ha desarrollado herramientas a su alrededor para hacer que la configuración y el uso diario sean aún más sencillos:
+
+### Script de Instalación — Instala, actualiza y repara con un solo clic
+
+> **[CortexReach/toolbox/memory-lancedb-pro-setup](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)**
+
+Mucho más que un simple instalador — el script gestiona de forma inteligente una amplia variedad de escenarios reales:
+
+| Tu situación | Lo que hace el script |
+|---|---|
+| Nunca instalado | Descarga nueva → instala dependencias → elige configuración → escribe en openclaw.json → reinicia |
+| Instalado vía `git clone`, atascado en un commit antiguo | `git fetch` + `checkout` automático a la última versión → reinstala dependencias → verifica |
+| La configuración tiene campos inválidos | Auto-detección mediante filtro de esquema, elimina campos no soportados |
+| Instalado vía `npm` | Omite la actualización de git, te recuerda ejecutar `npm update` por tu cuenta |
+| CLI de `openclaw` rota por configuración inválida | Alternativa: lee la ruta del workspace directamente del archivo `openclaw.json` |
+| `extensions/` en lugar de `plugins/` | Auto-detección de la ubicación del plugin desde la configuración o el sistema de archivos |
+| Ya está actualizado | Solo ejecuta verificaciones de salud, sin cambios |
+
+```bash
+bash setup-memory.sh # Instalar o actualizar
+bash setup-memory.sh --dry-run # Solo previsualización
+bash setup-memory.sh --beta # Incluir versiones preliminares
+bash setup-memory.sh --uninstall # Revertir configuración y eliminar plugin
+```
+
+Configuraciones preestablecidas de proveedores: **Jina / DashScope / SiliconFlow / OpenAI / Ollama**, o usa tu propia API compatible con OpenAI. Para la referencia completa (incluyendo `--ref`, `--selfcheck-only` y más), consulta el [README del script de instalación](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup).
+
+### Skill para Claude Code / OpenClaw — Configuración Guiada por IA
+
+> **[CortexReach/memory-lancedb-pro-skill](https://github.com/CortexReach/memory-lancedb-pro-skill)**
+
+Instala este skill y tu agente de IA (Claude Code u OpenClaw) obtiene un conocimiento profundo de cada característica de memory-lancedb-pro. Solo di **"ayúdame a habilitar la mejor configuración"** y obtén:
+
+- **Flujo de configuración guiado en 7 pasos** con 4 planes de despliegue:
+ - Potencia Total (Jina + OpenAI) / Económico (reranker gratuito de SiliconFlow) / Simple (solo OpenAI) / Totalmente Local (Ollama, sin costo de API)
+- **Las 9 herramientas MCP** usadas correctamente: `memory_recall`, `memory_store`, `memory_forget`, `memory_update`, `memory_stats`, `memory_list`, `self_improvement_log`, `self_improvement_extract_skill`, `self_improvement_review` *(el conjunto completo de herramientas requiere `enableManagementTools: true` — la configuración de Inicio Rápido predeterminada expone las 4 herramientas principales)*
+- **Prevención de errores comunes**: habilitación del plugin en el workspace, `autoRecall` desactivado por defecto, caché de jiti, variables de entorno, aislamiento de scope, y más
+
+**Instalar para Claude Code:**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.claude/skills/memory-lancedb-pro
+```
+
+**Instalar para OpenClaw:**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.openclaw/workspace/skills/memory-lancedb-pro-skill
+```
+
+---
+
+## Tutorial en Video
+
+> Recorrido completo: instalación, configuración y funcionamiento interno de la recuperación híbrida.
+
+**https://youtu.be/MtukF1C8epQ**
+
+**https://www.bilibili.com/video/BV1zUf2BGEgn/**
+
+---
+
+## Arquitectura
+
+```
+┌─────────────────────────────────────────────────────────┐
+│ index.ts (Entry Point) │
+│ Plugin Registration · Config Parsing · Lifecycle Hooks │
+└────────┬──────────┬──────────┬──────────┬───────────────┘
+ │ │ │ │
+ ┌────▼───┐ ┌────▼───┐ ┌───▼────┐ ┌──▼──────────┐
+ │ store │ │embedder│ │retriever│ │ scopes │
+ │ .ts │ │ .ts │ │ .ts │ │ .ts │
+ └────────┘ └────────┘ └────────┘ └─────────────┘
+ │ │
+ ┌────▼───┐ ┌─────▼──────────┐
+ │migrate │ │noise-filter.ts │
+ │ .ts │ │adaptive- │
+ └────────┘ │retrieval.ts │
+ └────────────────┘
+ ┌─────────────┐ ┌──────────┐
+ │ tools.ts │ │ cli.ts │
+ │ (Agent API) │ │ (CLI) │
+ └─────────────┘ └──────────┘
+```
+
+> Para un análisis detallado de la arquitectura completa, consulta [docs/memory_architecture_analysis.md](docs/memory_architecture_analysis.md).
+
+
+**Referencia de Archivos**
+
+| Archivo | Propósito |
+| --- | --- |
+| `index.ts` | Punto de entrada del plugin. Se registra con la API de Plugins de OpenClaw, analiza la configuración, monta hooks de ciclo de vida |
+| `openclaw.plugin.json` | Metadatos del plugin + declaración completa de configuración con JSON Schema |
+| `cli.ts` | Comandos CLI: `memory-pro list/search/stats/delete/delete-bulk/export/import/reembed/upgrade/migrate` |
+| `src/store.ts` | Capa de almacenamiento LanceDB. Creación de tablas / Indexación FTS / Búsqueda vectorial / Búsqueda BM25 / CRUD |
+| `src/embedder.ts` | Abstracción de embeddings. Compatible con cualquier proveedor de API compatible con OpenAI |
+| `src/retriever.ts` | Motor de recuperación híbrida. Vector + BM25 → Fusión Híbrida → Rerank → Decaimiento de Ciclo de Vida → Filtro |
+| `src/scopes.ts` | Control de acceso multi-scope |
+| `src/tools.ts` | Definiciones de herramientas del agente: `memory_recall`, `memory_store`, `memory_forget`, `memory_update` + herramientas de gestión |
+| `src/noise-filter.ts` | Filtra rechazos del agente, meta-preguntas, saludos y contenido de baja calidad |
+| `src/adaptive-retrieval.ts` | Determina si una consulta necesita recuperación de memoria |
+| `src/migrate.ts` | Migración desde `memory-lancedb` integrado a Pro |
+| `src/smart-extractor.ts` | Extracción de 6 categorías impulsada por LLM con almacenamiento en capas L0/L1/L2 y deduplicación en dos etapas |
+| `src/decay-engine.ts` | Modelo de decaimiento exponencial estirado de Weibull |
+| `src/tier-manager.ts` | Promoción/degradación en tres niveles: Peripheral ↔ Working ↔ Core |
+
+
+
+---
+
+## Características Principales
+
+### Recuperación Híbrida
+
+```
+Query → embedQuery() ─┐
+ ├─→ Hybrid Fusion → Rerank → Lifecycle Decay Boost → Length Norm → Filter
+Query → BM25 FTS ─────┘
+```
+
+- **Búsqueda Vectorial** — similitud semántica mediante LanceDB ANN (distancia coseno)
+- **Búsqueda de Texto Completo BM25** — coincidencia exacta de palabras clave mediante índice FTS de LanceDB
+- **Fusión Híbrida** — puntuación vectorial como base, los resultados de BM25 reciben un impulso ponderado (no es RRF estándar — ajustado para calidad de recuperación en el mundo real)
+- **Pesos Configurables** — `vectorWeight`, `bm25Weight`, `minScore`
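Como referencia, la idea de la fusión puede esbozarse así (esquema ilustrativo, no el código real del plugin; se asume que la puntuación BM25 ya llega normalizada a [0, 1]):

```typescript
// Esbozo de la fusión híbrida: la puntuación vectorial es la base y
// los aciertos de BM25 reciben un impulso ponderado (no es RRF estándar).
interface Hit {
  id: string;
  vectorScore: number; // similitud coseno en [0, 1]
  bm25Score?: number;  // presente solo si hubo coincidencia FTS (normalizada — supuesto)
}

function fuse(hits: Hit[], vectorWeight = 0.7, bm25Weight = 0.3) {
  return hits
    .map((h) => ({
      id: h.id,
      score: vectorWeight * h.vectorScore + bm25Weight * (h.bm25Score ?? 0),
    }))
    .sort((a, b) => b.score - a.score);
}
```

Con los pesos predeterminados, una coincidencia exacta de palabras clave puede adelantar a un resultado puramente semántico aunque su similitud vectorial sea menor.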
+
+### Reranking con Cross-Encoder
+
+- Adaptadores integrados para **Jina**, **SiliconFlow**, **Voyage AI** y **Pinecone**
+- Compatible con cualquier endpoint compatible con Jina (por ejemplo, Hugging Face TEI, DashScope)
+- Puntuación híbrida: 60% cross-encoder + 40% puntuación fusionada original
+- Degradación elegante: recurre a similitud coseno en caso de fallo de la API
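La mezcla 60/40 y la degradación elegante pueden ilustrarse con un esquema como este (el nombre de la función es hipotético):

```typescript
// Mezcla la puntuación del cross-encoder con la puntuación fusionada original.
// Si la API de rerank falló, se conserva la puntuación fusionada (fallback).
function blendRerank(fusedScore: number, crossEncoderScore: number | null): number {
  if (crossEncoderScore === null) return fusedScore;
  return 0.6 * crossEncoderScore + 0.4 * fusedScore;
}
```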
+
+### Pipeline de Puntuación Multi-Etapa
+
+| Etapa | Efecto |
+| --- | --- |
+| **Fusión Híbrida** | Combina recuperación semántica y de coincidencia exacta |
+| **Rerank con Cross-Encoder** | Promueve resultados semánticamente precisos |
+| **Impulso por Decaimiento de Ciclo de Vida** | Frescura Weibull + frecuencia de acceso + importancia × confianza |
+| **Normalización de Longitud** | Evita que entradas largas dominen (ancla: 500 caracteres) |
+| **Puntuación Mínima Estricta** | Elimina resultados irrelevantes (predeterminado: 0.35) |
+| **Diversidad MMR** | Similitud coseno > 0.85 → degradado |
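La normalización de longitud, por ejemplo, puede entenderse con un esbozo como el siguiente (la curva exacta de amortiguación es una suposición ilustrativa; el ancla de 500 caracteres sí es la documentada):

```typescript
// Amortigua la puntuación de entradas más largas que el ancla (500 caracteres)
// para que los textos extensos no dominen el ranking.
function lengthNormalize(score: number, textLength: number, anchor = 500): number {
  if (textLength <= anchor) return score;
  return score * Math.sqrt(anchor / textLength);
}
```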
+
+### Extracción Inteligente de Memoria (v1.1.0)
+
+- **Extracción de 6 Categorías con LLM**: perfil, preferencias, entidades, eventos, casos, patrones
+- **Almacenamiento en Capas L0/L1/L2**: L0 (índice de una oración) → L1 (resumen estructurado) → L2 (narrativa completa)
+- **Deduplicación en Dos Etapas**: pre-filtro de similitud vectorial (≥0.7) → decisión semántica por LLM (CREATE/MERGE/SKIP)
+- **Fusión por Categoría**: `profile` siempre se fusiona, `events`/`cases` son solo de adición
+
+### Gestión del Ciclo de Vida de la Memoria (v1.1.0)
+
+- **Motor de Decaimiento Weibull**: puntuación compuesta = recencia + frecuencia + valor intrínseco
+- **Promoción en Tres Niveles**: `Peripheral ↔ Working ↔ Core` con umbrales configurables
+- **Refuerzo por Acceso**: los recuerdos frecuentemente recuperados decaen más lentamente (estilo repetición espaciada)
+- **Vida Media Modulada por Importancia**: los recuerdos importantes decaen más lentamente
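La forma general del decaimiento (exponencial estirado de Weibull) y de la puntuación compuesta puede esbozarse así — los pesos y la curva de frecuencia son supuestos ilustrativos, no los valores internos exactos:

```typescript
// Retención de Weibull: vale 1 al crearse el recuerdo y 0.5 en la vida media.
// beta < 1 produce una cola larga (los recuerdos core conservan valor mucho
// tiempo); beta > 1 cae de forma más abrupta (peripheral).
function weibullRetention(ageDays: number, halfLifeDays: number, beta: number): number {
  const lambda = halfLifeDays / Math.pow(Math.LN2, 1 / beta);
  return Math.exp(-Math.pow(ageDays / lambda, beta));
}

// Puntuación compuesta = recencia + frecuencia de acceso + importancia × confianza.
function lifecycleScore(
  ageDays: number,
  halfLifeDays: number,
  beta: number,
  accessCount: number,
  importance: number,
  confidence: number,
  frequencyWeight = 0.3,
  intrinsicWeight = 0.3,
): number {
  const recency = weibullRetention(ageDays, halfLifeDays, beta);
  // Curva de saturación hipotética: llega a ~1.0 alrededor de 10 accesos.
  const frequency = Math.min(1, Math.log1p(accessCount) / Math.log1p(10));
  const recencyWeight = 1 - frequencyWeight - intrinsicWeight;
  return (
    recencyWeight * recency +
    frequencyWeight * frequency +
    intrinsicWeight * importance * confidence
  );
}
```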
+
+### Aislamiento Multi-Scope
+
+- Scopes integrados: `global`, `agent:<id>`, `custom:<nombre>`, `project:<nombre>`, `user:<id>`
+- Control de acceso a nivel de agente mediante `scopes.agentAccess`
+- Predeterminado: cada agente accede a `global` + su propio scope `agent:<id>`
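La regla de acceso puede esbozarse así (los nombres de función son hipotéticos; el comportamiento por defecto es el documentado arriba):

```typescript
// Por defecto, un agente ve `global` + su propio scope `agent:<id>`;
// `scopes.agentAccess` permite sobrescribir esa lista por agente.
function allowedScopes(agentId: string, agentAccess: Record<string, string[]> = {}): string[] {
  return agentAccess[agentId] ?? ["global", `agent:${agentId}`];
}

function canRead(agentId: string, scope: string, agentAccess: Record<string, string[]> = {}): boolean {
  return allowedScopes(agentId, agentAccess).includes(scope);
}
```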
+
+### Auto-Capture y Auto-Recall
+
+- **Auto-Capture** (`agent_end`): extrae preferencia/hecho/decisión/entidad de las conversaciones, deduplica, almacena hasta 3 por turno
+- **Auto-Recall** (`before_agent_start`): inyecta un bloque de contexto con los recuerdos relevantes antes de cada respuesta (hasta 3 entradas)
+
+### Filtrado de Ruido y Recuperación Adaptativa
+
+- Filtra contenido de baja calidad: rechazos del agente, meta-preguntas, saludos
+- Omite la recuperación para saludos, comandos slash, confirmaciones simples, emojis
+- Fuerza la recuperación para palabras clave de memoria ("recuerda", "anteriormente", "la última vez")
+- Umbrales adaptados a CJK (chino: 6 caracteres vs inglés: 15 caracteres)
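En conjunto, la puerta de recuperación adaptativa se comporta de forma parecida a este esbozo (las listas de palabras clave y los patrones exactos del plugin pueden diferir):

```typescript
// Decide si una consulta amerita recuperación de memoria.
const MEMORY_KEYWORDS = ["recuerda", "anteriormente", "la última vez"]; // subconjunto ilustrativo

function needsRecall(query: string): boolean {
  const q = query.trim().toLowerCase();
  if (MEMORY_KEYWORDS.some((k) => q.includes(k))) return true; // forzar recuperación
  if (q.startsWith("/")) return false;                         // comandos slash
  if (/^(ok|sí|si|gracias|hola)[.!]?$/.test(q)) return false;  // confirmaciones/saludos
  const hasCJK = /[\u4e00-\u9fff]/.test(q);
  const minLen = hasCJK ? 6 : 15;                              // umbral adaptado a CJK
  return q.length >= minLen;
}
```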
+
+---
+
+
+**Comparación con el `memory-lancedb` integrado**
+
+| Característica | `memory-lancedb` integrado | **memory-lancedb-pro** |
+| --- | :---: | :---: |
+| Búsqueda vectorial | Sí | Sí |
+| Búsqueda de texto completo BM25 | - | Sí |
+| Fusión híbrida (Vector + BM25) | - | Sí |
+| Rerank con cross-encoder (multi-proveedor) | - | Sí |
+| Impulso por recencia y decaimiento temporal | - | Sí |
+| Normalización de longitud | - | Sí |
+| Diversidad MMR | - | Sí |
+| Aislamiento multi-scope | - | Sí |
+| Filtrado de ruido | - | Sí |
+| Recuperación adaptativa | - | Sí |
+| CLI de gestión | - | Sí |
+| Memoria de sesión | - | Sí |
+| Embeddings adaptados a la tarea | - | Sí |
+| **Extracción Inteligente con LLM (6 categorías)** | - | Sí (v1.1.0) |
+| **Decaimiento Weibull + Promoción por Niveles** | - | Sí (v1.1.0) |
+| Cualquier embedding compatible con OpenAI | Limitado | Sí |
+
+
+
+---
+
+## Configuración
+
+
+**Ejemplo de Configuración Completa**
+
+```json
+{
+ "embedding": {
+ "apiKey": "${JINA_API_KEY}",
+ "model": "jina-embeddings-v5-text-small",
+ "baseURL": "https://api.jina.ai/v1",
+ "dimensions": 1024,
+ "taskQuery": "retrieval.query",
+ "taskPassage": "retrieval.passage",
+ "normalized": true
+ },
+ "dbPath": "~/.openclaw/memory/lancedb-pro",
+ "autoCapture": true,
+ "autoRecall": true,
+ "retrieval": {
+ "mode": "hybrid",
+ "vectorWeight": 0.7,
+ "bm25Weight": 0.3,
+ "minScore": 0.3,
+ "rerank": "cross-encoder",
+ "rerankApiKey": "${JINA_API_KEY}",
+ "rerankModel": "jina-reranker-v3",
+ "rerankEndpoint": "https://api.jina.ai/v1/rerank",
+ "rerankProvider": "jina",
+ "candidatePoolSize": 20,
+ "recencyHalfLifeDays": 14,
+ "recencyWeight": 0.1,
+ "filterNoise": true,
+ "lengthNormAnchor": 500,
+ "hardMinScore": 0.35,
+ "timeDecayHalfLifeDays": 60,
+ "reinforcementFactor": 0.5,
+ "maxHalfLifeMultiplier": 3
+ },
+ "enableManagementTools": false,
+ "scopes": {
+ "default": "global",
+ "definitions": {
+ "global": { "description": "Shared knowledge" },
+ "agent:discord-bot": { "description": "Discord bot private" }
+ },
+ "agentAccess": {
+ "discord-bot": ["global", "agent:discord-bot"]
+ }
+ },
+ "sessionMemory": {
+ "enabled": false,
+ "messageCount": 15
+ },
+ "smartExtraction": true,
+ "llm": {
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "gpt-4o-mini",
+ "baseURL": "https://api.openai.com/v1"
+ },
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000
+}
+```
+
+
+
+
+**Proveedores de Embedding**
+
+Funciona con **cualquier API de embedding compatible con OpenAI**:
+
+| Proveedor | Modelo | URL Base | Dimensiones |
+| --- | --- | --- | --- |
+| **Jina** (recomendado) | `jina-embeddings-v5-text-small` | `https://api.jina.ai/v1` | 1024 |
+| **OpenAI** | `text-embedding-3-small` | `https://api.openai.com/v1` | 1536 |
+| **Voyage** | `voyage-4-lite` / `voyage-4` | `https://api.voyageai.com/v1` | 1024 / 1024 |
+| **Google Gemini** | `gemini-embedding-001` | `https://generativelanguage.googleapis.com/v1beta/openai/` | 3072 |
+| **Ollama** (local) | `nomic-embed-text` | `http://localhost:11434/v1` | 768 |
+
+
+
+
+**Proveedores de Rerank**
+
+El reranking con cross-encoder admite múltiples proveedores mediante `rerankProvider`:
+
+| Proveedor | `rerankProvider` | Modelo de Ejemplo |
+| --- | --- | --- |
+| **Jina** (predeterminado) | `jina` | `jina-reranker-v3` |
+| **SiliconFlow** (nivel gratuito disponible) | `siliconflow` | `BAAI/bge-reranker-v2-m3` |
+| **Voyage AI** | `voyage` | `rerank-2.5` |
+| **Pinecone** | `pinecone` | `bge-reranker-v2-m3` |
+
+Cualquier endpoint de rerank compatible con Jina también funciona — configura `rerankProvider: "jina"` y apunta `rerankEndpoint` a tu servicio (por ejemplo, Hugging Face TEI, DashScope `qwen3-rerank`).
+
+
+
+
+**Smart Extraction (LLM) — v1.1.0**
+
+Cuando `smartExtraction` está habilitado (predeterminado: `true`), el plugin utiliza un LLM para extraer y clasificar recuerdos de forma inteligente en lugar de disparadores basados en regex.
+
+| Campo | Tipo | Predeterminado | Descripción |
+|-------|------|----------------|-------------|
+| `smartExtraction` | boolean | `true` | Habilitar/deshabilitar la extracción de 6 categorías impulsada por LLM |
+| `llm.auth` | string | `api-key` | `api-key` usa `llm.apiKey` / `embedding.apiKey`; `oauth` usa un archivo de token OAuth con alcance de plugin por defecto |
+| `llm.apiKey` | string | *(recurre a `embedding.apiKey`)* | Clave API para el proveedor de LLM |
+| `llm.model` | string | `openai/gpt-oss-120b` | Nombre del modelo LLM |
+| `llm.baseURL` | string | *(recurre a `embedding.baseURL`)* | Endpoint de la API del LLM |
+| `llm.oauthProvider` | string | `openai-codex` | ID del proveedor OAuth usado cuando `llm.auth` es `oauth` |
+| `llm.oauthPath` | string | `~/.openclaw/.memory-lancedb-pro/oauth.json` | Archivo de token OAuth usado cuando `llm.auth` es `oauth` |
+| `llm.timeoutMs` | number | `30000` | Tiempo de espera de solicitud LLM en milisegundos |
+| `extractMinMessages` | number | `2` | Mensajes mínimos antes de que se active la extracción |
+| `extractMaxChars` | number | `8000` | Máximo de caracteres enviados al LLM |
+
+
+Configuración de `llm` con OAuth (usa la caché de inicio de sesión existente de Codex / ChatGPT para llamadas al LLM):
+```json
+{
+ "llm": {
+ "auth": "oauth",
+ "oauthProvider": "openai-codex",
+ "model": "gpt-5.4",
+ "oauthPath": "${HOME}/.openclaw/.memory-lancedb-pro/oauth.json",
+ "timeoutMs": 30000
+ }
+}
+```
+
+Notas para `llm.auth: "oauth"`:
+
+- `llm.oauthProvider` es actualmente `openai-codex`.
+- Los tokens OAuth se almacenan por defecto en `~/.openclaw/.memory-lancedb-pro/oauth.json`.
+- Puedes configurar `llm.oauthPath` si deseas almacenar ese archivo en otra ubicación.
+- `auth login` guarda una copia de la configuración anterior de `llm` con api-key junto al archivo OAuth, y `auth logout` restaura esa copia cuando está disponible.
+- Cambiar de `api-key` a `oauth` no transfiere automáticamente `llm.baseURL`. Configúralo manualmente en modo OAuth solo cuando intencionalmente quieras un backend personalizado compatible con ChatGPT/Codex.
+
+
+
+
+**Configuración del Ciclo de Vida (Decaimiento + Nivel)**
+
+| Campo | Predeterminado | Descripción |
+|-------|----------------|-------------|
+| `decay.recencyHalfLifeDays` | `30` | Vida media base para el decaimiento de recencia Weibull |
+| `decay.frequencyWeight` | `0.3` | Peso de la frecuencia de acceso en la puntuación compuesta |
+| `decay.intrinsicWeight` | `0.3` | Peso de `importancia × confianza` |
+| `decay.betaCore` | `0.8` | Beta de Weibull para memorias `core` |
+| `decay.betaWorking` | `1.0` | Beta de Weibull para memorias `working` |
+| `decay.betaPeripheral` | `1.3` | Beta de Weibull para memorias `peripheral` |
+| `tier.coreAccessThreshold` | `10` | Mínimo de recuperaciones antes de promover a `core` |
+| `tier.peripheralAgeDays` | `60` | Umbral de antigüedad para degradar memorias inactivas |
+
+
+
+
+**Refuerzo por Acceso**
+
+Los recuerdos frecuentemente recuperados decaen más lentamente (estilo repetición espaciada).
+
+Claves de configuración (bajo `retrieval`):
+- `reinforcementFactor` (0-2, predeterminado: `0.5`) — establece `0` para deshabilitar
+- `maxHalfLifeMultiplier` (1-10, predeterminado: `3`) — límite máximo de vida media efectiva
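Un esbozo de cómo podrían combinarse ambos parámetros (la curva logarítmica es una suposición ilustrativa; el tope `maxHalfLifeMultiplier` sí es el documentado):

```typescript
// Cada recuperación alarga la vida media efectiva, con un tope máximo
// (estilo repetición espaciada).
function effectiveHalfLife(
  baseHalfLifeDays: number,
  accessCount: number,
  reinforcementFactor = 0.5,
  maxHalfLifeMultiplier = 3,
): number {
  const multiplier = Math.min(
    1 + reinforcementFactor * Math.log1p(accessCount),
    maxHalfLifeMultiplier,
  );
  return baseHalfLifeDays * multiplier;
}
```

Con `reinforcementFactor: 0` la vida media nunca cambia; con muchos accesos queda limitada a `base × maxHalfLifeMultiplier`.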
+
+
+
+---
+
+## Comandos CLI
+
+```bash
+openclaw memory-pro list [--scope global] [--category fact] [--limit 20] [--json]
+openclaw memory-pro search "query" [--scope global] [--limit 10] [--json]
+openclaw memory-pro stats [--scope global] [--json]
+openclaw memory-pro auth login [--provider openai-codex] [--model gpt-5.4] [--oauth-path /abs/path/oauth.json]
+openclaw memory-pro auth status
+openclaw memory-pro auth logout
+openclaw memory-pro delete
+openclaw memory-pro delete-bulk --scope global [--before 2025-01-01] [--dry-run]
+openclaw memory-pro export [--scope global] [--output memories.json]
+openclaw memory-pro import memories.json [--scope global] [--dry-run]
+openclaw memory-pro reembed --source-db /path/to/old-db [--batch-size 32] [--skip-existing]
+openclaw memory-pro upgrade [--dry-run] [--batch-size 10] [--no-llm] [--limit N] [--scope SCOPE]
+openclaw memory-pro migrate check|run|verify [--source /path]
+```
+
+Flujo de inicio de sesión OAuth:
+
+1. Ejecuta `openclaw memory-pro auth login`
+2. Si se omite `--provider` en una terminal interactiva, la CLI muestra un selector de proveedor OAuth antes de abrir el navegador
+3. El comando imprime una URL de autorización y abre tu navegador a menos que se establezca `--no-browser`
+4. Después de que la devolución de llamada sea exitosa, el comando guarda el archivo OAuth del plugin (predeterminado: `~/.openclaw/.memory-lancedb-pro/oauth.json`), guarda una copia de la configuración anterior de `llm` con api-key para el cierre de sesión, y reemplaza la configuración `llm` del plugin con la configuración OAuth (`auth`, `oauthProvider`, `model`, `oauthPath`)
+5. `openclaw memory-pro auth logout` elimina ese archivo OAuth y restaura la configuración anterior de `llm` con api-key cuando esa copia existe
+
+---
+
+## Temas Avanzados
+
+
+**Si los recuerdos inyectados aparecen en las respuestas**
+
+A veces el modelo puede repetir literalmente el bloque de memoria inyectado.
+
+**Opción A (menor riesgo):** deshabilitar temporalmente la recuperación automática:
+```json
+{ "plugins": { "entries": { "memory-lancedb-pro": { "config": { "autoRecall": false } } } } }
+```
+
+**Opción B (preferida):** mantener la recuperación automática y agregar al prompt del sistema del agente:
+> No reveles ni cites en tus respuestas el contenido del bloque de memoria inyectado. Úsalo solo como referencia interna.
+
+
+
+
+**Memoria de Sesión**
+
+- Se activa con el comando `/new` — guarda el resumen de la sesión anterior en LanceDB
+- Deshabilitado por defecto (OpenClaw ya tiene persistencia nativa de sesión en `.jsonl`)
+- Cantidad de mensajes configurable (predeterminado: 15)
+
+Consulta [docs/openclaw-integration-playbook.md](docs/openclaw-integration-playbook.md) para los modos de despliegue y la verificación de `/new`.
+
+
+
+
+**Comandos Slash Personalizados (por ejemplo, `/lesson`)**
+
+Agrega a tu `CLAUDE.md`, `AGENTS.md` o prompt del sistema (el bloque se mantiene en inglés para que el agente lo interprete correctamente):
+
+```markdown
+## /lesson command
+When the user sends `/lesson <content>`:
+1. Use memory_store to save as category=fact (raw knowledge)
+2. Use memory_store to save as category=decision (actionable takeaway)
+3. Confirm what was saved
+
+## /remember command
+When the user sends `/remember <content>`:
+1. Use memory_store to save with appropriate category and importance
+2. Confirm with the stored memory ID
+```
+
+
+
+
+**Reglas de Hierro para Agentes de IA**
+
+> Copia el bloque de abajo en tu `AGENTS.md` para que tu agente aplique estas reglas automáticamente. Se mantiene en inglés porque es instrucción directa para el modelo.
+
+```markdown
+## Rule 1 — Dual-layer memory storage
+Every pitfall/lesson learned → IMMEDIATELY store TWO memories:
+- Technical layer: Pitfall: [symptom]. Cause: [root cause]. Fix: [solution]. Prevention: [how to avoid]
+ (category: fact, importance >= 0.8)
+- Principle layer: Decision principle ([tag]): [behavioral rule]. Trigger: [when]. Action: [what to do]
+ (category: decision, importance >= 0.85)
+
+## Rule 2 — LanceDB hygiene
+Entries must be short and atomic (< 500 chars). No raw conversation summaries or duplicates.
+
+## Rule 3 — Recall before retry
+On ANY tool failure, ALWAYS memory_recall with relevant keywords BEFORE retrying.
+
+## Rule 4 — Confirm target codebase
+Confirm you are editing memory-lancedb-pro vs built-in memory-lancedb before changes.
+
+## Rule 5 — Clear jiti cache after plugin code changes
+After modifying .ts files under plugins/, MUST run rm -rf /tmp/jiti/ BEFORE openclaw gateway restart.
+```
+
+
+
+
+**Esquema de la Base de Datos**
+
+Tabla LanceDB `memories`:
+
+| Campo | Tipo | Descripción |
+| --- | --- | --- |
+| `id` | string (UUID) | Clave primaria |
+| `text` | string | Texto del recuerdo (indexado con FTS) |
+| `vector` | float[] | Vector de embedding |
+| `category` | string | Categoría de almacenamiento: `preference` / `fact` / `decision` / `entity` / `reflection` / `other` |
+| `scope` | string | Identificador de scope (por ejemplo, `global`, `agent:main`) |
+| `importance` | float | Puntuación de importancia 0-1 |
+| `timestamp` | int64 | Marca de tiempo de creación (ms) |
+| `metadata` | string (JSON) | Metadatos extendidos |
+
+Claves comunes de `metadata` en v1.1.0: `l0_abstract`, `l1_overview`, `l2_content`, `memory_category`, `tier`, `access_count`, `confidence`, `last_accessed_at`
+
+> **Nota sobre categorías:** El campo de nivel superior `category` usa 6 categorías de almacenamiento. Las 6 etiquetas semánticas de categoría de Smart Extraction (`profile` / `preferences` / `entities` / `events` / `cases` / `patterns`) se almacenan en `metadata.memory_category`.
+
+
+
+
+**Solución de Problemas**
+
+### "Cannot mix BigInt and other types" (LanceDB / Apache Arrow)
+
+En LanceDB 0.26+, algunas columnas numéricas pueden devolverse como `BigInt`. Actualiza a **memory-lancedb-pro >= 1.0.14** — este plugin ahora convierte los valores usando `Number(...)` antes de realizar operaciones aritméticas.
+
+
+
+---
+
+## Documentación
+
+| Documento | Descripción |
+| --- | --- |
+| [Manual de Integración con OpenClaw](docs/openclaw-integration-playbook.md) | Modos de despliegue, verificación, matriz de regresión |
+| [Análisis de la Arquitectura de Memoria](docs/memory_architecture_analysis.md) | Análisis detallado de la arquitectura completa |
+| [CHANGELOG v1.1.0](docs/CHANGELOG-v1.1.0.md) | Cambios de comportamiento en v1.1.0 y justificación de la actualización |
+| [Fragmentación de Contexto Largo](docs/long-context-chunking.md) | Estrategia de fragmentación para documentos largos |
+
+---
+
+## Beta: Smart Memory v1.1.0
+
+> Estado: Beta — disponible mediante `npm i memory-lancedb-pro@beta`. Los usuarios estables en `latest` no se ven afectados.
+
+| Característica | Descripción |
+|----------------|-------------|
+| **Smart Extraction** | Extracción de 6 categorías impulsada por LLM con metadatos L0/L1/L2. Recurre a regex cuando está deshabilitado. |
+| **Puntuación de Ciclo de Vida** | Decaimiento Weibull integrado en la recuperación — los recuerdos de alta frecuencia y alta importancia se clasifican mejor. |
+| **Gestión de Niveles** | Sistema de tres niveles (Core → Working → Peripheral) con promoción/degradación automática. |
+
+Comentarios: [GitHub Issues](https://github.com/CortexReach/memory-lancedb-pro/issues) · Revertir: `npm i memory-lancedb-pro@latest`
+
+---
+
+## Dependencias
+
+| Paquete | Propósito |
+| --- | --- |
+| `@lancedb/lancedb` ≥0.26.2 | Base de datos vectorial (ANN + FTS) |
+| `openai` ≥6.21.0 | Cliente de API de Embedding compatible con OpenAI |
+| `@sinclair/typebox` 0.34.48 | Definiciones de tipos con JSON Schema |
+
+---
+
+## Contributors
+
+Full list: [Contributors](https://github.com/CortexReach/memory-lancedb-pro/graphs/contributors)
+
+## Star History
+
+## Licencia
+
+MIT
+
+---
+
+## Mi Código QR de WeChat
+
+
diff --git a/README_FR.md b/README_FR.md
new file mode 100644
index 00000000..19f5fc45
--- /dev/null
+++ b/README_FR.md
@@ -0,0 +1,773 @@
+
+
+# 🧠 memory-lancedb-pro · 🦞OpenClaw Plugin
+
+**Assistant Mémoire IA pour les Agents [OpenClaw](https://github.com/openclaw/openclaw)**
+
+*Donnez à votre agent IA un cerveau qui se souvient vraiment — entre les sessions, entre les agents, dans le temps.*
+
+Un plugin de mémoire long terme pour OpenClaw basé sur LanceDB qui stocke les préférences, les décisions et le contexte du projet, puis les rappelle automatiquement dans les sessions futures.
+
+[](https://github.com/openclaw/openclaw)
+[](https://www.npmjs.com/package/memory-lancedb-pro)
+[](https://lancedb.com)
+[](LICENSE)
+
+[English](README.md) | [简体中文](README_CN.md) | [繁體中文](README_TW.md) | [日本語](README_JA.md) | [한국어](README_KO.md) | [Français](README_FR.md) | [Español](README_ES.md) | [Deutsch](README_DE.md) | [Italiano](README_IT.md) | [Русский](README_RU.md) | [Português (Brasil)](README_PT-BR.md)
+
+
+
+---
+
+## Pourquoi memory-lancedb-pro ?
+
+La plupart des agents IA souffrent d'amnésie. Ils oublient tout dès que vous démarrez une nouvelle conversation.
+
+**memory-lancedb-pro** est un plugin de mémoire long terme de niveau production pour OpenClaw qui transforme votre agent en un véritable **Assistant Mémoire IA** — il capture automatiquement ce qui compte, laisse le bruit s'estomper naturellement et retrouve le bon souvenir au bon moment. Pas d'étiquetage manuel, pas de configuration compliquée.
+
+### Votre Assistant Mémoire IA en action
+
+**Sans mémoire — chaque session repart de zéro :**
+
+> **Vous :** « Utilise des tabulations pour l'indentation, ajoute toujours la gestion d'erreurs. »
+> *(session suivante)*
+> **Vous :** « Je t'ai déjà dit — des tabulations, pas des espaces ! » 😤
+> *(session suivante)*
+> **Vous :** « …sérieusement, des tabulations. Et la gestion d'erreurs. Encore. »
+
+**Avec memory-lancedb-pro — votre agent apprend et se souvient :**
+
+> **Vous :** « Utilise des tabulations pour l'indentation, ajoute toujours la gestion d'erreurs. »
+> *(session suivante — l'agent rappelle automatiquement vos préférences)*
+> **Agent :** *(applique silencieusement tabulations + gestion d'erreurs)* ✅
+> **Vous :** « Pourquoi avons-nous choisi PostgreSQL plutôt que MongoDB le mois dernier ? »
+> **Agent :** « Selon notre discussion du 12 février, les raisons principales étaient… » ✅
+
+Voilà la différence que fait un **Assistant Mémoire IA** — il apprend votre style, rappelle les décisions passées et fournit des réponses personnalisées sans que vous ayez à vous répéter.
+
+### Que peut-il faire d'autre ?
+
+| | Ce que vous obtenez |
+|---|---|
+| **Capture automatique** | Votre agent apprend de chaque conversation — pas besoin de `memory_store` manuel |
+| **Extraction intelligente** | Classification LLM en 6 catégories : profils, préférences, entités, événements, cas, patterns |
+| **Oubli intelligent** | Modèle de décroissance Weibull — les souvenirs importants restent, le bruit s'estompe |
+| **Recherche hybride** | Recherche vectorielle + BM25 plein texte, fusionnée avec un reranking cross-encoder |
+| **Injection de contexte** | Les souvenirs pertinents remontent automatiquement avant chaque réponse |
+| **Isolation multi-scope** | Limites mémoire par agent, par utilisateur, par projet |
+| **Tout fournisseur** | OpenAI, Jina, Gemini, Ollama ou toute API compatible OpenAI |
+| **Boîte à outils complète** | CLI, sauvegarde, migration, mise à niveau, export/import — prêt pour la production |
+
+---
+
+## Démarrage rapide
+
+### Option A : Script d'installation en un clic (recommandé)
+
+Le **[script d'installation](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)** maintenu par la communauté gère l'installation, la mise à niveau et la réparation en une seule commande :
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/CortexReach/toolbox/main/memory-lancedb-pro-setup/setup-memory.sh -o setup-memory.sh
+bash setup-memory.sh
+```
+
+> Consultez [Écosystème](#écosystème) ci-dessous pour la liste complète des scénarios couverts et les autres outils communautaires.
+
+### Option B : Installation manuelle
+
+**Via OpenClaw CLI (recommandé) :**
+```bash
+openclaw plugins install memory-lancedb-pro@beta
+```
+
+**Ou via npm :**
+```bash
+npm i memory-lancedb-pro@beta
+```
+> Si vous utilisez npm, vous devrez également ajouter le répertoire d'installation du plugin comme chemin **absolu** dans `plugins.load.paths` de votre `openclaw.json`. C'est le problème de configuration le plus courant.
+
+Ajoutez à votre `openclaw.json` :
+
+```json
+{
+ "plugins": {
+ "slots": { "memory": "memory-lancedb-pro" },
+ "entries": {
+ "memory-lancedb-pro": {
+ "enabled": true,
+ "config": {
+ "embedding": {
+ "provider": "openai-compatible",
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "text-embedding-3-small"
+ },
+ "autoCapture": true,
+ "autoRecall": true,
+ "smartExtraction": true,
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000,
+ "sessionMemory": { "enabled": false }
+ }
+ }
+ }
+ }
+}
+```
+
+**Pourquoi ces valeurs par défaut ?**
+- `autoCapture` + `smartExtraction` → votre agent apprend automatiquement de chaque conversation
+- `autoRecall` → les souvenirs pertinents sont injectés avant chaque réponse
+- `extractMinMessages: 2` → l'extraction se déclenche dans les conversations normales à deux tours
+- `sessionMemory.enabled: false` → évite de polluer la recherche avec des résumés de session au début
+
+Validez et redémarrez :
+
+```bash
+openclaw config validate
+openclaw gateway restart
+openclaw logs --follow --plain | grep "memory-lancedb-pro"
+```
+
+Vous devriez voir :
+- `memory-lancedb-pro: smart extraction enabled`
+- `memory-lancedb-pro@...: plugin registered`
+
+Terminé ! Votre agent dispose maintenant d'une mémoire long terme.
+
+
+Plus de chemins d'installation (utilisateurs existants, mises à niveau)
+
+**Déjà utilisateur d'OpenClaw ?**
+
+1. Ajoutez le plugin avec un chemin **absolu** dans `plugins.load.paths`
+2. Liez le slot mémoire : `plugins.slots.memory = "memory-lancedb-pro"`
+3. Vérifiez : `openclaw plugins info memory-lancedb-pro && openclaw memory-pro stats`
+
+**Mise à niveau depuis une version antérieure à v1.1.0 ?**
+
+```bash
+# 1) Sauvegarde
+openclaw memory-pro export --scope global --output memories-backup.json
+# 2) Simulation
+openclaw memory-pro upgrade --dry-run
+# 3) Exécution de la mise à niveau
+openclaw memory-pro upgrade
+# 4) Vérification
+openclaw memory-pro stats
+```
+
+Consultez `CHANGELOG-v1.1.0.md` pour les changements de comportement et la justification de la mise à niveau.
+
+
+
+
+Import rapide Telegram Bot (cliquez pour développer)
+
+Si vous utilisez l'intégration Telegram d'OpenClaw, le plus simple est d'envoyer une commande d'import directement au Bot principal au lieu de modifier manuellement la configuration.
+
+Envoyez ce message :
+
+```text
+Help me connect this memory plugin with the most user-friendly configuration: https://github.com/CortexReach/memory-lancedb-pro
+
+Requirements:
+1. Set it as the only active memory plugin
+2. Use Jina for embedding
+3. Use Jina for reranker
+4. Use gpt-4o-mini for the smart-extraction LLM
+5. Enable autoCapture, autoRecall, smartExtraction
+6. extractMinMessages=2
+7. sessionMemory.enabled=false
+8. captureAssistant=false
+9. retrieval mode=hybrid, vectorWeight=0.7, bm25Weight=0.3
+10. rerank=cross-encoder, candidatePoolSize=12, minScore=0.6, hardMinScore=0.62
+11. Generate the final openclaw.json config directly, not just an explanation
+```
+
+
+
+---
+
+## Écosystème
+
+memory-lancedb-pro est le plugin principal. La communauté a construit des outils autour pour faciliter l'installation et l'utilisation quotidienne :
+
+### Script d'installation — Installation, mise à niveau et réparation en un clic
+
+> **[CortexReach/toolbox/memory-lancedb-pro-setup](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)**
+
+Pas un simple installateur — le script gère intelligemment de nombreux scénarios réels :
+
+| Votre situation | Ce que fait le script |
+|---|---|
+| Jamais installé | Téléchargement → installation des dépendances → choix de la config → écriture dans openclaw.json → redémarrage |
+| Installé via `git clone`, bloqué sur un ancien commit | `git fetch` + `checkout` automatique vers la dernière version → réinstallation des dépendances → vérification |
+| La config contient des champs invalides | Détection automatique via filtre de schéma, suppression des champs non supportés |
+| Installé via `npm` | Saute la mise à jour git, rappelle d'exécuter `npm update` soi-même |
+| CLI `openclaw` cassé à cause d'une config invalide | Solution de repli : lecture directe du chemin workspace depuis le fichier `openclaw.json` |
+| `extensions/` au lieu de `plugins/` | Détection automatique de l'emplacement du plugin depuis la config ou le système de fichiers |
+| Déjà à jour | Exécution des vérifications de santé uniquement, aucune modification |
+
+```bash
+bash setup-memory.sh # Installer ou mettre à niveau
+bash setup-memory.sh --dry-run # Aperçu uniquement
+bash setup-memory.sh --beta # Inclure les versions pré-release
+bash setup-memory.sh --uninstall # Restaurer la config et supprimer le plugin
+```
+
+Presets de fournisseurs intégrés : **Jina / DashScope / SiliconFlow / OpenAI / Ollama**, ou apportez votre propre API compatible OpenAI. Pour l'utilisation complète (incluant `--ref`, `--selfcheck-only`, etc.), consultez le [README du script d'installation](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup).
+
+### Claude Code / OpenClaw Skill — Configuration guidée par IA
+
+> **[CortexReach/memory-lancedb-pro-skill](https://github.com/CortexReach/memory-lancedb-pro-skill)**
+
+Installez ce Skill et votre agent IA (Claude Code ou OpenClaw) acquiert une connaissance approfondie de toutes les fonctionnalités de memory-lancedb-pro. Dites simplement **« aide-moi à activer la meilleure config »** et obtenez :
+
+- **Workflow de configuration guidé en 7 étapes** avec 4 plans de déploiement :
+ - Full Power (Jina + OpenAI) / Budget (reranker SiliconFlow gratuit) / Simple (OpenAI uniquement) / Entièrement local (Ollama, zéro coût API)
+- **Les 9 outils MCP** utilisés correctement : `memory_recall`, `memory_store`, `memory_forget`, `memory_update`, `memory_stats`, `memory_list`, `self_improvement_log`, `self_improvement_extract_skill`, `self_improvement_review` *(l'ensemble complet nécessite `enableManagementTools: true` — la config Quick Start par défaut expose les 4 outils principaux)*
+- **Évitement des pièges courants** : activation du plugin workspace, `autoRecall` par défaut à false, cache jiti, variables d'environnement, isolation des scopes, etc.
+
+**Installation pour Claude Code :**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.claude/skills/memory-lancedb-pro
+```
+
+**Installation pour OpenClaw :**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.openclaw/workspace/skills/memory-lancedb-pro-skill
+```
+
+---
+
+## Tutoriel vidéo
+
+> Présentation complète : installation, configuration et fonctionnement interne de la recherche hybride.
+
+[](https://youtu.be/MtukF1C8epQ)
+**https://youtu.be/MtukF1C8epQ**
+
+[](https://www.bilibili.com/video/BV1zUf2BGEgn/)
+**https://www.bilibili.com/video/BV1zUf2BGEgn/**
+
+---
+
+## Architecture
+
+```
+┌─────────────────────────────────────────────────────────┐
+│                index.ts (Point d'entrée)                │
+│    Enregistrement du plugin · Parsing config · Hooks    │
+└────────┬──────────┬──────────┬──────────┬───────────────┘
+         │          │          │          │
+    ┌────▼───┐ ┌────▼───┐ ┌────▼────┐  ┌──▼──────────┐
+    │ store  │ │embedder│ │retriever│  │   scopes    │
+    │  .ts   │ │  .ts   │ │   .ts   │  │    .ts      │
+    └────┬───┘ └────────┘ └────┬────┘  └─────────────┘
+         │                     │
+    ┌────▼───┐           ┌─────▼──────────┐
+    │migrate │           │noise-filter.ts │
+    │  .ts   │           │adaptive-       │
+    └────────┘           │retrieval.ts    │
+                         └────────────────┘
+    ┌─────────────┐   ┌──────────┐
+    │  tools.ts   │   │  cli.ts  │
+    │ (API Agent) │   │  (CLI)   │
+    └─────────────┘   └──────────┘
+```
+
+> Pour une analyse approfondie de l'architecture complète, consultez [docs/memory_architecture_analysis.md](docs/memory_architecture_analysis.md).
+
+
+Référence des fichiers (cliquez pour développer)
+
+| Fichier | Rôle |
+| --- | --- |
+| `index.ts` | Point d'entrée du plugin. S'enregistre auprès de l'API Plugin OpenClaw, parse la config, monte les hooks de cycle de vie |
+| `openclaw.plugin.json` | Métadonnées du plugin + déclaration complète du JSON Schema de config |
+| `cli.ts` | Commandes CLI : `memory-pro list/search/stats/delete/delete-bulk/export/import/reembed/upgrade/migrate` |
+| `src/store.ts` | Couche de stockage LanceDB. Création de tables / Indexation FTS / Recherche vectorielle / Recherche BM25 / CRUD |
+| `src/embedder.ts` | Abstraction d'embedding. Compatible avec tout fournisseur API compatible OpenAI |
+| `src/retriever.ts` | Moteur de recherche hybride. Vectoriel + BM25 → Fusion hybride → Rerank → Décroissance cycle de vie → Filtre |
+| `src/scopes.ts` | Contrôle d'accès multi-scope |
+| `src/tools.ts` | Définitions des outils agent : `memory_recall`, `memory_store`, `memory_forget`, `memory_update` + outils de gestion |
+| `src/noise-filter.ts` | Filtre les refus d'agent, les méta-questions, les salutations et le contenu de faible qualité |
+| `src/adaptive-retrieval.ts` | Détermine si une requête nécessite une recherche en mémoire |
+| `src/migrate.ts` | Migration depuis `memory-lancedb` intégré vers Pro |
+| `src/smart-extractor.ts` | Extraction LLM en 6 catégories avec stockage L0/L1/L2 et déduplication en deux étapes |
+| `src/decay-engine.ts` | Modèle de décroissance exponentielle étirée Weibull |
+| `src/tier-manager.ts` | Promotion/rétrogradation à trois niveaux : Périphérique ↔ Travail ↔ Noyau |
+
+
+
+---
+
+## Fonctionnalités principales
+
+### Recherche hybride
+
+```
+Requête → embedQuery() ─┐
+                        ├─→ Fusion hybride → Rerank → Boost décroissance → Normalisation longueur → Filtre
+Requête → BM25 FTS ─────┘
+```
+
+- **Recherche vectorielle** — similarité sémantique via LanceDB ANN (distance cosinus)
+- **Recherche plein texte BM25** — correspondance exacte de mots-clés via l'index FTS de LanceDB
+- **Fusion hybride** — score vectoriel comme base, les résultats BM25 reçoivent un boost pondéré (pas du RRF standard — optimisé pour la qualité de rappel réelle)
+- **Poids configurables** — `vectorWeight`, `bm25Weight`, `minScore`
+
+### Reranking Cross-Encoder
+
+- Adaptateurs intégrés pour **Jina**, **SiliconFlow**, **Voyage AI** et **Pinecone**
+- Compatible avec tout endpoint compatible Jina (ex. Hugging Face TEI, DashScope)
+- Scoring hybride : 60% cross-encoder + 40% score fusionné original
+- Dégradation gracieuse : repli sur la similarité cosinus en cas d'échec API
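+
+À titre d'illustration uniquement (noms de fonctions hypothétiques, pas l'API réelle du plugin), la fusion pondérée puis le blend 60/40 décrits ci-dessus peuvent se résumer ainsi :
+
```javascript
// Esquisse hypothétique de la fusion hybride décrite ci-dessus.
// Poids par défaut repris de la config documentée (vectorWeight/bm25Weight).
function fuseScores(vectorScore, bm25Score, vectorWeight = 0.7, bm25Weight = 0.3) {
  // Le score vectoriel sert de base ; un hit BM25 ajoute un boost pondéré.
  return vectorWeight * vectorScore + bm25Weight * bm25Score;
}

// Scoring hybride du reranking : 60 % cross-encoder + 40 % score fusionné.
function blendWithReranker(crossEncoderScore, fusedScore) {
  return 0.6 * crossEncoderScore + 0.4 * fusedScore;
}

const fused = fuseScores(0.82, 0.6);          // correspondance sémantique + mot-clé
const blended = blendWithReranker(0.9, fused);
console.log(fused.toFixed(3), blended.toFixed(3));
```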
+
+### Pipeline de scoring multi-étapes
+
+| Étape | Effet |
+| --- | --- |
+| **Fusion hybride** | Combine rappel sémantique et correspondance exacte |
+| **Rerank Cross-Encoder** | Promeut les résultats sémantiquement précis |
+| **Boost décroissance cycle de vie** | Fraîcheur Weibull + fréquence d'accès + importance × confiance |
+| **Normalisation de longueur** | Empêche les entrées longues de dominer (ancre : 500 caractères) |
+| **Score minimum dur** | Supprime les résultats non pertinents (par défaut : 0.35) |
+| **Diversité MMR** | Similarité cosinus > 0.85 → rétrogradé |
+
+### Extraction mémoire intelligente (v1.1.0)
+
+- **Extraction LLM en 6 catégories** : profil, préférences, entités, événements, cas, patterns
+- **Stockage par couches L0/L1/L2** : L0 (index en une phrase) → L1 (résumé structuré) → L2 (récit complet)
+- **Déduplication en deux étapes** : pré-filtre de similarité vectorielle (≥0.7) → décision sémantique LLM (CREATE/MERGE/SKIP)
+- **Fusion sensible aux catégories** : `profile` fusionne toujours, `events`/`cases` en ajout uniquement
+
+### Gestion du cycle de vie mémoire (v1.1.0)
+
+- **Moteur de décroissance Weibull** : score composite = fraîcheur + fréquence + valeur intrinsèque
+- **Promotion à trois niveaux** : `Périphérique ↔ Travail ↔ Noyau` avec seuils configurables
+- **Renforcement par accès** : les souvenirs fréquemment rappelés décroissent plus lentement (style répétition espacée)
+- **Demi-vie modulée par l'importance** : les souvenirs importants décroissent plus lentement
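+
+La paramétrisation exacte du plugin n'est pas reproduite ici, mais la forme usuelle d'une décroissance de Weibull (exponentielle étirée) à demi-vie s'esquisse ainsi :
+
```javascript
// Esquisse : fraîcheur(t) = exp(-ln(2) · (t / demiVie)^beta).
// À t = demiVie, la fraîcheur vaut 0.5 ; beta > 1 accélère l'oubli au-delà.
function freshness(ageDays, halfLifeDays = 30, beta = 1.0) {
  return Math.exp(-Math.LN2 * Math.pow(ageDays / halfLifeDays, beta));
}

console.log(freshness(30).toFixed(2)); // "0.50" : une demi-vie écoulée
// Un beta périphérique (1.3) décroît plus vite qu'un beta noyau (0.8) après la demi-vie :
console.log(freshness(60, 30, 1.3) < freshness(60, 30, 0.8)); // true
```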
+
+### Isolation multi-scope
+
+- Scopes intégrés : `global`, `agent:`, `custom:`, `project:`, `user:`
+- Contrôle d'accès au niveau agent via `scopes.agentAccess`
+- Par défaut : chaque agent accède à `global` + son propre scope `agent:`
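+
+À titre d'exemple, un découpage de scopes élargi pourrait ressembler à ceci (les noms `project:website` et `user:alice` sont illustratifs) :
+
```json
{
  "scopes": {
    "default": "global",
    "agentAccess": {
      "discord-bot": ["global", "agent:discord-bot", "project:website"],
      "main": ["global", "agent:main", "user:alice"]
    }
  }
}
```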
+
+### Capture automatique et rappel automatique
+
+- **Capture auto** (`agent_end`) : extrait préférences/faits/décisions/entités des conversations, déduplique, stocke jusqu'à 3 par tour
+- **Rappel auto** (`before_agent_start`) : injecte les souvenirs pertinents dans le contexte (jusqu'à 3 entrées)
+
+### Filtrage du bruit et recherche adaptative
+
+- Filtre le contenu de faible qualité : refus d'agent, méta-questions, salutations
+- Ignore la recherche pour : salutations, commandes slash, confirmations simples, emoji
+- Force la recherche pour les mots-clés mémoire (« souviens-toi », « précédemment », « la dernière fois »)
+- Seuils CJK (chinois : 6 caractères vs anglais : 15 caractères)
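+
+La logique de déclenchement peut s'illustrer ainsi (simplification hypothétique, pas le code réel de `adaptive-retrieval.ts`) :
+
```javascript
// Esquisse : faut-il interroger la mémoire pour cette requête ?
const MEMORY_KEYWORDS = /souviens|précédemment|la dernière fois|remember/i;

function needsRecall(query) {
  const q = query.trim();
  if (MEMORY_KEYWORDS.test(q)) return true; // mots-clés mémoire : recherche forcée
  if (q.startsWith("/")) return false;      // commandes slash : ignorées
  const isCJK = /[\u4e00-\u9fff]/.test(q);
  const minLen = isCJK ? 6 : 15;            // seuil plus bas pour le texte CJK
  return q.length >= minLen;
}

console.log(needsRecall("ok"));                                    // false : confirmation simple
console.log(needsRecall("la dernière fois, tu avais dit quoi ?")); // true
```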
+
+---
+
+
+Comparaison avec memory-lancedb intégré (cliquez pour développer)
+
+| Fonctionnalité | `memory-lancedb` intégré | **memory-lancedb-pro** |
+| --- | :---: | :---: |
+| Recherche vectorielle | Oui | Oui |
+| Recherche plein texte BM25 | - | Oui |
+| Fusion hybride (Vectoriel + BM25) | - | Oui |
+| Rerank cross-encoder (multi-fournisseur) | - | Oui |
+| Boost de fraîcheur et décroissance temporelle | - | Oui |
+| Normalisation de longueur | - | Oui |
+| Diversité MMR | - | Oui |
+| Isolation multi-scope | - | Oui |
+| Filtrage du bruit | - | Oui |
+| Recherche adaptative | - | Oui |
+| CLI de gestion | - | Oui |
+| Mémoire de session | - | Oui |
+| Embeddings sensibles aux tâches | - | Oui |
+| **Extraction intelligente LLM (6 catégories)** | - | Oui (v1.1.0) |
+| **Décroissance Weibull + Promotion par niveaux** | - | Oui (v1.1.0) |
+| Tout embedding compatible OpenAI | Limité | Oui |
+
+
+
+---
+
+## Configuration
+
+
+Exemple de configuration complète
+
+```json
+{
+ "embedding": {
+ "apiKey": "${JINA_API_KEY}",
+ "model": "jina-embeddings-v5-text-small",
+ "baseURL": "https://api.jina.ai/v1",
+ "dimensions": 1024,
+ "taskQuery": "retrieval.query",
+ "taskPassage": "retrieval.passage",
+ "normalized": true
+ },
+ "dbPath": "~/.openclaw/memory/lancedb-pro",
+ "autoCapture": true,
+ "autoRecall": true,
+ "retrieval": {
+ "mode": "hybrid",
+ "vectorWeight": 0.7,
+ "bm25Weight": 0.3,
+ "minScore": 0.3,
+ "rerank": "cross-encoder",
+ "rerankApiKey": "${JINA_API_KEY}",
+ "rerankModel": "jina-reranker-v3",
+ "rerankEndpoint": "https://api.jina.ai/v1/rerank",
+ "rerankProvider": "jina",
+ "candidatePoolSize": 20,
+ "recencyHalfLifeDays": 14,
+ "recencyWeight": 0.1,
+ "filterNoise": true,
+ "lengthNormAnchor": 500,
+ "hardMinScore": 0.35,
+ "timeDecayHalfLifeDays": 60,
+ "reinforcementFactor": 0.5,
+ "maxHalfLifeMultiplier": 3
+ },
+ "enableManagementTools": false,
+ "scopes": {
+ "default": "global",
+ "definitions": {
+ "global": { "description": "Shared knowledge" },
+ "agent:discord-bot": { "description": "Discord bot private" }
+ },
+ "agentAccess": {
+ "discord-bot": ["global", "agent:discord-bot"]
+ }
+ },
+ "sessionMemory": {
+ "enabled": false,
+ "messageCount": 15
+ },
+ "smartExtraction": true,
+ "llm": {
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "gpt-4o-mini",
+ "baseURL": "https://api.openai.com/v1"
+ },
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000
+}
+```
+
+
+
+
+Fournisseurs d'embedding
+
+Fonctionne avec **toute API d'embedding compatible OpenAI** :
+
+| Fournisseur | Modèle | Base URL | Dimensions |
+| --- | --- | --- | --- |
+| **Jina** (recommandé) | `jina-embeddings-v5-text-small` | `https://api.jina.ai/v1` | 1024 |
+| **OpenAI** | `text-embedding-3-small` | `https://api.openai.com/v1` | 1536 |
+| **Voyage** | `voyage-4-lite` / `voyage-4` | `https://api.voyageai.com/v1` | 1024 / 1024 |
+| **Google Gemini** | `gemini-embedding-001` | `https://generativelanguage.googleapis.com/v1beta/openai/` | 3072 |
+| **Ollama** (local) | `nomic-embed-text` | `http://localhost:11434/v1` | selon le modèle |
+
+
+
+
+Fournisseurs de reranking
+
+Le reranking cross-encoder supporte plusieurs fournisseurs via `rerankProvider` :
+
+| Fournisseur | `rerankProvider` | Modèle exemple |
+| --- | --- | --- |
+| **Jina** (par défaut) | `jina` | `jina-reranker-v3` |
+| **SiliconFlow** (niveau gratuit disponible) | `siliconflow` | `BAAI/bge-reranker-v2-m3` |
+| **Voyage AI** | `voyage` | `rerank-2.5` |
+| **Pinecone** | `pinecone` | `bge-reranker-v2-m3` |
+
+Tout endpoint de reranking compatible Jina fonctionne également — définissez `rerankProvider: "jina"` et pointez `rerankEndpoint` vers votre service (ex. Hugging Face TEI, DashScope `qwen3-rerank`).
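+
+Concrètement, « compatible Jina » signifie que l'endpoint accepte un corps de requête de cette forme (valeurs d'exemple) et renvoie une liste `results` avec `index` et `relevance_score` :
+
```json
{
  "model": "jina-reranker-v3",
  "query": "pourquoi avons-nous choisi PostgreSQL ?",
  "documents": [
    "Nous avons choisi PostgreSQL pour le support JSONB.",
    "Le déjeuner était une pizza."
  ],
  "top_n": 1
}
```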
+
+
+
+
+Extraction intelligente (LLM) — v1.1.0
+
+Quand `smartExtraction` est activé (par défaut : `true`), le plugin utilise un LLM pour extraire et classifier intelligemment les souvenirs au lieu de déclencheurs basés sur des regex.
+
+| Champ | Type | Défaut | Description |
+|-------|------|---------|-------------|
+| `smartExtraction` | boolean | `true` | Activer/désactiver l'extraction LLM en 6 catégories |
+| `llm.auth` | string | `api-key` | `api-key` utilise `llm.apiKey` / `embedding.apiKey` ; `oauth` utilise un fichier token OAuth au niveau plugin |
+| `llm.apiKey` | string | *(repli sur `embedding.apiKey`)* | Clé API pour le fournisseur LLM |
+| `llm.model` | string | `openai/gpt-oss-120b` | Nom du modèle LLM |
+| `llm.baseURL` | string | *(repli sur `embedding.baseURL`)* | Point de terminaison API LLM |
+| `llm.oauthProvider` | string | `openai-codex` | ID du fournisseur OAuth utilisé quand `llm.auth` est `oauth` |
+| `llm.oauthPath` | string | `~/.openclaw/.memory-lancedb-pro/oauth.json` | Fichier token OAuth utilisé quand `llm.auth` est `oauth` |
+| `llm.timeoutMs` | number | `30000` | Timeout des requêtes LLM en millisecondes |
+| `extractMinMessages` | number | `2` | Nombre minimum de messages avant le déclenchement de l'extraction |
+| `extractMaxChars` | number | `8000` | Nombre maximum de caractères envoyés au LLM |
+
+
+OAuth `llm` config (utiliser le cache de connexion Codex / ChatGPT existant pour les appels LLM) :
+```json
+{
+ "llm": {
+ "auth": "oauth",
+ "oauthProvider": "openai-codex",
+ "model": "gpt-5.4",
+ "oauthPath": "${HOME}/.openclaw/.memory-lancedb-pro/oauth.json",
+ "timeoutMs": 30000
+ }
+}
+```
+
+Notes pour `llm.auth: "oauth"` :
+
+- `llm.oauthProvider` est actuellement `openai-codex`.
+- Les tokens OAuth sont stockés par défaut dans `~/.openclaw/.memory-lancedb-pro/oauth.json`.
+- Vous pouvez définir `llm.oauthPath` si vous souhaitez stocker ce fichier ailleurs.
+- `auth login` sauvegarde la configuration `llm` api-key précédente à côté du fichier OAuth, et `auth logout` restaure cette sauvegarde lorsqu'elle est disponible.
+- Passer de `api-key` à `oauth` ne transfère pas automatiquement `llm.baseURL`. Définissez-le manuellement en mode OAuth uniquement si vous souhaitez intentionnellement un backend personnalisé compatible ChatGPT/Codex.
+
+
+
+
+Configuration du cycle de vie (Décroissance + Niveaux)
+
+| Champ | Défaut | Description |
+|-------|---------|-------------|
+| `decay.recencyHalfLifeDays` | `30` | Demi-vie de base pour la décroissance Weibull |
+| `decay.frequencyWeight` | `0.3` | Poids de la fréquence d'accès dans le score composite |
+| `decay.intrinsicWeight` | `0.3` | Poids de `importance × confiance` |
+| `decay.betaCore` | `0.8` | Beta Weibull pour les souvenirs `noyau` |
+| `decay.betaWorking` | `1.0` | Beta Weibull pour les souvenirs `travail` |
+| `decay.betaPeripheral` | `1.3` | Beta Weibull pour les souvenirs `périphériques` |
+| `tier.coreAccessThreshold` | `10` | Nombre minimum de rappels avant promotion en `noyau` |
+| `tier.peripheralAgeDays` | `60` | Seuil d'âge pour la rétrogradation des souvenirs obsolètes |
+
+
+
+
+Renforcement par accès
+
+Les souvenirs fréquemment rappelés décroissent plus lentement (style répétition espacée).
+
+Clés de config (sous `retrieval`) :
+- `reinforcementFactor` (0-2, défaut : `0.5`) — mettre à `0` pour désactiver
+- `maxHalfLifeMultiplier` (1-10, défaut : `3`) — plafond de la demi-vie effective
+
+
+
+---
+
+## Commandes CLI
+
+```bash
+openclaw memory-pro list [--scope global] [--category fact] [--limit 20] [--json]
+openclaw memory-pro search "requête" [--scope global] [--limit 10] [--json]
+openclaw memory-pro stats [--scope global] [--json]
+openclaw memory-pro auth login [--provider openai-codex] [--model gpt-5.4] [--oauth-path /abs/path/oauth.json]
+openclaw memory-pro auth status
+openclaw memory-pro auth logout
+openclaw memory-pro delete
+openclaw memory-pro delete-bulk --scope global [--before 2025-01-01] [--dry-run]
+openclaw memory-pro export [--scope global] [--output memories.json]
+openclaw memory-pro import memories.json [--scope global] [--dry-run]
+openclaw memory-pro reembed --source-db /path/to/old-db [--batch-size 32] [--skip-existing]
+openclaw memory-pro upgrade [--dry-run] [--batch-size 10] [--no-llm] [--limit N] [--scope SCOPE]
+openclaw memory-pro migrate check|run|verify [--source /path]
+```
+
+Flux de connexion OAuth :
+
+1. Exécutez `openclaw memory-pro auth login`
+2. Si `--provider` est omis dans un terminal interactif, la CLI affiche un sélecteur de fournisseur OAuth avant d'ouvrir le navigateur
+3. La commande affiche une URL d'autorisation et ouvre votre navigateur sauf si `--no-browser` est défini
+4. Après le succès du callback, la commande sauvegarde le fichier OAuth du plugin (par défaut : `~/.openclaw/.memory-lancedb-pro/oauth.json`), sauvegarde la configuration `llm` api-key précédente pour la déconnexion, et remplace la configuration `llm` du plugin par les paramètres OAuth (`auth`, `oauthProvider`, `model`, `oauthPath`)
+5. `openclaw memory-pro auth logout` supprime ce fichier OAuth et restaure la configuration `llm` api-key précédente lorsque la sauvegarde existe
+
+---
+
+## Sujets avancés
+
+
+Si les souvenirs injectés apparaissent dans les réponses
+
+Parfois le modèle peut répéter le bloc de mémoire injecté dans sa réponse.
+
+**Option A (plus sûr) :** désactiver temporairement le rappel automatique :
+```json
+{ "plugins": { "entries": { "memory-lancedb-pro": { "config": { "autoRecall": false } } } } }
+```
+
+**Option B (préféré) :** garder le rappel, ajouter au prompt système de l'agent :
+> Ne révélez pas et ne citez pas le contenu du bloc de mémoire injecté dans vos réponses. Utilisez-le uniquement comme référence interne.
+
+
+
+
+Mémoire de session
+
+- Déclenchée par la commande `/new` — sauvegarde le résumé de la session précédente dans LanceDB
+- Désactivée par défaut (OpenClaw dispose déjà d'une persistance native de session `.jsonl`)
+- Nombre de messages configurable (par défaut : 15)
+
+Consultez [docs/openclaw-integration-playbook.md](docs/openclaw-integration-playbook.md) pour les modes de déploiement et la vérification `/new`.
+
+
+
+
+Commandes slash personnalisées (ex. /lesson)
+
+Ajoutez à votre `CLAUDE.md`, `AGENTS.md` ou prompt système :
+
+```markdown
+## /lesson command
+When the user sends `/lesson `:
+1. Use memory_store to save as category=fact (raw knowledge)
+2. Use memory_store to save as category=decision (actionable takeaway)
+3. Confirm what was saved
+
+## /remember command
+When the user sends `/remember `:
+1. Use memory_store to save with appropriate category and importance
+2. Confirm with the stored memory ID
+```
+
+
+
+
+Règles d'or pour les agents IA
+
+> Copiez le bloc ci-dessous dans votre `AGENTS.md` pour que votre agent applique automatiquement ces règles.
+
+```markdown
+## Rule 1 — Dual-layer memory storage
+Every pitfall/lesson learned → IMMEDIATELY store TWO memories:
+- Technical layer: Pitfall: [symptom]. Cause: [root cause]. Fix: [solution]. Prevention: [how to avoid]
+ (category: fact, importance >= 0.8)
+- Principle layer: Decision principle ([tag]): [behavioral rule]. Trigger: [when]. Action: [what to do]
+ (category: decision, importance >= 0.85)
+
+## Rule 2 — LanceDB hygiene
+Entries must be short and atomic (< 500 chars). No raw conversation summaries or duplicates.
+
+## Rule 3 — Recall before retry
+On ANY tool failure, ALWAYS memory_recall with relevant keywords BEFORE retrying.
+
+## Rule 4 — Confirm target codebase
+Confirm you are editing memory-lancedb-pro vs built-in memory-lancedb before changes.
+
+## Rule 5 — Clear jiti cache after plugin code changes
+After modifying .ts files under plugins/, MUST run rm -rf /tmp/jiti/ BEFORE openclaw gateway restart.
+```
+
+
+
+
+Schéma de la base de données
+
+Table LanceDB `memories` :
+
+| Champ | Type | Description |
+| --- | --- | --- |
+| `id` | string (UUID) | Clé primaire |
+| `text` | string | Texte du souvenir (indexé FTS) |
+| `vector` | float[] | Vecteur d'embedding |
+| `category` | string | Catégorie de stockage : `preference` / `fact` / `decision` / `entity` / `reflection` / `other` |
+| `scope` | string | Identifiant de scope (ex. `global`, `agent:main`) |
+| `importance` | float | Score d'importance 0-1 |
+| `timestamp` | int64 | Horodatage de création (ms) |
+| `metadata` | string (JSON) | Métadonnées étendues |
+
+Clés `metadata` courantes en v1.1.0 : `l0_abstract`, `l1_overview`, `l2_content`, `memory_category`, `tier`, `access_count`, `confidence`, `last_accessed_at`
+
+> **Note sur les catégories :** Le champ `category` de niveau supérieur utilise 6 catégories de stockage. Les 6 labels sémantiques de l'Extraction Intelligente (`profile` / `preferences` / `entities` / `events` / `cases` / `patterns`) sont stockés dans `metadata.memory_category`.
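+
+À titre d'illustration (valeurs inventées), une entrée complète pourrait donc ressembler à :
+
```json
{
  "id": "7f9c2e54-...",
  "text": "L'utilisateur préfère les tabulations aux espaces",
  "category": "preference",
  "scope": "global",
  "importance": 0.8,
  "metadata": {
    "memory_category": "preferences",
    "tier": "working",
    "l0_abstract": "Préfère les tabulations",
    "access_count": 3,
    "confidence": 0.9
  }
}
```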
+
+
+
+
+## Dépannage
+
+### "Cannot mix BigInt and other types" (LanceDB / Apache Arrow)
+
+Avec LanceDB 0.26+, certaines colonnes numériques peuvent être retournées en `BigInt`. Mettez à niveau vers **memory-lancedb-pro >= 1.0.14** — ce plugin convertit maintenant les valeurs avec `Number(...)` avant les opérations arithmétiques.
+
+
+
+---
+
+## Documentation
+
+| Document | Description |
+| --- | --- |
+| [Playbook d'intégration OpenClaw](docs/openclaw-integration-playbook.md) | Modes de déploiement, vérification, matrice de régression |
+| [Analyse de l'architecture mémoire](docs/memory_architecture_analysis.md) | Analyse approfondie de l'architecture complète |
+| [CHANGELOG v1.1.0](docs/CHANGELOG-v1.1.0.md) | Changements de comportement v1.1.0 et justification de la mise à niveau |
+| [Chunking long contexte](docs/long-context-chunking.md) | Stratégie de chunking pour les longs documents |
+
+---
+
+## Beta : Smart Memory v1.1.0
+
+> Statut : Beta — disponible via `npm i memory-lancedb-pro@beta`. Les utilisateurs stables sur `latest` ne sont pas affectés.
+
+| Fonctionnalité | Description |
+|---------|-------------|
+| **Extraction intelligente** | Extraction LLM en 6 catégories avec métadonnées L0/L1/L2. Repli sur regex si désactivé. |
+| **Scoring du cycle de vie** | Décroissance Weibull intégrée à la recherche — les souvenirs fréquents et importants sont mieux classés. |
+| **Gestion des niveaux** | Système à trois niveaux (Noyau → Travail → Périphérique) avec promotion/rétrogradation automatique. |
+
+Retours : [GitHub Issues](https://github.com/CortexReach/memory-lancedb-pro/issues) · Retour en arrière : `npm i memory-lancedb-pro@latest`
+
+---
+
+## Dépendances
+
+| Package | Rôle |
+| --- | --- |
+| `@lancedb/lancedb` ≥0.26.2 | Base de données vectorielle (ANN + FTS) |
+| `openai` ≥6.21.0 | Client API d'embedding compatible OpenAI |
+| `@sinclair/typebox` 0.34.48 | Définitions de types JSON Schema |
+
+---
+
+## Contributors
+
+
+
+
+
+
+
+
+
+
+
+
+
+Full list: [Contributors](https://github.com/CortexReach/memory-lancedb-pro/graphs/contributors)
+
+## Star History
+
+
+
+
+
+
+
+
+
+## Licence
+
+MIT
+
+---
+
+## Mon QR Code WeChat
+
+
diff --git a/README_IT.md b/README_IT.md
new file mode 100644
index 00000000..b1679682
--- /dev/null
+++ b/README_IT.md
@@ -0,0 +1,773 @@
+
+
+# 🧠 memory-lancedb-pro · 🦞OpenClaw Plugin
+
+**Assistente Memoria IA per Agenti [OpenClaw](https://github.com/openclaw/openclaw)**
+
+*Dai al tuo agente IA un cervello che ricorda davvero — tra sessioni, tra agenti, nel tempo.*
+
+Un plugin di memoria a lungo termine per OpenClaw basato su LanceDB che memorizza preferenze, decisioni e contesto di progetto, e li richiama automaticamente nelle sessioni future.
+
+[](https://github.com/openclaw/openclaw)
+[](https://www.npmjs.com/package/memory-lancedb-pro)
+[](https://lancedb.com)
+[](LICENSE)
+
+[English](README.md) | [简体中文](README_CN.md) | [繁體中文](README_TW.md) | [日本語](README_JA.md) | [한국어](README_KO.md) | [Français](README_FR.md) | [Español](README_ES.md) | [Deutsch](README_DE.md) | [Italiano](README_IT.md) | [Русский](README_RU.md) | [Português (Brasil)](README_PT-BR.md)
+
+
+
+---
+
+## Perché memory-lancedb-pro?
+
+La maggior parte degli agenti IA soffre di amnesia. Dimenticano tutto nel momento in cui si avvia una nuova chat.
+
+**memory-lancedb-pro** è un plugin di memoria a lungo termine di livello produttivo per OpenClaw che trasforma il tuo agente in un vero **Assistente Memoria IA** — cattura automaticamente ciò che conta, lascia che il rumore si dissolva naturalmente e recupera il ricordo giusto al momento giusto. Nessun tag manuale, nessuna configurazione complicata.
+
+### Il tuo Assistente Memoria IA in azione
+
+**Senza memoria — ogni sessione parte da zero:**
+
+> **Tu:** "Usa i tab per l'indentazione, aggiungi sempre la gestione degli errori."
+> *(sessione successiva)*
+> **Tu:** "Te l'ho già detto — tab, non spazi!" 😤
+> *(sessione successiva)*
+> **Tu:** "…sul serio, tab. E gestione degli errori. Di nuovo."
+
+**Con memory-lancedb-pro — il tuo agente impara e ricorda:**
+
+> **Tu:** "Usa i tab per l'indentazione, aggiungi sempre la gestione degli errori."
+> *(sessione successiva — l'agente richiama automaticamente le tue preferenze)*
+> **Agente:** *(applica silenziosamente tab + gestione errori)* ✅
+> **Tu:** "Perché il mese scorso abbiamo scelto PostgreSQL invece di MongoDB?"
+> **Agente:** "In base alla nostra discussione del 12 febbraio, i motivi principali erano…" ✅
+
+Questa è la differenza che fa un **Assistente Memoria IA** — impara il tuo stile, ricorda le decisioni passate e fornisce risposte personalizzate senza che tu debba ripeterti.
+
+### Cos'altro può fare?
+
+| | Cosa ottieni |
+|---|---|
+| **Auto-Capture** | Il tuo agente impara da ogni conversazione — nessun `memory_store` manuale necessario |
+| **Estrazione intelligente** | Classificazione LLM in 6 categorie: profili, preferenze, entità, eventi, casi, pattern |
+| **Oblio intelligente** | Modello di decadimento Weibull — i ricordi importanti restano, il rumore svanisce |
+| **Ricerca ibrida** | Ricerca vettoriale + BM25 full-text, fusa con reranking cross-encoder |
+| **Iniezione di contesto** | I ricordi rilevanti emergono automaticamente prima di ogni risposta |
+| **Isolamento multi-scope** | Confini di memoria per agente, per utente, per progetto |
+| **Qualsiasi provider** | OpenAI, Jina, Gemini, Ollama o qualsiasi API compatibile OpenAI |
+| **Toolkit completo** | CLI, backup, migrazione, upgrade, esportazione/importazione — pronto per la produzione |
+
+---
+
+## Avvio rapido
+
+### Opzione A: Script di installazione con un clic (consigliato)
+
+Lo **[script di installazione](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)** mantenuto dalla community gestisce installazione, aggiornamento e riparazione in un solo comando:
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/CortexReach/toolbox/main/memory-lancedb-pro-setup/setup-memory.sh -o setup-memory.sh
+bash setup-memory.sh
+```
+
+> Vedi [Ecosistema](#ecosistema) qui sotto per l'elenco completo degli scenari coperti e altri strumenti della community.
+
+### Opzione B: Installazione manuale
+
+**Tramite OpenClaw CLI (consigliato):**
+```bash
+openclaw plugins install memory-lancedb-pro@beta
+```
+
+**Oppure tramite npm:**
+```bash
+npm i memory-lancedb-pro@beta
+```
+> Se usi npm, dovrai anche aggiungere la directory di installazione del plugin come percorso **assoluto** in `plugins.load.paths` nel tuo `openclaw.json`. Questo è il problema di configurazione più comune.
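+
+Ad esempio (il percorso è ipotetico: usa la directory reale in cui npm ha installato il plugin):
+
+```json
+{
+  "plugins": {
+    "load": {
+      "paths": ["/home/utente/.openclaw/plugins/memory-lancedb-pro"]
+    }
+  }
+}
+```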
+
+Aggiungi al tuo `openclaw.json`:
+
+```json
+{
+ "plugins": {
+ "slots": { "memory": "memory-lancedb-pro" },
+ "entries": {
+ "memory-lancedb-pro": {
+ "enabled": true,
+ "config": {
+ "embedding": {
+ "provider": "openai-compatible",
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "text-embedding-3-small"
+ },
+ "autoCapture": true,
+ "autoRecall": true,
+ "smartExtraction": true,
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000,
+ "sessionMemory": { "enabled": false }
+ }
+ }
+ }
+ }
+}
+```
+
+**Perché questi valori predefiniti?**
+- `autoCapture` + `smartExtraction` → il tuo agente impara automaticamente da ogni conversazione
+- `autoRecall` → i ricordi rilevanti vengono iniettati prima di ogni risposta
+- `extractMinMessages: 2` → l'estrazione si attiva nelle normali chat a due turni
+- `sessionMemory.enabled: false` → evita di inquinare la ricerca con riassunti di sessione all'inizio
+
+Valida e riavvia:
+
+```bash
+openclaw config validate
+openclaw gateway restart
+openclaw logs --follow --plain | grep "memory-lancedb-pro"
+```
+
+Dovresti vedere:
+- `memory-lancedb-pro: smart extraction enabled`
+- `memory-lancedb-pro@...: plugin registered`
+
+Fatto! Il tuo agente ora ha una memoria a lungo termine.
+
+
+Ulteriori percorsi di installazione (utenti esistenti, aggiornamenti)
+
+**Usi già OpenClaw?**
+
+1. Aggiungi il plugin con un percorso **assoluto** in `plugins.load.paths`
+2. Associa lo slot di memoria: `plugins.slots.memory = "memory-lancedb-pro"`
+3. Verifica: `openclaw plugins info memory-lancedb-pro && openclaw memory-pro stats`
+
+**Aggiornamento da versioni precedenti alla v1.1.0?**
+
+```bash
+# 1) Backup
+openclaw memory-pro export --scope global --output memories-backup.json
+# 2) Dry run
+openclaw memory-pro upgrade --dry-run
+# 3) Run upgrade
+openclaw memory-pro upgrade
+# 4) Verify
+openclaw memory-pro stats
+```
+
+Vedi `CHANGELOG-v1.1.0.md` per le modifiche comportamentali e le motivazioni dell'aggiornamento.
+
+
+
+
+Importazione rapida Telegram Bot (clicca per espandere)
+
+Se stai usando l'integrazione Telegram di OpenClaw, il modo più semplice è inviare un comando di importazione direttamente al Bot principale invece di modificare manualmente la configurazione.
+
+Invia questo messaggio:
+
+```text
+Help me connect this memory plugin with the most user-friendly configuration: https://github.com/CortexReach/memory-lancedb-pro
+
+Requirements:
+1. Set it as the only active memory plugin
+2. Use Jina for embedding
+3. Use Jina for reranker
+4. Use gpt-4o-mini for the smart-extraction LLM
+5. Enable autoCapture, autoRecall, smartExtraction
+6. extractMinMessages=2
+7. sessionMemory.enabled=false
+8. captureAssistant=false
+9. retrieval mode=hybrid, vectorWeight=0.7, bm25Weight=0.3
+10. rerank=cross-encoder, candidatePoolSize=12, minScore=0.6, hardMinScore=0.62
+11. Generate the final openclaw.json config directly, not just an explanation
+```
+
+
+
+---
+
+## Ecosistema
+
+memory-lancedb-pro è il plugin principale. La community ha costruito strumenti per rendere l'installazione e l'uso quotidiano ancora più fluidi:
+
+### Script di installazione — Installazione, aggiornamento e riparazione con un clic
+
+> **[CortexReach/toolbox/memory-lancedb-pro-setup](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)**
+
+Non è un semplice installer — lo script gestisce in modo intelligente numerosi scenari reali:
+
+| La tua situazione | Cosa fa lo script |
+|---|---|
+| Mai installato | Download → installazione dipendenze → scelta configurazione → scrittura in openclaw.json → riavvio |
+| Installato tramite `git clone`, bloccato su un vecchio commit | `git fetch` + `checkout` automatico all'ultima versione → reinstallazione dipendenze → verifica |
+| La configurazione ha campi non validi | Rilevamento automatico tramite filtro schema, rimozione campi non supportati |
+| Installato tramite `npm` | Salta l'aggiornamento git, ricorda di eseguire `npm update` autonomamente |
+| CLI `openclaw` non funzionante per configurazione non valida | Fallback: lettura diretta del percorso workspace dal file `openclaw.json` |
+| `extensions/` invece di `plugins/` | Rilevamento automatico della posizione del plugin da configurazione o filesystem |
+| Già aggiornato | Solo controlli di integrità, nessuna modifica |
+
+```bash
+bash setup-memory.sh # Installa o aggiorna
+bash setup-memory.sh --dry-run # Solo anteprima
+bash setup-memory.sh --beta # Includi versioni pre-release
+bash setup-memory.sh --uninstall # Ripristina configurazione e rimuovi plugin
+```
+
+Preset di provider integrati: **Jina / DashScope / SiliconFlow / OpenAI / Ollama**, oppure usa la tua API compatibile OpenAI. Per l'utilizzo completo (inclusi `--ref`, `--selfcheck-only` e altro), consulta il [README dello script di installazione](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup).
+
+### Claude Code / OpenClaw Skill — Configurazione guidata dall'IA
+
+> **[CortexReach/memory-lancedb-pro-skill](https://github.com/CortexReach/memory-lancedb-pro-skill)**
+
+Installa questa Skill e il tuo agente IA (Claude Code o OpenClaw) acquisisce una conoscenza approfondita di tutte le funzionalità di memory-lancedb-pro. Basta dire **"aiutami ad attivare la configurazione migliore"** per ottenere:
+
+- **Workflow di configurazione guidato in 7 passaggi** con 4 piani di distribuzione:
+ - Full Power (Jina + OpenAI) / Budget (reranker SiliconFlow gratuito) / Simple (solo OpenAI) / Completamente locale (Ollama, zero costi API)
+- **Tutti i 9 strumenti MCP** usati correttamente: `memory_recall`, `memory_store`, `memory_forget`, `memory_update`, `memory_stats`, `memory_list`, `self_improvement_log`, `self_improvement_extract_skill`, `self_improvement_review` *(il set completo richiede `enableManagementTools: true` — la configurazione Quick Start predefinita espone i 4 strumenti principali)*
+- **Prevenzione delle insidie comuni**: attivazione plugin workspace, `autoRecall` predefinito a false, cache jiti, variabili d'ambiente, isolamento scope, ecc.
+
+**Installazione per Claude Code:**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.claude/skills/memory-lancedb-pro
+```
+
+**Installazione per OpenClaw:**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.openclaw/workspace/skills/memory-lancedb-pro-skill
+```
+
+---
+
+## Tutorial video
+
+> Guida completa: installazione, configurazione e funzionamento interno della ricerca ibrida.
+
+[](https://youtu.be/MtukF1C8epQ)
+**https://youtu.be/MtukF1C8epQ**
+
+[](https://www.bilibili.com/video/BV1zUf2BGEgn/)
+**https://www.bilibili.com/video/BV1zUf2BGEgn/**
+
+---
+
+## Architettura
+
+```
+┌─────────────────────────────────────────────────────────┐
+│ index.ts (Entry Point)                                  │
+│ Plugin Registration · Config Parsing · Lifecycle Hooks  │
+└────────┬──────────┬──────────┬──────────┬───────────────┘
+         │          │          │          │
+    ┌────▼───┐ ┌────▼───┐  ┌───▼─────┐ ┌──▼──────────┐
+    │ store  │ │embedder│  │retriever│ │   scopes    │
+    │  .ts   │ │  .ts   │  │   .ts   │ │    .ts      │
+    └────────┘ └────────┘  └─────────┘ └─────────────┘
+         │                     │
+    ┌────▼───┐           ┌─────▼──────────┐
+    │migrate │           │noise-filter.ts │
+    │  .ts   │           │adaptive-       │
+    └────────┘           │retrieval.ts    │
+                         └────────────────┘
+    ┌─────────────┐  ┌──────────┐
+    │  tools.ts   │  │  cli.ts  │
+    │ (Agent API) │  │  (CLI)   │
+    └─────────────┘  └──────────┘
+```
+
+> Per un approfondimento sull'architettura completa, consulta [docs/memory_architecture_analysis.md](docs/memory_architecture_analysis.md).
+
+
+Riferimento file (clicca per espandere)
+
+| File | Scopo |
+| --- | --- |
+| `index.ts` | Punto di ingresso del plugin. Si registra con l'API Plugin di OpenClaw, analizza la configurazione, monta gli hook del ciclo di vita |
+| `openclaw.plugin.json` | Metadati del plugin + dichiarazione completa della configurazione JSON Schema |
+| `cli.ts` | Comandi CLI: `memory-pro list/search/stats/delete/delete-bulk/export/import/reembed/upgrade/migrate` |
+| `src/store.ts` | Layer di storage LanceDB. Creazione tabelle / indicizzazione FTS / ricerca vettoriale / ricerca BM25 / CRUD |
+| `src/embedder.ts` | Astrazione embedding. Compatibile con qualsiasi provider API compatibile OpenAI |
+| `src/retriever.ts` | Motore di ricerca ibrido. Vettoriale + BM25 → Fusione ibrida → Rerank → Decadimento ciclo di vita → Filtro |
+| `src/scopes.ts` | Controllo accessi multi-scope |
+| `src/tools.ts` | Definizioni degli strumenti agente: `memory_recall`, `memory_store`, `memory_forget`, `memory_update` + strumenti di gestione |
+| `src/noise-filter.ts` | Filtra rifiuti dell'agente, meta-domande, saluti e contenuti di bassa qualità |
+| `src/adaptive-retrieval.ts` | Determina se una query necessita di ricerca nella memoria |
+| `src/migrate.ts` | Migrazione dal `memory-lancedb` integrato a Pro |
+| `src/smart-extractor.ts` | Estrazione LLM in 6 categorie con archiviazione a strati L0/L1/L2 e deduplicazione in due fasi |
+| `src/decay-engine.ts` | Modello di decadimento esponenziale esteso Weibull |
+| `src/tier-manager.ts` | Promozione/retrocessione a tre livelli: Peripheral ↔ Working ↔ Core |
+
+
+
+---
+
+## Funzionalità principali
+
+### Ricerca ibrida
+
+```
+Query → embedQuery() ─┐
+                      ├─→ Hybrid Fusion → Rerank → Lifecycle Decay Boost → Length Norm → Filter
+Query → BM25 FTS ─────┘
+```
+
+- **Ricerca vettoriale** — similarità semantica tramite LanceDB ANN (distanza del coseno)
+- **Ricerca full-text BM25** — corrispondenza esatta delle parole chiave tramite indice FTS di LanceDB
+- **Fusione ibrida** — punteggio vettoriale come base, i risultati BM25 ricevono un boost ponderato (non RRF standard — ottimizzato per la qualità di richiamo nel mondo reale)
+- **Pesi configurabili** — `vectorWeight`, `bm25Weight`, `minScore`
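+
+La fusione descritta sopra si può abbozzare così (formula ipotetica a scopo illustrativo, non il codice reale di `retriever.ts`): il punteggio vettoriale fa da base e i match BM25 ricevono un boost ponderato.
+
+```typescript
+// Schizzo ipotetico della fusione ibrida: base vettoriale + boost BM25 ponderato.
+interface Candidate { id: string; vectorScore: number; bm25Score?: number; }
+
+function fuse(c: Candidate, vectorWeight = 0.7, bm25Weight = 0.3): number {
+  const base = vectorWeight * c.vectorScore;
+  const boost = c.bm25Score !== undefined ? bm25Weight * c.bm25Score : 0;
+  return base + boost;
+}
+
+// Un risultato trovato da entrambe le ricerche supera un match solo vettoriale:
+fuse({ id: "a", vectorScore: 0.8, bm25Score: 0.9 }); // ≈ 0.83
+fuse({ id: "b", vectorScore: 0.9 });                 // ≈ 0.63
+```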
+
+### Reranking Cross-Encoder
+
+- Adattatori integrati per **Jina**, **SiliconFlow**, **Voyage AI** e **Pinecone**
+- Compatibile con qualsiasi endpoint compatibile Jina (ad es. Hugging Face TEI, DashScope)
+- Punteggio ibrido: 60% cross-encoder + 40% punteggio fuso originale
+- Degradazione elegante: fallback sulla similarità del coseno in caso di errore API
+
+### Pipeline di punteggio multi-fase
+
+| Fase | Effetto |
+| --- | --- |
+| **Fusione ibrida** | Combina richiamo semantico e corrispondenza esatta |
+| **Rerank Cross-Encoder** | Promuove risultati semanticamente precisi |
+| **Boost decadimento ciclo di vita** | Freschezza Weibull + frequenza di accesso + importance × confidence |
+| **Normalizzazione lunghezza** | Impedisce alle voci lunghe di dominare (àncora: 500 caratteri) |
+| **Punteggio minimo rigido** | Rimuove risultati irrilevanti (predefinito: 0.35) |
+| **Diversità MMR** | Similarità coseno > 0.85 → retrocesso |
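+
+La fase di diversità si può abbozzare così (schizzo ipotetico): un candidato con similarità coseno sopra la soglia rispetto a un risultato già selezionato viene scartato.
+
+```typescript
+// Schizzo ipotetico del filtro di diversità in stile MMR.
+function cosine(a: number[], b: number[]): number {
+  let dot = 0, na = 0, nb = 0;
+  for (let i = 0; i < a.length; i++) {
+    dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
+  }
+  return dot / (Math.sqrt(na) * Math.sqrt(nb));
+}
+
+// Scorre i candidati già ordinati per punteggio e tiene solo quelli
+// non ridondanti rispetto ai risultati già selezionati.
+function diversify(vectors: number[][], threshold = 0.85): number[] {
+  const kept: number[] = [];
+  for (let i = 0; i < vectors.length; i++) {
+    if (kept.every((j) => cosine(vectors[i], vectors[j]) <= threshold)) kept.push(i);
+  }
+  return kept;
+}
+
+diversify([[1, 0], [1, 0], [0, 1]]); // [0, 2]: il duplicato viene scartato
+```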
+
+### Estrazione intelligente della memoria (v1.1.0)
+
+- **Estrazione LLM in 6 categorie**: profilo, preferenze, entità, eventi, casi, pattern
+- **Archiviazione a strati L0/L1/L2**: L0 (indice in una frase) → L1 (riepilogo strutturato) → L2 (narrazione completa)
+- **Deduplicazione in due fasi**: pre-filtro similarità vettoriale (≥0.7) → decisione semantica LLM (CREATE/MERGE/SKIP)
+- **Fusione consapevole delle categorie**: `profile` viene sempre fuso, `events`/`cases` solo in aggiunta
+
+### Gestione del ciclo di vita della memoria (v1.1.0)
+
+- **Motore di decadimento Weibull**: punteggio composito = freschezza + frequenza + valore intrinseco
+- **Promozione a tre livelli**: `Peripheral ↔ Working ↔ Core` con soglie configurabili
+- **Rinforzo per accesso**: i ricordi richiamati frequentemente decadono più lentamente (stile ripetizione spaziata)
+- **Emivita modulata dall'importanza**: i ricordi importanti decadono più lentamente
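+
+In forma schematica (formula ipotetica ricostruita dalla descrizione, non il codice di `decay-engine.ts`): la freschezza Weibull è S(t) = exp(-(t/λ)^β), con λ ricavato dall'emivita configurata in modo che S(emivita) = 0.5 per ogni β.
+
+```typescript
+// Schizzo ipotetico della freschezza Weibull: S(emivita) = 0.5 per costruzione.
+function weibullFreshness(ageDays: number, halfLifeDays: number, beta: number): number {
+  const lambda = halfLifeDays / Math.pow(Math.LN2, 1 / beta);
+  return Math.exp(-Math.pow(ageDays / lambda, beta));
+}
+
+weibullFreshness(30, 30, 1.0); // ≈ 0.5 dopo un'emivita (vale per ogni β)
+weibullFreshness(60, 30, 1.3); // coda più ripida oltre l'emivita (tier peripheral)
+weibullFreshness(60, 30, 0.8); // coda più pesante: i ricordi core resistono di più
+```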
+
+### Isolamento multi-scope
+
+- Scope integrati: `global`, `agent:`, `custom:`, `project:`, `user:`
+- Controllo accessi a livello agente tramite `scopes.agentAccess`
+- Predefinito: ogni agente accede a `global` + il proprio scope `agent:`
+
+### Auto-Capture e Auto-Recall
+
+- **Auto-Capture** (`agent_end`): estrae preferenze/fatti/decisioni/entità dalle conversazioni, deduplica, memorizza fino a 3 per turno
+- **Auto-Recall** (`before_agent_start`): inietta il contesto `` (fino a 3 voci)
+
+### Filtraggio del rumore e ricerca adattiva
+
+- Filtra contenuti di bassa qualità: rifiuti dell'agente, meta-domande, saluti
+- Salta la ricerca per: saluti, comandi slash, conferme semplici, emoji
+- Forza la ricerca per parole chiave della memoria ("ricorda", "precedentemente", "l'ultima volta")
+- Soglie CJK (cinese: 6 caratteri vs inglese: 15 caratteri)
+
+---
+
+
+Confronto con memory-lancedb integrato (clicca per espandere)
+
+| Funzionalità | `memory-lancedb` integrato | **memory-lancedb-pro** |
+| --- | :---: | :---: |
+| Ricerca vettoriale | Sì | Sì |
+| Ricerca full-text BM25 | - | Sì |
+| Fusione ibrida (Vettoriale + BM25) | - | Sì |
+| Rerank cross-encoder (multi-provider) | - | Sì |
+| Boost di freschezza e decadimento temporale | - | Sì |
+| Normalizzazione lunghezza | - | Sì |
+| Diversità MMR | - | Sì |
+| Isolamento multi-scope | - | Sì |
+| Filtraggio del rumore | - | Sì |
+| Ricerca adattiva | - | Sì |
+| CLI di gestione | - | Sì |
+| Memoria di sessione | - | Sì |
+| Embedding task-aware | - | Sì |
+| **Estrazione intelligente LLM (6 categorie)** | - | Sì (v1.1.0) |
+| **Decadimento Weibull + promozione livelli** | - | Sì (v1.1.0) |
+| Qualsiasi embedding compatibile OpenAI | Limitato | Sì |
+
+
+
+---
+
+## Configurazione
+
+
+Esempio di configurazione completa
+
+```json
+{
+ "embedding": {
+ "apiKey": "${JINA_API_KEY}",
+ "model": "jina-embeddings-v5-text-small",
+ "baseURL": "https://api.jina.ai/v1",
+ "dimensions": 1024,
+ "taskQuery": "retrieval.query",
+ "taskPassage": "retrieval.passage",
+ "normalized": true
+ },
+ "dbPath": "~/.openclaw/memory/lancedb-pro",
+ "autoCapture": true,
+ "autoRecall": true,
+ "retrieval": {
+ "mode": "hybrid",
+ "vectorWeight": 0.7,
+ "bm25Weight": 0.3,
+ "minScore": 0.3,
+ "rerank": "cross-encoder",
+ "rerankApiKey": "${JINA_API_KEY}",
+ "rerankModel": "jina-reranker-v3",
+ "rerankEndpoint": "https://api.jina.ai/v1/rerank",
+ "rerankProvider": "jina",
+ "candidatePoolSize": 20,
+ "recencyHalfLifeDays": 14,
+ "recencyWeight": 0.1,
+ "filterNoise": true,
+ "lengthNormAnchor": 500,
+ "hardMinScore": 0.35,
+ "timeDecayHalfLifeDays": 60,
+ "reinforcementFactor": 0.5,
+ "maxHalfLifeMultiplier": 3
+ },
+ "enableManagementTools": false,
+ "scopes": {
+ "default": "global",
+ "definitions": {
+ "global": { "description": "Shared knowledge" },
+ "agent:discord-bot": { "description": "Discord bot private" }
+ },
+ "agentAccess": {
+ "discord-bot": ["global", "agent:discord-bot"]
+ }
+ },
+ "sessionMemory": {
+ "enabled": false,
+ "messageCount": 15
+ },
+ "smartExtraction": true,
+ "llm": {
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "gpt-4o-mini",
+ "baseURL": "https://api.openai.com/v1"
+ },
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000
+}
+```
+
+
+
+
+Provider di embedding
+
+Funziona con **qualsiasi API di embedding compatibile OpenAI**:
+
+| Provider | Modello | Base URL | Dimensioni |
+| --- | --- | --- | --- |
+| **Jina** (consigliato) | `jina-embeddings-v5-text-small` | `https://api.jina.ai/v1` | 1024 |
+| **OpenAI** | `text-embedding-3-small` | `https://api.openai.com/v1` | 1536 |
+| **Voyage** | `voyage-4-lite` / `voyage-4` | `https://api.voyageai.com/v1` | 1024 / 1024 |
+| **Google Gemini** | `gemini-embedding-001` | `https://generativelanguage.googleapis.com/v1beta/openai/` | 3072 |
+| **Ollama** (locale) | `nomic-embed-text` | `http://localhost:11434/v1` | specifico del provider |
+
+
+
+
+Provider di rerank
+
+Il reranking cross-encoder supporta più provider tramite `rerankProvider`:
+
+| Provider | `rerankProvider` | Modello di esempio |
+| --- | --- | --- |
+| **Jina** (predefinito) | `jina` | `jina-reranker-v3` |
+| **SiliconFlow** (piano gratuito disponibile) | `siliconflow` | `BAAI/bge-reranker-v2-m3` |
+| **Voyage AI** | `voyage` | `rerank-2.5` |
+| **Pinecone** | `pinecone` | `bge-reranker-v2-m3` |
+
+Funziona anche con qualsiasi endpoint di rerank compatibile Jina — imposta `rerankProvider: "jina"` e punta `rerankEndpoint` al tuo servizio (ad es. Hugging Face TEI, DashScope `qwen3-rerank`).
+
+
+
+
+Estrazione intelligente (LLM) — v1.1.0
+
+Quando `smartExtraction` è abilitato (predefinito: `true`), il plugin utilizza un LLM per estrarre e classificare intelligentemente i ricordi invece di trigger basati su regex.
+
+| Campo | Tipo | Predefinito | Descrizione |
+|-------|------|---------|-------------|
+| `smartExtraction` | boolean | `true` | Abilita/disabilita l'estrazione LLM in 6 categorie |
+| `llm.auth` | string | `api-key` | `api-key` usa `llm.apiKey` / `embedding.apiKey`; `oauth` usa un file token OAuth con scope plugin per impostazione predefinita |
+| `llm.apiKey` | string | *(fallback su `embedding.apiKey`)* | Chiave API per il provider LLM |
+| `llm.model` | string | `openai/gpt-oss-120b` | Nome del modello LLM |
+| `llm.baseURL` | string | *(fallback su `embedding.baseURL`)* | Endpoint API LLM |
+| `llm.oauthProvider` | string | `openai-codex` | ID del provider OAuth usato quando `llm.auth` è `oauth` |
+| `llm.oauthPath` | string | `~/.openclaw/.memory-lancedb-pro/oauth.json` | File token OAuth usato quando `llm.auth` è `oauth` |
+| `llm.timeoutMs` | number | `30000` | Timeout della richiesta LLM in millisecondi |
+| `extractMinMessages` | number | `2` | Messaggi minimi prima che l'estrazione si attivi |
+| `extractMaxChars` | number | `8000` | Caratteri massimi inviati al LLM |
+
+
+Configurazione `llm` OAuth (usa la cache di login esistente di Codex / ChatGPT per le chiamate LLM):
+```json
+{
+ "llm": {
+ "auth": "oauth",
+ "oauthProvider": "openai-codex",
+ "model": "gpt-5.4",
+ "oauthPath": "${HOME}/.openclaw/.memory-lancedb-pro/oauth.json",
+ "timeoutMs": 30000
+ }
+}
+```
+
+Note per `llm.auth: "oauth"`:
+
+- `llm.oauthProvider` è attualmente `openai-codex`.
+- I token OAuth sono salvati di default in `~/.openclaw/.memory-lancedb-pro/oauth.json`.
+- Puoi impostare `llm.oauthPath` se vuoi salvare quel file altrove.
+- `auth login` crea uno snapshot della configurazione `llm` precedente con api-key accanto al file OAuth, e `auth logout` ripristina quello snapshot quando disponibile.
+- Il passaggio da `api-key` a `oauth` non trasferisce automaticamente `llm.baseURL`. Impostalo manualmente in modalità OAuth solo quando vuoi intenzionalmente un backend personalizzato compatibile ChatGPT/Codex.
+
+
+
+
+Configurazione ciclo di vita (Decadimento + Livelli)
+
+| Campo | Predefinito | Descrizione |
+|-------|---------|-------------|
+| `decay.recencyHalfLifeDays` | `30` | Emivita base per il decadimento di freschezza Weibull |
+| `decay.frequencyWeight` | `0.3` | Peso della frequenza di accesso nel punteggio composito |
+| `decay.intrinsicWeight` | `0.3` | Peso di `importance × confidence` |
+| `decay.betaCore` | `0.8` | Beta Weibull per i ricordi `core` |
+| `decay.betaWorking` | `1.0` | Beta Weibull per i ricordi `working` |
+| `decay.betaPeripheral` | `1.3` | Beta Weibull per i ricordi `peripheral` |
+| `tier.coreAccessThreshold` | `10` | Conteggio minimo richiami prima della promozione a `core` |
+| `tier.peripheralAgeDays` | `60` | Soglia di età per retrocedere i ricordi inattivi |
+
+
+
+
+Rinforzo per accesso
+
+I ricordi richiamati frequentemente decadono più lentamente (stile ripetizione spaziata).
+
+Chiavi di configurazione (sotto `retrieval`):
+- `reinforcementFactor` (0-2, predefinito: `0.5`) — imposta `0` per disabilitare
+- `maxHalfLifeMultiplier` (1-10, predefinito: `3`) — limite massimo sull'emivita effettiva
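+
+L'idea si può abbozzare così (formula puramente illustrativa: quella reale del plugin può differire): ogni richiamo allunga l'emivita effettiva, fino al tetto `maxHalfLifeMultiplier`.
+
+```typescript
+// Schizzo illustrativo: l'emivita effettiva cresce col numero di accessi,
+// con reinforcementFactor = 0 che disattiva il rinforzo e un tetto massimo.
+function effectiveHalfLife(
+  baseDays: number,
+  accessCount: number,
+  reinforcementFactor = 0.5,
+  maxHalfLifeMultiplier = 3,
+): number {
+  const multiplier = 1 + reinforcementFactor * Math.log1p(accessCount);
+  return baseDays * Math.min(multiplier, maxHalfLifeMultiplier);
+}
+
+effectiveHalfLife(30, 0);    // 30: mai richiamato, nessun rinforzo
+effectiveHalfLife(30, 5, 0); // 30: reinforcementFactor a 0 disabilita
+effectiveHalfLife(30, 1000); // 90: limitato da maxHalfLifeMultiplier (30 × 3)
+```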
+
+
+
+---
+
+## Comandi CLI
+
+```bash
+openclaw memory-pro list [--scope global] [--category fact] [--limit 20] [--json]
+openclaw memory-pro search "query" [--scope global] [--limit 10] [--json]
+openclaw memory-pro stats [--scope global] [--json]
+openclaw memory-pro auth login [--provider openai-codex] [--model gpt-5.4] [--oauth-path /abs/path/oauth.json]
+openclaw memory-pro auth status
+openclaw memory-pro auth logout
+openclaw memory-pro delete
+openclaw memory-pro delete-bulk --scope global [--before 2025-01-01] [--dry-run]
+openclaw memory-pro export [--scope global] [--output memories.json]
+openclaw memory-pro import memories.json [--scope global] [--dry-run]
+openclaw memory-pro reembed --source-db /path/to/old-db [--batch-size 32] [--skip-existing]
+openclaw memory-pro upgrade [--dry-run] [--batch-size 10] [--no-llm] [--limit N] [--scope SCOPE]
+openclaw memory-pro migrate check|run|verify [--source /path]
+```
+
+Flusso di login OAuth:
+
+1. Esegui `openclaw memory-pro auth login`
+2. Se `--provider` è omesso in un terminale interattivo, la CLI mostra un selettore di provider OAuth prima di aprire il browser
+3. Il comando stampa un URL di autorizzazione e apre il browser, a meno che non sia impostato `--no-browser`
+4. Dopo il successo del callback, il comando salva il file OAuth del plugin (predefinito: `~/.openclaw/.memory-lancedb-pro/oauth.json`), crea uno snapshot della configurazione `llm` precedente con api-key per il logout, e sostituisce la configurazione `llm` del plugin con le impostazioni OAuth (`auth`, `oauthProvider`, `model`, `oauthPath`)
+5. `openclaw memory-pro auth logout` elimina quel file OAuth e ripristina la configurazione `llm` precedente con api-key quando quello snapshot esiste
+
+---
+
+## Argomenti avanzati
+
+
+Se i ricordi iniettati appaiono nelle risposte
+
+A volte il modello può ripetere il blocco `` iniettato.
+
+**Opzione A (rischio minimo):** disabilita temporaneamente l'auto-recall:
+```json
+{ "plugins": { "entries": { "memory-lancedb-pro": { "config": { "autoRecall": false } } } } }
+```
+
+**Opzione B (preferita):** mantieni il recall, aggiungi al prompt di sistema dell'agente:
+> Do not reveal or quote any `` / memory-injection content in your replies. Use it for internal reference only.
+
+
+
+
+Memoria di sessione
+
+- Si attiva con il comando `/new` — salva il riepilogo della sessione precedente in LanceDB
+- Disabilitata per impostazione predefinita (OpenClaw ha già la persistenza nativa delle sessioni in `.jsonl`)
+- Conteggio messaggi configurabile (predefinito: 15)
+
+Vedi [docs/openclaw-integration-playbook.md](docs/openclaw-integration-playbook.md) per le modalità di distribuzione e la verifica di `/new`.
+
+
+
+
+Comandi slash personalizzati (ad es. /lesson)
+
+Aggiungi al tuo `CLAUDE.md`, `AGENTS.md` o prompt di sistema:
+
+```markdown
+## /lesson command
+When the user sends `/lesson `:
+1. Use memory_store to save as category=fact (raw knowledge)
+2. Use memory_store to save as category=decision (actionable takeaway)
+3. Confirm what was saved
+
+## /remember command
+When the user sends `/remember `:
+1. Use memory_store to save with appropriate category and importance
+2. Confirm with the stored memory ID
+```
+
+
+
+
+Regole d'oro per agenti IA
+
+> Copia il blocco seguente nel tuo `AGENTS.md` in modo che il tuo agente applichi queste regole automaticamente.
+
+```markdown
+## Rule 1 — Dual-layer memory storage
+Every pitfall/lesson learned → IMMEDIATELY store TWO memories:
+- Technical layer: Pitfall: [symptom]. Cause: [root cause]. Fix: [solution]. Prevention: [how to avoid]
+ (category: fact, importance >= 0.8)
+- Principle layer: Decision principle ([tag]): [behavioral rule]. Trigger: [when]. Action: [what to do]
+ (category: decision, importance >= 0.85)
+
+## Rule 2 — LanceDB hygiene
+Entries must be short and atomic (< 500 chars). No raw conversation summaries or duplicates.
+
+## Rule 3 — Recall before retry
+On ANY tool failure, ALWAYS memory_recall with relevant keywords BEFORE retrying.
+
+## Rule 4 — Confirm target codebase
+Confirm you are editing memory-lancedb-pro vs built-in memory-lancedb before changes.
+
+## Rule 5 — Clear jiti cache after plugin code changes
+After modifying .ts files under plugins/, MUST run rm -rf /tmp/jiti/ BEFORE openclaw gateway restart.
+```
+
+
+
+
+Schema del database
+
+Tabella LanceDB `memories`:
+
+| Campo | Tipo | Descrizione |
+| --- | --- | --- |
+| `id` | string (UUID) | Chiave primaria |
+| `text` | string | Testo del ricordo (indicizzato FTS) |
+| `vector` | float[] | Vettore di embedding |
+| `category` | string | Categoria di archiviazione: `preference` / `fact` / `decision` / `entity` / `reflection` / `other` |
+| `scope` | string | Identificatore scope (ad es. `global`, `agent:main`) |
+| `importance` | float | Punteggio di importanza 0-1 |
+| `timestamp` | int64 | Timestamp di creazione (ms) |
+| `metadata` | string (JSON) | Metadati estesi |
+
+Chiavi `metadata` comuni nella v1.1.0: `l0_abstract`, `l1_overview`, `l2_content`, `memory_category`, `tier`, `access_count`, `confidence`, `last_accessed_at`
+
+> **Nota sulle categorie:** Il campo `category` di primo livello usa 6 categorie di archiviazione. Le 6 etichette semantiche dell'Estrazione Intelligente (`profile` / `preferences` / `entities` / `events` / `cases` / `patterns`) sono memorizzate in `metadata.memory_category`.
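+
+In pratica (schizzo con una riga di esempio):
+
+```typescript
+// Schizzo: category è la categoria di archiviazione, mentre l'etichetta
+// semantica dell'estrazione vive in metadata (stringa JSON).
+const row = {
+  category: "preference",
+  metadata: '{"memory_category":"preferences","tier":"working","access_count":4}',
+};
+
+const meta = JSON.parse(row.metadata);
+const semanticLabel = meta.memory_category ?? row.category; // "preferences"
+const tier = meta.tier;                                     // "working"
+```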
+
+
+
+
+Risoluzione dei problemi
+
+### "Cannot mix BigInt and other types" (LanceDB / Apache Arrow)
+
+Con LanceDB 0.26+, alcune colonne numeriche potrebbero essere restituite come `BigInt`. Aggiorna a **memory-lancedb-pro >= 1.0.14** — questo plugin ora converte i valori usando `Number(...)` prima delle operazioni aritmetiche.
+
+
+
+---
+
+## Documentazione
+
+| Documento | Descrizione |
+| --- | --- |
+| [Playbook di integrazione OpenClaw](docs/openclaw-integration-playbook.md) | Modalità di distribuzione, verifica, matrice di regressione |
+| [Analisi dell'architettura della memoria](docs/memory_architecture_analysis.md) | Analisi approfondita dell'architettura completa |
+| [CHANGELOG v1.1.0](docs/CHANGELOG-v1.1.0.md) | Modifiche comportamentali v1.1.0 e motivazioni per l'upgrade |
+| [Chunking contesto lungo](docs/long-context-chunking.md) | Strategia di chunking per documenti lunghi |
+
+---
+
+## Beta: Smart Memory v1.1.0
+
+> Stato: Beta — disponibile tramite `npm i memory-lancedb-pro@beta`. Gli utenti stabili su `latest` non sono interessati.
+
+| Funzionalità | Descrizione |
+|---------|-------------|
+| **Estrazione intelligente** | Estrazione LLM in 6 categorie con metadati L0/L1/L2. Fallback su regex se disabilitato. |
+| **Punteggio ciclo di vita** | Decadimento Weibull integrato nella ricerca — i ricordi frequenti e importanti si posizionano più in alto. |
+| **Gestione livelli** | Sistema a tre livelli (Core → Working → Peripheral) con promozione/retrocessione automatica. |
+
+Feedback: [GitHub Issues](https://github.com/CortexReach/memory-lancedb-pro/issues) · Ripristina: `npm i memory-lancedb-pro@latest`
+
+---
+
+## Dipendenze
+
+| Pacchetto | Scopo |
+| --- | --- |
+| `@lancedb/lancedb` ≥0.26.2 | Database vettoriale (ANN + FTS) |
+| `openai` ≥6.21.0 | Client API Embedding compatibile OpenAI |
+| `@sinclair/typebox` 0.34.48 | Definizioni di tipo JSON Schema |
+
+---
+
+## Contributors
+
+
+
+
+
+
+
+
+
+
+
+
+
+Full list: [Contributors](https://github.com/CortexReach/memory-lancedb-pro/graphs/contributors)
+
+## Star History
+
+
+
+
+
+
+
+
+
+## Licenza
+
+MIT
+
+---
+
+## Il mio QR Code WeChat
+
+
diff --git a/README_JA.md b/README_JA.md
new file mode 100644
index 00000000..0627a2de
--- /dev/null
+++ b/README_JA.md
@@ -0,0 +1,773 @@
+
+
+# 🧠 memory-lancedb-pro · 🦞OpenClaw Plugin
+
+**[OpenClaw](https://github.com/openclaw/openclaw) 에이전트를 위한 AI 메모리 어시스턴트**
+
+*AI 에이전트에게 진짜 기억하는 두뇌를 선물하세요 — 세션을 넘어, 에이전트를 넘어, 시간을 넘어.*
+
+LanceDB 기반 OpenClaw 메모리 플러그인으로, 사용자 선호도·의사결정·프로젝트 맥락을 저장하고 이후 세션에서 자동으로 불러옵니다.
+
+[](https://github.com/openclaw/openclaw)
+[](https://www.npmjs.com/package/memory-lancedb-pro)
+[](https://lancedb.com)
+[](LICENSE)
+
+[English](README.md) | [简体中文](README_CN.md) | [繁體中文](README_TW.md) | [日本語](README_JA.md) | [한국어](README_KO.md) | [Français](README_FR.md) | [Español](README_ES.md) | [Deutsch](README_DE.md) | [Italiano](README_IT.md) | [Русский](README_RU.md) | [Português (Brasil)](README_PT-BR.md)
+
+
+
+---
+
+## 왜 memory-lancedb-pro인가?
+
+대부분의 AI 에이전트는 건망증이 있습니다. 새 채팅을 시작하는 순간 모든 것을 잊어버립니다.
+
+**memory-lancedb-pro**는 OpenClaw를 위한 프로덕션 수준의 장기 기억 플러그인으로, 에이전트를 **AI 메모리 어시스턴트**로 바꿔줍니다 — 중요한 내용을 자동으로 캡처하고, 노이즈는 자연스럽게 희미해지게 하며, 적시에 적절한 기억을 검색합니다. 수동 태그 지정도, 복잡한 설정도 필요 없습니다.
+
+### AI 메모리 어시스턴트 실제 사용 모습
+
+**기억 없이 — 매 세션이 처음부터 시작:**
+
+> **사용자:** "들여쓰기에 탭을 사용하고, 항상 에러 처리를 추가해."
+> *(다음 세션)*
+> **사용자:** "이미 말했잖아 — 스페이스 말고 탭이라고!" 😤
+> *(다음 세션)*
+> **사용자:** "...진짜로, 탭이라고. 에러 처리도. 또."
+
+**memory-lancedb-pro와 함께 — 에이전트가 학습하고 기억합니다:**
+
+> **사용자:** "들여쓰기에 탭을 사용하고, 항상 에러 처리를 추가해."
+> *(다음 세션 — 에이전트가 사용자 선호도를 자동으로 불러옴)*
+> **에이전트:** *(자동으로 탭 + 에러 처리 적용)* ✅
+> **사용자:** "지난달에 왜 MongoDB 대신 PostgreSQL을 선택했지?"
+> **에이전트:** "2월 12일 논의 내용에 따르면, 주요 이유는..." ✅
+
+이것이 **AI 메모리 어시스턴트**가 만드는 차이입니다 — 사용자의 스타일을 학습하고, 과거 결정을 불러오며, 반복 없이 개인화된 응답을 제공합니다.
+
+### 그 외 무엇을 할 수 있나요?
+
+| | 제공 기능 |
+|---|---|
+| **Auto-Capture** | 에이전트가 모든 대화에서 학습 — 수동 `memory_store` 불필요 |
+| **Smart Extraction** | LLM 기반 6개 카테고리 분류: profile, preferences, entities, events, cases, patterns |
+| **Intelligent Forgetting** | Weibull 감쇠 모델 — 중요한 기억은 유지, 노이즈는 자연스럽게 사라짐 |
+| **Hybrid Retrieval** | 벡터 + BM25 전문 검색, Cross-Encoder 리랭킹으로 융합 |
+| **Context Injection** | 관련 기억이 매 응답 전에 자동으로 불러와짐 |
+| **Multi-Scope Isolation** | 에이전트별, 사용자별, 프로젝트별 메모리 경계 |
+| **Any Provider** | OpenAI, Jina, Gemini, Ollama 또는 OpenAI 호환 API 모두 지원 |
+| **Full Toolkit** | CLI, 백업, 마이그레이션, 업그레이드, 내보내기/가져오기 — 프로덕션 환경에 적합 |
+
+---
+
+## 빠른 시작
+
+### 옵션 A: 원클릭 설치 스크립트 (권장)
+
+커뮤니티에서 관리하는 **[설치 스크립트](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)**가 설치, 업그레이드, 복구를 하나의 명령어로 처리합니다:
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/CortexReach/toolbox/main/memory-lancedb-pro-setup/setup-memory.sh -o setup-memory.sh
+bash setup-memory.sh
+```
+
+> 스크립트가 다루는 전체 시나리오와 기타 커뮤니티 도구 목록은 아래 [에코시스템](#에코시스템)을 참조하세요.
+
+### 옵션 B: 수동 설치
+
+**OpenClaw CLI를 통한 설치 (권장):**
+```bash
+openclaw plugins install memory-lancedb-pro@beta
+```
+
+**또는 npm을 통한 설치:**
+```bash
+npm i memory-lancedb-pro@beta
+```
+> npm을 사용하는 경우, `openclaw.json`의 `plugins.load.paths`에 플러그인 설치 디렉터리의 **절대** 경로를 추가해야 합니다. 이것이 가장 흔한 설정 문제입니다.
+
+`openclaw.json`에 다음을 추가하세요:
+
+```json
+{
+ "plugins": {
+ "slots": { "memory": "memory-lancedb-pro" },
+ "entries": {
+ "memory-lancedb-pro": {
+ "enabled": true,
+ "config": {
+ "embedding": {
+ "provider": "openai-compatible",
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "text-embedding-3-small"
+ },
+ "autoCapture": true,
+ "autoRecall": true,
+ "smartExtraction": true,
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000,
+ "sessionMemory": { "enabled": false }
+ }
+ }
+ }
+ }
+}
+```
+
+**왜 이러한 기본값인가?**
+- `autoCapture` + `smartExtraction` → 에이전트가 모든 대화에서 자동으로 학습
+- `autoRecall` → 매 응답 전에 관련 기억이 주입됨
+- `extractMinMessages: 2` → 일반적인 두 턴 대화에서 추출이 시작됨
+- `sessionMemory.enabled: false` → 초기에 세션 요약으로 검색이 오염되는 것을 방지
+
+검증 및 재시작:
+
+```bash
+openclaw config validate
+openclaw gateway restart
+openclaw logs --follow --plain | grep "memory-lancedb-pro"
+```
+
+다음이 표시되어야 합니다:
+- `memory-lancedb-pro: smart extraction enabled`
+- `memory-lancedb-pro@...: plugin registered`
+
+완료! 이제 에이전트가 장기 기억을 갖게 됩니다.
+
+
+추가 설치 경로 (기존 사용자, 업그레이드)
+
+**이미 OpenClaw를 사용 중인 경우:**
+
+1. **절대** 경로의 `plugins.load.paths` 항목으로 플러그인 추가
+2. 메모리 슬롯 바인딩: `plugins.slots.memory = "memory-lancedb-pro"`
+3. 확인: `openclaw plugins info memory-lancedb-pro && openclaw memory-pro stats`
+
+**v1.1.0 이전 버전에서 업그레이드하는 경우:**
+
+```bash
+# 1) 백업
+openclaw memory-pro export --scope global --output memories-backup.json
+# 2) 시뮬레이션 실행
+openclaw memory-pro upgrade --dry-run
+# 3) 업그레이드 실행
+openclaw memory-pro upgrade
+# 4) 확인
+openclaw memory-pro stats
+```
+
+동작 변경사항과 업그레이드 근거는 `CHANGELOG-v1.1.0.md`를 참조하세요.
+
+
+
+
+Telegram 봇 빠른 가져오기 (클릭하여 펼치기)
+
+OpenClaw의 Telegram 연동을 사용하는 경우, 수동으로 설정을 편집하는 대신 메인 봇에 가져오기 명령어를 직접 보내는 것이 가장 쉬운 방법입니다.
+
+다음 메시지를 전송하세요 (봇에 그대로 복사하여 붙여넣기하는 영문 프롬프트입니다):
+
+```text
+Help me connect this memory plugin with the most user-friendly configuration: https://github.com/CortexReach/memory-lancedb-pro
+
+Requirements:
+1. Set it as the only active memory plugin
+2. Use Jina for embedding
+3. Use Jina for reranker
+4. Use gpt-4o-mini for the smart-extraction LLM
+5. Enable autoCapture, autoRecall, smartExtraction
+6. extractMinMessages=2
+7. sessionMemory.enabled=false
+8. captureAssistant=false
+9. retrieval mode=hybrid, vectorWeight=0.7, bm25Weight=0.3
+10. rerank=cross-encoder, candidatePoolSize=12, minScore=0.6, hardMinScore=0.62
+11. Generate the final openclaw.json config directly, not just an explanation
+```
+
+
+
+---
+
+## 에코시스템
+
+memory-lancedb-pro는 핵심 플러그인입니다. 커뮤니티에서 설정과 일상적인 사용을 더욱 원활하게 만드는 도구들을 구축했습니다:
+
+### 설치 스크립트 — 원클릭 설치, 업그레이드 및 복구
+
+> **[CortexReach/toolbox/memory-lancedb-pro-setup](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)**
+
+단순한 인스톨러가 아닙니다 — 스크립트가 다양한 실제 시나리오를 지능적으로 처리합니다:
+
+| 상황 | 스크립트의 동작 |
+|---|---|
+| 설치한 적 없음 | 새로 다운로드 → 의존성 설치 → 설정 선택 → openclaw.json에 기록 → 재시작 |
+| `git clone`으로 설치, 이전 커밋에서 멈춤 | 자동 `git fetch` + `checkout`으로 최신 버전 이동 → 의존성 재설치 → 확인 |
+| 설정에 유효하지 않은 필드 존재 | 스키마 필터를 통한 자동 감지, 지원되지 않는 필드 제거 |
+| `npm`으로 설치 | git 업데이트 건너뜀, `npm update` 직접 실행 알림 |
+| 유효하지 않은 설정으로 `openclaw` CLI 동작 불가 | 대체 방법: `openclaw.json` 파일에서 직접 워크스페이스 경로 읽기 |
+| `plugins/` 대신 `extensions/` 사용 | 설정 또는 파일시스템에서 플러그인 위치 자동 감지 |
+| 이미 최신 상태 | 상태 확인만 실행, 변경 없음 |
+
+```bash
+bash setup-memory.sh # 설치 또는 업그레이드
+bash setup-memory.sh --dry-run # 미리보기만
+bash setup-memory.sh --beta # 사전 릴리스 버전 포함
+bash setup-memory.sh --uninstall # 설정 복원 및 플러그인 제거
+```
+
+내장 프로바이더 프리셋: **Jina / DashScope / SiliconFlow / OpenAI / Ollama**, 또는 자체 OpenAI 호환 API를 사용할 수 있습니다. `--ref`, `--selfcheck-only` 등 전체 사용법은 [설치 스크립트 README](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)를 참조하세요.
+
+### Claude Code / OpenClaw Skill — AI 가이드 설정
+
+> **[CortexReach/memory-lancedb-pro-skill](https://github.com/CortexReach/memory-lancedb-pro-skill)**
+
+이 Skill을 설치하면 AI 에이전트(Claude Code 또는 OpenClaw)가 memory-lancedb-pro의 모든 기능에 대한 깊은 지식을 갖게 됩니다. **"최적의 설정을 도와줘"**라고 말하면 다음을 제공합니다:
+
+- **가이드 7단계 설정 워크플로우**와 4가지 배포 계획:
+ - Full Power (Jina + OpenAI) / Budget (무료 SiliconFlow 리랭커) / Simple (OpenAI만) / Fully Local (Ollama, API 비용 제로)
+- **모든 9개 MCP 도구**의 올바른 사용법: `memory_recall`, `memory_store`, `memory_forget`, `memory_update`, `memory_stats`, `memory_list`, `self_improvement_log`, `self_improvement_extract_skill`, `self_improvement_review` *(전체 도구 세트를 사용하려면 `enableManagementTools: true`가 필요합니다 — 기본 빠른 시작 설정은 4개 핵심 도구만 노출합니다)*
+- **일반적인 함정 방지**: 워크스페이스 플러그인 활성화, `autoRecall` 기본값 false, jiti 캐시, 환경 변수, 스코프 격리 등
+
+**Claude Code용 설치:**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.claude/skills/memory-lancedb-pro
+```
+
+**OpenClaw용 설치:**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.openclaw/workspace/skills/memory-lancedb-pro-skill
+```
+
+---
+
+## 비디오 튜토리얼
+
+> 전체 안내: 설치, 설정, 하이브리드 검색 내부 구조.
+
+[](https://youtu.be/MtukF1C8epQ)
+**https://youtu.be/MtukF1C8epQ**
+
+[](https://www.bilibili.com/video/BV1zUf2BGEgn/)
+**https://www.bilibili.com/video/BV1zUf2BGEgn/**
+
+---
+
+## 아키텍처
+
+```
+┌───────────────────────────────────────────────────────┐
+│                  index.ts (진입점)                    │
+│      플러그인 등록 · 설정 파싱 · 라이프사이클 훅      │
+└────────┬──────────┬─────────┬──────────┬──────────────┘
+         │          │         │          │
+    ┌────▼───┐ ┌────▼───┐ ┌───▼─────┐ ┌──▼──────────┐
+    │ store  │ │embedder│ │retriever│ │   scopes    │
+    │  .ts   │ │  .ts   │ │   .ts   │ │    .ts      │
+    └────────┘ └────────┘ └─────────┘ └─────────────┘
+         │                    │
+    ┌────▼───┐      ┌─────────▼──────┐
+    │migrate │      │noise-filter.ts │
+    │  .ts   │      │adaptive-       │
+    └────────┘      │retrieval.ts    │
+                    └────────────────┘
+    ┌───────────────┐ ┌──────────┐
+    │   tools.ts    │ │  cli.ts  │
+    │ (에이전트API) │ │  (CLI)   │
+    └───────────────┘ └──────────┘
+```
+
+> 전체 아키텍처에 대한 심층 분석은 [docs/memory_architecture_analysis.md](docs/memory_architecture_analysis.md)를 참조하세요.
+
+
+파일 레퍼런스 (클릭하여 펼치기)
+
+| 파일 | 용도 |
+| --- | --- |
+| `index.ts` | 플러그인 진입점. OpenClaw Plugin API에 등록, 설정 파싱, 라이프사이클 훅 마운트 |
+| `openclaw.plugin.json` | 플러그인 메타데이터 + 전체 JSON Schema 설정 선언 |
+| `cli.ts` | CLI 명령어: `memory-pro list/search/stats/delete/delete-bulk/export/import/reembed/upgrade/migrate` |
+| `src/store.ts` | LanceDB 스토리지 레이어. 테이블 생성 / FTS 인덱싱 / 벡터 검색 / BM25 검색 / CRUD |
+| `src/embedder.ts` | 임베딩 추상화. OpenAI 호환 API 프로바이더 모두 지원 |
+| `src/retriever.ts` | 하이브리드 검색 엔진. 벡터 + BM25 → 하이브리드 퓨전 → 리랭크 → 라이프사이클 감쇠 → 필터 |
+| `src/scopes.ts` | 멀티 스코프 접근 제어 |
+| `src/tools.ts` | 에이전트 도구 정의: `memory_recall`, `memory_store`, `memory_forget`, `memory_update` + 관리 도구 |
+| `src/noise-filter.ts` | 에이전트 거절, 메타 질문, 인사, 저품질 콘텐츠 필터링 |
+| `src/adaptive-retrieval.ts` | 쿼리에 메모리 검색이 필요한지 판단 |
+| `src/migrate.ts` | 내장 `memory-lancedb`에서 Pro로의 마이그레이션 |
+| `src/smart-extractor.ts` | LLM 기반 6개 카테고리 추출 + L0/L1/L2 계층 저장 + 2단계 중복 제거 |
+| `src/decay-engine.ts` | Weibull 확장 지수 감쇠 모델 |
+| `src/tier-manager.ts` | 3단계 승격/강등: Peripheral ↔ Working ↔ Core |
+
+
+
+---
+
+## 핵심 기능
+
+### 하이브리드 검색
+
+```
+Query → embedQuery() ─┐
+                      ├─→ Hybrid Fusion → Rerank → Lifecycle Decay Boost → Length Norm → Filter
+Query → BM25 FTS ─────┘
+```
+
+- **벡터 검색** — LanceDB ANN을 통한 의미적 유사도 (코사인 거리)
+- **BM25 전문 검색** — LanceDB FTS 인덱스를 통한 정확한 키워드 매칭
+- **하이브리드 퓨전** — 벡터 스코어를 기본으로, BM25 히트에 가중 부스트 적용 (표준 RRF가 아님 — 실제 검색 품질에 맞게 튜닝됨)
+- **가중치 설정 가능** — `vectorWeight`, `bm25Weight`, `minScore`
+
+### Cross-Encoder 리랭킹
+
+- **Jina**, **SiliconFlow**, **Voyage AI**, **Pinecone** 내장 어댑터
+- Jina 호환 엔드포인트와 호환 (예: Hugging Face TEI, DashScope)
+- 하이브리드 스코어링: Cross-Encoder 60% + 원래 퓨전 스코어 40%
+- 그레이스풀 디그레이데이션: API 실패 시 코사인 유사도로 폴백
+
+### 다단계 스코어링 파이프라인
+
+| 단계 | 효과 |
+| --- | --- |
+| **하이브리드 퓨전** | 의미적 검색과 정확한 매칭 결합 |
+| **Cross-Encoder 리랭크** | 의미적으로 정확한 결과 승격 |
+| **라이프사이클 감쇠 부스트** | Weibull 최신성 + 접근 빈도 + 중요도 × 신뢰도 |
+| **길이 정규화** | 긴 항목이 결과를 지배하는 것을 방지 (앵커: 500자) |
+| **최소 점수 하한** | 관련 없는 결과 제거 (기본값: 0.35) |
+| **MMR 다양성** | 코사인 유사도 > 0.85 → 강등 |
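+
+위 파이프라인을 개념적으로 옮기면 다음과 같습니다. 가중치와 수식은 본문 설명에 기반한 가정이며, 실제 구현(`src/retriever.ts`)과 세부가 다를 수 있습니다:
+
+```javascript
+// 다단계 스코어링 파이프라인의 개념적 스케치 (수치·순서는 본문 설명 기반 가정)
+function pipelineScore(vecScore, bm25Hit, ceScore, decayBoost, textLen,
+    { vectorWeight = 0.7, bm25Weight = 0.3, lengthAnchor = 500, hardMinScore = 0.35 } = {}) {
+  // 1) 하이브리드 퓨전: 벡터 점수를 기본으로, BM25 히트 시 가중 부스트 (표준 RRF 아님)
+  const fused = vectorWeight * vecScore + (bm25Hit ? bm25Weight : 0);
+  // 2) Cross-Encoder 리랭크: CE 60% + 원래 퓨전 스코어 40%
+  const reranked = 0.6 * ceScore + 0.4 * fused;
+  // 3) 라이프사이클 감쇠 부스트 (최신성·빈도·중요도 기반 계수로 가정)
+  const boosted = reranked * decayBoost;
+  // 4) 길이 정규화: 앵커(500자)보다 긴 항목의 지배 방지
+  const normed = boosted * Math.min(1, lengthAnchor / Math.max(textLen, 1));
+  // 5) 최소 점수 하한 (MMR 다양성 단계는 이 스케치에서 생략)
+  return normed >= hardMinScore ? normed : null;
+}
+```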
+
+### Smart Memory Extraction (v1.1.0)
+
+- **LLM 기반 6개 카테고리 추출**: profile, preferences, entities, events, cases, patterns
+- **L0/L1/L2 계층 저장**: L0 (한 줄 인덱스) → L1 (구조화된 요약) → L2 (전체 내러티브)
+- **2단계 중복 제거**: 벡터 유사도 사전 필터 (≥0.7) → LLM 의미 판단 (CREATE/MERGE/SKIP)
+- **카테고리 인식 병합**: `profile`은 항상 병합, `events`/`cases`는 추가 전용
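+
+L0/L1/L2 계층이 저장될 때의 대략적인 형태입니다 (필드 값은 설명을 위한 가상의 예시입니다):
+
+```json
+{
+  "text": "사용자는 들여쓰기에 탭을 선호한다",
+  "category": "preference",
+  "metadata": {
+    "memory_category": "preferences",
+    "l0_abstract": "들여쓰기: 탭 선호",
+    "l1_overview": "코드 스타일 — 탭 들여쓰기 + 에러 처리 필수",
+    "l2_content": "사용자가 여러 세션에 걸쳐 탭 들여쓰기와 에러 처리 추가를 요청함"
+  }
+}
+```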
+
+### 메모리 라이프사이클 관리 (v1.1.0)
+
+- **Weibull 감쇠 엔진**: 복합 점수 = 최신성 + 빈도 + 내재적 가치
+- **3단계 승격**: `Peripheral ↔ Working ↔ Core`, 설정 가능한 임계값
+- **접근 강화**: 자주 불러오는 기억은 더 느리게 감쇠 (간격 반복 학습 방식)
+- **중요도 조절 반감기**: 중요한 기억은 더 느리게 감쇠
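+
+Weibull 최신성 항의 최소 스케치입니다. 반감기에서 점수가 0.5가 되도록 λ를 역산하는 방식은 본문 설명에 기반한 가정이며, 실제 구현은 `src/decay-engine.ts`를 참조하세요:
+
+```javascript
+// Weibull 최신성: S(t) = exp(-(t/λ)^β). λ는 t=반감기에서 S=0.5가 되도록 역산.
+// β가 작을수록(core) 장기적으로 더 느리게 감쇠합니다.
+function weibullRecency(ageDays, halfLifeDays = 30, beta = 1.0) {
+  const lambda = halfLifeDays / Math.pow(Math.log(2), 1 / beta);
+  return Math.exp(-Math.pow(ageDays / lambda, beta));
+}
+```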
+
+### Multi-Scope 격리
+
+- 내장 스코프: `global`, `agent:`, `custom:`, `project:`, `user:`
+- `scopes.agentAccess`를 통한 에이전트 수준 접근 제어
+- 기본값: 각 에이전트가 `global` + 자체 `agent:` 스코프에 접근
+
+### Auto-Capture 및 Auto-Recall
+
+- **Auto-Capture** (`agent_end`): 대화에서 선호도/사실/결정/엔티티를 추출, 중복 제거, 턴당 최대 3개 저장
+- **Auto-Recall** (`before_agent_start`): `` 컨텍스트 주입 (최대 3개 항목)
+
+### 노이즈 필터링 및 적응형 검색
+
+- 저품질 콘텐츠 필터링: 에이전트 거절, 메타 질문, 인사
+- 인사, 슬래시 명령어, 간단한 확인, 이모지에 대해서는 검색 건너뜀
+- 기억 키워드에 대해서는 검색 강제 실행 ("기억해", "이전에", "지난번에")
+- CJK 인식 임계값 (중국어: 6자 vs 영어: 15자)
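+
+임계값 로직의 개념적 스케치입니다. 키워드 목록과 판정 순서는 본문 설명에 기반한 가정이며, 실제 구현은 `src/adaptive-retrieval.ts`를 참조하세요:
+
+```javascript
+// 개념적 스케치: 쿼리에 메모리 검색이 필요한지 판단
+const MEMORY_KEYWORDS = ["기억해", "이전에", "지난번에"];
+
+function shouldRetrieve(query) {
+  const q = query.trim();
+  if (MEMORY_KEYWORDS.some((k) => q.includes(k))) return true; // 기억 키워드 → 검색 강제
+  if (q.startsWith("/")) return false;                         // 슬래시 명령어 → 건너뜀
+  const hasCjk = /[\u3131-\uD79D]/.test(q);                    // 한글·한자 포함 여부
+  const minLen = hasCjk ? 6 : 15;                              // CJK 인식 임계값
+  return q.length >= minLen;
+}
+```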
+
+---
+
+
+내장 memory-lancedb와의 비교 (클릭하여 펼치기)
+
+| 기능 | 내장 `memory-lancedb` | **memory-lancedb-pro** |
+| --- | :---: | :---: |
+| 벡터 검색 | 예 | 예 |
+| BM25 전문 검색 | - | 예 |
+| 하이브리드 퓨전 (벡터 + BM25) | - | 예 |
+| Cross-Encoder 리랭크 (멀티 프로바이더) | - | 예 |
+| 최신성 부스트 및 시간 감쇠 | - | 예 |
+| 길이 정규화 | - | 예 |
+| MMR 다양성 | - | 예 |
+| 멀티 스코프 격리 | - | 예 |
+| 노이즈 필터링 | - | 예 |
+| 적응형 검색 | - | 예 |
+| 관리 CLI | - | 예 |
+| 세션 메모리 | - | 예 |
+| 태스크 인식 임베딩 | - | 예 |
+| **LLM Smart Extraction (6개 카테고리)** | - | 예 (v1.1.0) |
+| **Weibull 감쇠 + 단계 승격** | - | 예 (v1.1.0) |
+| OpenAI 호환 임베딩 | 제한적 | 예 |
+
+
+
+---
+
+## 설정
+
+
+전체 설정 예시
+
+```json
+{
+ "embedding": {
+ "apiKey": "${JINA_API_KEY}",
+ "model": "jina-embeddings-v5-text-small",
+ "baseURL": "https://api.jina.ai/v1",
+ "dimensions": 1024,
+ "taskQuery": "retrieval.query",
+ "taskPassage": "retrieval.passage",
+ "normalized": true
+ },
+ "dbPath": "~/.openclaw/memory/lancedb-pro",
+ "autoCapture": true,
+ "autoRecall": true,
+ "retrieval": {
+ "mode": "hybrid",
+ "vectorWeight": 0.7,
+ "bm25Weight": 0.3,
+ "minScore": 0.3,
+ "rerank": "cross-encoder",
+ "rerankApiKey": "${JINA_API_KEY}",
+ "rerankModel": "jina-reranker-v3",
+ "rerankEndpoint": "https://api.jina.ai/v1/rerank",
+ "rerankProvider": "jina",
+ "candidatePoolSize": 20,
+ "recencyHalfLifeDays": 14,
+ "recencyWeight": 0.1,
+ "filterNoise": true,
+ "lengthNormAnchor": 500,
+ "hardMinScore": 0.35,
+ "timeDecayHalfLifeDays": 60,
+ "reinforcementFactor": 0.5,
+ "maxHalfLifeMultiplier": 3
+ },
+ "enableManagementTools": false,
+ "scopes": {
+ "default": "global",
+ "definitions": {
+ "global": { "description": "Shared knowledge" },
+ "agent:discord-bot": { "description": "Discord bot private" }
+ },
+ "agentAccess": {
+ "discord-bot": ["global", "agent:discord-bot"]
+ }
+ },
+ "sessionMemory": {
+ "enabled": false,
+ "messageCount": 15
+ },
+ "smartExtraction": true,
+ "llm": {
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "gpt-4o-mini",
+ "baseURL": "https://api.openai.com/v1"
+ },
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000
+}
+```
+
+
+
+
+임베딩 프로바이더
+
+**OpenAI 호환 임베딩 API**와 모두 동작합니다:
+
+| 프로바이더 | 모델 | Base URL | 차원 |
+| --- | --- | --- | --- |
+| **Jina** (권장) | `jina-embeddings-v5-text-small` | `https://api.jina.ai/v1` | 1024 |
+| **OpenAI** | `text-embedding-3-small` | `https://api.openai.com/v1` | 1536 |
+| **Voyage** | `voyage-4-lite` / `voyage-4` | `https://api.voyageai.com/v1` | 1024 / 1024 |
+| **Google Gemini** | `gemini-embedding-001` | `https://generativelanguage.googleapis.com/v1beta/openai/` | 3072 |
+| **Ollama** (로컬) | `nomic-embed-text` | `http://localhost:11434/v1` | 프로바이더별 상이 |
+
+
+
+
+리랭크 프로바이더
+
+Cross-Encoder 리랭킹은 `rerankProvider`를 통해 여러 프로바이더를 지원합니다:
+
+| 프로바이더 | `rerankProvider` | 예시 모델 |
+| --- | --- | --- |
+| **Jina** (기본값) | `jina` | `jina-reranker-v3` |
+| **SiliconFlow** (무료 티어 제공) | `siliconflow` | `BAAI/bge-reranker-v2-m3` |
+| **Voyage AI** | `voyage` | `rerank-2.5` |
+| **Pinecone** | `pinecone` | `bge-reranker-v2-m3` |
+
+Jina 호환 리랭크 엔드포인트도 사용 가능합니다 — `rerankProvider: "jina"`로 설정하고 `rerankEndpoint`를 해당 서비스로 지정하세요 (예: Hugging Face TEI, DashScope `qwen3-rerank`).
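+
+예를 들어 셀프 호스팅한 Jina 호환 리랭크 엔드포인트를 쓰는 경우의 `retrieval` 설정 스케치입니다 (URL과 모델명은 가상의 예시입니다):
+
+```json
+{
+  "retrieval": {
+    "rerank": "cross-encoder",
+    "rerankProvider": "jina",
+    "rerankEndpoint": "http://localhost:8080/rerank",
+    "rerankModel": "BAAI/bge-reranker-v2-m3",
+    "rerankApiKey": "${TEI_API_KEY}"
+  }
+}
+```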
+
+
+
+
+Smart Extraction (LLM) — v1.1.0
+
+`smartExtraction`이 활성화되면 (기본값: `true`), 플러그인이 정규식 기반 트리거 대신 LLM을 사용하여 기억을 지능적으로 추출하고 분류합니다.
+
+| 필드 | 타입 | 기본값 | 설명 |
+|-------|------|---------|-------------|
+| `smartExtraction` | boolean | `true` | LLM 기반 6개 카테고리 추출 활성화/비활성화 |
+| `llm.auth` | string | `api-key` | `api-key`는 `llm.apiKey` / `embedding.apiKey`를 사용; `oauth`는 기본적으로 플러그인 범위의 OAuth 토큰 파일을 사용 |
+| `llm.apiKey` | string | *(`embedding.apiKey`로 폴백)* | LLM 프로바이더용 API 키 |
+| `llm.model` | string | `openai/gpt-oss-120b` | LLM 모델명 |
+| `llm.baseURL` | string | *(`embedding.baseURL`로 폴백)* | LLM API 엔드포인트 |
+| `llm.oauthProvider` | string | `openai-codex` | `llm.auth`가 `oauth`일 때 사용되는 OAuth 프로바이더 ID |
+| `llm.oauthPath` | string | `~/.openclaw/.memory-lancedb-pro/oauth.json` | `llm.auth`가 `oauth`일 때 사용되는 OAuth 토큰 파일 |
+| `llm.timeoutMs` | number | `30000` | LLM 요청 타임아웃 (밀리초) |
+| `extractMinMessages` | number | `2` | 추출이 시작되는 최소 메시지 수 |
+| `extractMaxChars` | number | `8000` | LLM에 전송되는 최대 문자 수 |
+
+
+OAuth `llm` 설정 (기존 Codex / ChatGPT 로그인 캐시를 LLM 호출에 사용):
+```json
+{
+ "llm": {
+ "auth": "oauth",
+ "oauthProvider": "openai-codex",
+ "model": "gpt-5.4",
+ "oauthPath": "${HOME}/.openclaw/.memory-lancedb-pro/oauth.json",
+ "timeoutMs": 30000
+ }
+}
+```
+
+`llm.auth: "oauth"` 참고사항:
+
+- 현재 지원되는 `llm.oauthProvider` 값은 `openai-codex`입니다.
+- OAuth 토큰은 기본적으로 `~/.openclaw/.memory-lancedb-pro/oauth.json`에 저장됩니다.
+- 파일을 다른 곳에 저장하려면 `llm.oauthPath`를 설정하세요.
+- `auth login`은 OAuth 파일 옆에 이전 api-key `llm` 설정의 스냅샷을 저장하며, `auth logout`은 해당 스냅샷이 있을 때 복원합니다.
+- `api-key`에서 `oauth`로 전환할 때 `llm.baseURL`이 자동으로 이전되지 않습니다. OAuth 모드에서 의도적으로 사용자 정의 ChatGPT/Codex 호환 백엔드를 원하는 경우에만 수동으로 설정하세요.
+
+
+
+
+라이프사이클 설정 (감쇠 + 단계)
+
+| 필드 | 기본값 | 설명 |
+|-------|---------|-------------|
+| `decay.recencyHalfLifeDays` | `30` | Weibull 최신성 감쇠의 기본 반감기 |
+| `decay.frequencyWeight` | `0.3` | 복합 점수에서 접근 빈도의 가중치 |
+| `decay.intrinsicWeight` | `0.3` | `importance × confidence`의 가중치 |
+| `decay.betaCore` | `0.8` | `core` 기억의 Weibull 베타 |
+| `decay.betaWorking` | `1.0` | `working` 기억의 Weibull 베타 |
+| `decay.betaPeripheral` | `1.3` | `peripheral` 기억의 Weibull 베타 |
+| `tier.coreAccessThreshold` | `10` | `core`로 승격하기 위한 최소 호출 횟수 |
+| `tier.peripheralAgeDays` | `60` | 오래된 기억을 강등하기 위한 경과 일수 임계값 |
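+
+위 표의 필드를 설정 파일 형태로 옮기면 다음과 같습니다 (값은 기본값 그대로입니다):
+
+```json
+{
+  "decay": {
+    "recencyHalfLifeDays": 30,
+    "frequencyWeight": 0.3,
+    "intrinsicWeight": 0.3,
+    "betaCore": 0.8,
+    "betaWorking": 1.0,
+    "betaPeripheral": 1.3
+  },
+  "tier": {
+    "coreAccessThreshold": 10,
+    "peripheralAgeDays": 60
+  }
+}
+```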
+
+
+
+
+접근 강화
+
+자주 불러오는 기억은 더 느리게 감쇠합니다 (간격 반복 학습 방식).
+
+설정 키 (`retrieval` 하위):
+- `reinforcementFactor` (0-2, 기본값: `0.5`) — `0`으로 설정하면 비활성화
+- `maxHalfLifeMultiplier` (1-10, 기본값: `3`) — 유효 반감기의 하드 캡
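+
+두 키를 `retrieval` 하위에 함께 두는 형태입니다 (값은 기본값):
+
+```json
+{
+  "retrieval": {
+    "reinforcementFactor": 0.5,
+    "maxHalfLifeMultiplier": 3
+  }
+}
+```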
+
+
+
+---
+
+## CLI 명령어
+
+```bash
+openclaw memory-pro list [--scope global] [--category fact] [--limit 20] [--json]
+openclaw memory-pro search "query" [--scope global] [--limit 10] [--json]
+openclaw memory-pro stats [--scope global] [--json]
+openclaw memory-pro auth login [--provider openai-codex] [--model gpt-5.4] [--oauth-path /abs/path/oauth.json]
+openclaw memory-pro auth status
+openclaw memory-pro auth logout
+openclaw memory-pro delete <id>
+openclaw memory-pro delete-bulk --scope global [--before 2025-01-01] [--dry-run]
+openclaw memory-pro export [--scope global] [--output memories.json]
+openclaw memory-pro import memories.json [--scope global] [--dry-run]
+openclaw memory-pro reembed --source-db /path/to/old-db [--batch-size 32] [--skip-existing]
+openclaw memory-pro upgrade [--dry-run] [--batch-size 10] [--no-llm] [--limit N] [--scope SCOPE]
+openclaw memory-pro migrate check|run|verify [--source /path]
+```
+
+OAuth 로그인 흐름:
+
+1. `openclaw memory-pro auth login` 실행
+2. `--provider`를 생략하고 대화형 터미널에서 실행하면, 브라우저를 열기 전에 CLI가 OAuth 프로바이더 선택기를 표시합니다
+3. 명령어가 인증 URL을 출력하고 `--no-browser`가 설정되지 않은 한 브라우저를 엽니다
+4. 콜백이 성공하면, 명령어가 플러그인 OAuth 파일 (기본값: `~/.openclaw/.memory-lancedb-pro/oauth.json`)을 저장하고, 이전 api-key `llm` 설정의 스냅샷을 로그아웃용으로 저장하며, 플러그인 `llm` 설정을 OAuth 설정 (`auth`, `oauthProvider`, `model`, `oauthPath`)으로 교체합니다
+5. `openclaw memory-pro auth logout`은 해당 OAuth 파일을 삭제하고 스냅샷이 존재하면 이전 api-key `llm` 설정을 복원합니다
+
+---
+
+## 고급 주제
+
+
+주입된 기억이 응답에 표시되는 경우
+
+가끔 모델이 주입된 `` 블록을 그대로 출력할 수 있습니다.
+
+**옵션 A (가장 안전):** 일시적으로 Auto-Recall 비활성화:
+```json
+{ "plugins": { "entries": { "memory-lancedb-pro": { "config": { "autoRecall": false } } } } }
+```
+
+**옵션 B (권장):** Auto-Recall은 유지하고 에이전트 시스템 프롬프트에 추가:
+> `` / 메모리 주입 콘텐츠를 응답에 노출하거나 인용하지 마세요. 내부 참고용으로만 사용하세요.
+
+
+
+
+세션 메모리
+
+- `/new` 명령어 시 작동 — 이전 세션 요약을 LanceDB에 저장
+- 기본적으로 비활성화 (OpenClaw에 이미 네이티브 `.jsonl` 세션 영속화 기능이 있음)
+- 메시지 수 설정 가능 (기본값: 15)
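+
+세션 요약 저장을 켜고 싶다면 (기본값은 비활성화):
+
+```json
+{
+  "sessionMemory": {
+    "enabled": true,
+    "messageCount": 15
+  }
+}
+```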
+
+배포 모드와 `/new` 검증에 대해서는 [docs/openclaw-integration-playbook.md](docs/openclaw-integration-playbook.md)를 참조하세요.
+
+
+
+
+커스텀 슬래시 명령어 (예: /lesson)
+
+`CLAUDE.md`, `AGENTS.md` 또는 시스템 프롬프트에 다음을 추가하세요 (에이전트가 읽는 영문 지시문이므로 그대로 사용합니다):
+
+```markdown
+## /lesson command
+When the user sends `/lesson <content>`:
+1. Use memory_store to save as category=fact (raw knowledge)
+2. Use memory_store to save as category=decision (actionable takeaway)
+3. Confirm what was saved
+
+## /remember command
+When the user sends `/remember <content>`:
+1. Use memory_store to save with appropriate category and importance
+2. Confirm with the stored memory ID
+```
+
+
+
+
+AI 에이전트를 위한 철칙
+
+> 아래 블록을 `AGENTS.md`에 복사하여 에이전트가 이 규칙을 자동으로 적용하도록 하세요 (에이전트가 읽는 영문 지시문이므로 그대로 사용합니다).
+
+```markdown
+## Rule 1 — Dual-layer memory storage
+Every pitfall/lesson learned → IMMEDIATELY store TWO memories:
+- Technical layer: Pitfall: [symptom]. Cause: [root cause]. Fix: [solution]. Prevention: [how to avoid]
+ (category: fact, importance >= 0.8)
+- Principle layer: Decision principle ([tag]): [behavioral rule]. Trigger: [when]. Action: [what to do]
+ (category: decision, importance >= 0.85)
+
+## Rule 2 — LanceDB hygiene
+Entries must be short and atomic (< 500 chars). No raw conversation summaries or duplicates.
+
+## Rule 3 — Recall before retry
+On ANY tool failure, ALWAYS memory_recall with relevant keywords BEFORE retrying.
+
+## Rule 4 — Confirm target codebase
+Confirm you are editing memory-lancedb-pro vs built-in memory-lancedb before changes.
+
+## Rule 5 — Clear jiti cache after plugin code changes
+After modifying .ts files under plugins/, MUST run rm -rf /tmp/jiti/ BEFORE openclaw gateway restart.
+```
+
+
+
+
+데이터베이스 스키마
+
+LanceDB 테이블 `memories`:
+
+| 필드 | 타입 | 설명 |
+| --- | --- | --- |
+| `id` | string (UUID) | 기본 키 |
+| `text` | string | 기억 텍스트 (FTS 인덱싱됨) |
+| `vector` | float[] | 임베딩 벡터 |
+| `category` | string | 저장 카테고리: `preference` / `fact` / `decision` / `entity` / `reflection` / `other` |
+| `scope` | string | 스코프 식별자 (예: `global`, `agent:main`) |
+| `importance` | float | 중요도 점수 0-1 |
+| `timestamp` | int64 | 생성 타임스탬프 (ms) |
+| `metadata` | string (JSON) | 확장 메타데이터 |
+
+v1.1.0의 주요 `metadata` 키: `l0_abstract`, `l1_overview`, `l2_content`, `memory_category`, `tier`, `access_count`, `confidence`, `last_accessed_at`
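+
+저장된 행 하나의 `metadata` 필드가 가질 수 있는 형태의 가상 예시입니다 (실제 키 구성은 버전에 따라 다를 수 있습니다):
+
+```json
+{
+  "l0_abstract": "PostgreSQL 선택 결정",
+  "l1_overview": "2월 12일: 트랜잭션 요구사항 때문에 MongoDB 대신 PostgreSQL 선택",
+  "memory_category": "events",
+  "tier": "working",
+  "access_count": 4,
+  "confidence": 0.9,
+  "last_accessed_at": 1739318400000
+}
+```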
+
+> **카테고리 참고:** 최상위 `category` 필드는 6개 저장 카테고리를 사용합니다. Smart Extraction의 6개 카테고리 의미 라벨 (`profile` / `preferences` / `entities` / `events` / `cases` / `patterns`)은 `metadata.memory_category`에 저장됩니다.
+
+
+
+
+## 문제 해결
+
+### "Cannot mix BigInt and other types" (LanceDB / Apache Arrow)
+
+LanceDB 0.26 이상에서 일부 숫자 열이 `BigInt`로 반환될 수 있습니다. **memory-lancedb-pro >= 1.0.14**로 업그레이드하세요 — 이 플러그인은 이제 산술 연산 전에 `Number(...)`를 사용하여 값을 변환합니다.
+
+
+
+---
+
+## 문서
+
+| 문서 | 설명 |
+| --- | --- |
+| [OpenClaw 통합 플레이북](docs/openclaw-integration-playbook.md) | 배포 모드, 검증, 회귀 매트릭스 |
+| [메모리 아키텍처 분석](docs/memory_architecture_analysis.md) | 전체 아키텍처 심층 분석 |
+| [CHANGELOG v1.1.0](docs/CHANGELOG-v1.1.0.md) | v1.1.0 동작 변경사항 및 업그레이드 근거 |
+| [장문 컨텍스트 청킹](docs/long-context-chunking.md) | 긴 문서를 위한 청킹 전략 |
+
+---
+
+## 베타: Smart Memory v1.1.0
+
+> 상태: 베타 — `npm i memory-lancedb-pro@beta`로 사용 가능. `latest`를 사용하는 안정 버전 사용자는 영향 없음.
+
+| 기능 | 설명 |
+|---------|-------------|
+| **Smart Extraction** | LLM 기반 6개 카테고리 추출 + L0/L1/L2 메타데이터. 비활성화 시 정규식으로 폴백. |
+| **라이프사이클 스코어링** | 검색에 Weibull 감쇠 통합 — 높은 빈도와 높은 중요도의 기억이 상위에 랭크. |
+| **단계 관리** | 3단계 시스템 (Core → Working → Peripheral), 자동 승격/강등. |
+
+피드백: [GitHub Issues](https://github.com/CortexReach/memory-lancedb-pro/issues) · 되돌리기: `npm i memory-lancedb-pro@latest`
+
+---
+
+## 의존성
+
+| 패키지 | 용도 |
+| --- | --- |
+| `@lancedb/lancedb` ≥0.26.2 | 벡터 데이터베이스 (ANN + FTS) |
+| `openai` ≥6.21.0 | OpenAI 호환 Embedding API 클라이언트 |
+| `@sinclair/typebox` 0.34.48 | JSON Schema 타입 정의 |
+
+---
+
+## Contributors
+
+
+
+
+
+
+
+
+
+
+
+
+
+Full list: [Contributors](https://github.com/CortexReach/memory-lancedb-pro/graphs/contributors)
+
+## Star History
+
+
+
+
+
+
+
+
+
+## 라이선스
+
+MIT
+
+---
+
+## WeChat QR 코드
+
+
diff --git a/README_PT-BR.md b/README_PT-BR.md
new file mode 100644
index 00000000..65d721f8
--- /dev/null
+++ b/README_PT-BR.md
@@ -0,0 +1,773 @@
+
+
+# 🧠 memory-lancedb-pro · 🦞OpenClaw Plugin
+
+**Assistente de Memória IA para Agentes [OpenClaw](https://github.com/openclaw/openclaw)**
+
+*Dê ao seu agente de IA um cérebro que realmente lembra — entre sessões, entre agentes, ao longo do tempo.*
+
+Um plugin de memória de longo prazo para OpenClaw baseado em LanceDB que armazena preferências, decisões e contexto de projetos, e os recupera automaticamente em sessões futuras.
+
+[](https://github.com/openclaw/openclaw)
+[](https://www.npmjs.com/package/memory-lancedb-pro)
+[](https://lancedb.com)
+[](LICENSE)
+
+[English](README.md) | [简体中文](README_CN.md) | [繁體中文](README_TW.md) | [日本語](README_JA.md) | [한국어](README_KO.md) | [Français](README_FR.md) | [Español](README_ES.md) | [Deutsch](README_DE.md) | [Italiano](README_IT.md) | [Русский](README_RU.md) | [Português (Brasil)](README_PT-BR.md)
+
+
+
+---
+
+## Por que memory-lancedb-pro?
+
+A maioria dos agentes de IA sofre de amnésia. Eles esquecem tudo no momento em que você inicia um novo chat.
+
+**memory-lancedb-pro** é um plugin de memória de longo prazo de nível de produção para OpenClaw que transforma seu agente em um verdadeiro **Assistente de Memória IA** — captura automaticamente o que importa, deixa o ruído desaparecer naturalmente e recupera a memória certa no momento certo. Sem tags manuais, sem dores de cabeça com configuração.
+
+### Seu Assistente de Memória IA em ação
+
+**Sem memória — cada sessão começa do zero:**
+
+> **Você:** "Use tabs para indentação, sempre adicione tratamento de erros."
+> *(próxima sessão)*
+> **Você:** "Eu já te disse — tabs, não espaços!" 😤
+> *(próxima sessão)*
+> **Você:** "…sério, tabs. E tratamento de erros. De novo."
+
+**Com memory-lancedb-pro — seu agente aprende e lembra:**
+
+> **Você:** "Use tabs para indentação, sempre adicione tratamento de erros."
+> *(próxima sessão — agente recupera automaticamente suas preferências)*
+> **Agente:** *(aplica silenciosamente tabs + tratamento de erros)* ✅
+> **Você:** "Por que escolhemos PostgreSQL em vez de MongoDB no mês passado?"
+> **Agente:** "Com base na nossa discussão de 12 de fevereiro, os principais motivos foram…" ✅
+
+Essa é a diferença que um **Assistente de Memória IA** faz — aprende seu estilo, lembra decisões passadas e entrega respostas personalizadas sem você precisar se repetir.
+
+### O que mais ele pode fazer?
+
+| | O que você obtém |
+|---|---|
+| **Auto-Capture** | Seu agente aprende de cada conversa — sem necessidade de `memory_store` manual |
+| **Extração inteligente** | Classificação LLM em 6 categorias: profile, preferences, entities, events, cases, patterns |
+| **Esquecimento inteligente** | Modelo de decaimento Weibull — memórias importantes permanecem, ruído desaparece |
+| **Busca híbrida** | Busca vetorial + BM25 full-text, fundida com reranking cross-encoder |
+| **Injeção de contexto** | Memórias relevantes aparecem automaticamente antes de cada resposta |
+| **Isolamento multi-scope** | Limites de memória por agente, por usuário, por projeto |
+| **Qualquer provedor** | OpenAI, Jina, Gemini, Ollama ou qualquer API compatível com OpenAI |
+| **Toolkit completo** | CLI, backup, migração, upgrade, exportação/importação — pronto para produção |
+
+---
+
+## Início rápido
+
+### Opção A: Script de instalação com um clique (recomendado)
+
+O **[script de instalação](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)** mantido pela comunidade gerencia instalação, atualização e reparo em um único comando:
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/CortexReach/toolbox/main/memory-lancedb-pro-setup/setup-memory.sh -o setup-memory.sh
+bash setup-memory.sh
+```
+
+> Veja [Ecossistema](#ecossistema) abaixo para a lista completa de cenários cobertos e outras ferramentas da comunidade.
+
+### Opção B: Instalação manual
+
+**Via OpenClaw CLI (recomendado):**
+```bash
+openclaw plugins install memory-lancedb-pro@beta
+```
+
+**Ou via npm:**
+```bash
+npm i memory-lancedb-pro@beta
+```
+> Se usar npm, você também precisará adicionar o diretório de instalação do plugin como caminho **absoluto** em `plugins.load.paths` no seu `openclaw.json`. Este é o problema de configuração mais comum.
+
+Adicione ao seu `openclaw.json`:
+
+```json
+{
+ "plugins": {
+ "slots": { "memory": "memory-lancedb-pro" },
+ "entries": {
+ "memory-lancedb-pro": {
+ "enabled": true,
+ "config": {
+ "embedding": {
+ "provider": "openai-compatible",
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "text-embedding-3-small"
+ },
+ "autoCapture": true,
+ "autoRecall": true,
+ "smartExtraction": true,
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000,
+ "sessionMemory": { "enabled": false }
+ }
+ }
+ }
+ }
+}
+```
+
+**Por que esses valores padrão?**
+- `autoCapture` + `smartExtraction` → seu agente aprende automaticamente de cada conversa
+- `autoRecall` → memórias relevantes são injetadas antes de cada resposta
+- `extractMinMessages: 2` → a extração é acionada em chats normais de dois turnos
+- `sessionMemory.enabled: false` → evita poluir a busca com resumos de sessão no início
+
+Valide e reinicie:
+
+```bash
+openclaw config validate
+openclaw gateway restart
+openclaw logs --follow --plain | grep "memory-lancedb-pro"
+```
+
+Você deve ver:
+- `memory-lancedb-pro: smart extraction enabled`
+- `memory-lancedb-pro@...: plugin registered`
+
+Pronto! Seu agente agora tem memória de longo prazo.
+
+
+Mais caminhos de instalação (usuários existentes, upgrades)
+
+**Já está usando OpenClaw?**
+
+1. Adicione o plugin com um caminho **absoluto** em `plugins.load.paths`
+2. Vincule o slot de memória: `plugins.slots.memory = "memory-lancedb-pro"`
+3. Verifique: `openclaw plugins info memory-lancedb-pro && openclaw memory-pro stats`
+
+**Atualizando de versões anteriores ao v1.1.0?**
+
+```bash
+# 1) Backup
+openclaw memory-pro export --scope global --output memories-backup.json
+# 2) Dry run
+openclaw memory-pro upgrade --dry-run
+# 3) Run upgrade
+openclaw memory-pro upgrade
+# 4) Verify
+openclaw memory-pro stats
+```
+
+Veja `CHANGELOG-v1.1.0.md` para mudanças de comportamento e justificativa de upgrade.
+
+
+
+
+Importação rápida via Telegram Bot (clique para expandir)
+
+Se você está usando a integração Telegram do OpenClaw, a maneira mais fácil é enviar um comando de importação diretamente para o Bot principal em vez de editar a configuração manualmente.
+
+Envie esta mensagem:
+
+```text
+Help me connect this memory plugin with the most user-friendly configuration: https://github.com/CortexReach/memory-lancedb-pro
+
+Requirements:
+1. Set it as the only active memory plugin
+2. Use Jina for embedding
+3. Use Jina for reranker
+4. Use gpt-4o-mini for the smart-extraction LLM
+5. Enable autoCapture, autoRecall, smartExtraction
+6. extractMinMessages=2
+7. sessionMemory.enabled=false
+8. captureAssistant=false
+9. retrieval mode=hybrid, vectorWeight=0.7, bm25Weight=0.3
+10. rerank=cross-encoder, candidatePoolSize=12, minScore=0.6, hardMinScore=0.62
+11. Generate the final openclaw.json config directly, not just an explanation
+```
+
+
+
+---
+
+## Ecossistema
+
+memory-lancedb-pro é o plugin principal. A comunidade construiu ferramentas ao redor dele para tornar a configuração e o uso diário ainda mais suaves:
+
+### Script de instalação — Instalação, atualização e reparo com um clique
+
+> **[CortexReach/toolbox/memory-lancedb-pro-setup](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)**
+
+Não é apenas um instalador simples — o script lida inteligentemente com diversos cenários reais:
+
+| Sua situação | O que o script faz |
+|---|---|
+| Nunca instalou | Download → instalar dependências → escolher config → gravar em openclaw.json → reiniciar |
+| Instalado via `git clone`, preso em um commit antigo | `git fetch` + `checkout` automático para a versão mais recente → reinstalar dependências → verificar |
+| Config tem campos inválidos | Detecção automática via filtro de schema, remoção de campos não suportados |
+| Instalado via `npm` | Pula a atualização via git e lembra você de executar `npm update` manualmente |
+| CLI `openclaw` quebrado por config inválida | Fallback: ler caminho do workspace diretamente do arquivo `openclaw.json` |
+| `extensions/` em vez de `plugins/` | Detecção automática da localização do plugin a partir da config ou sistema de arquivos |
+| Já está atualizado | Executa apenas verificações de saúde, sem alterações |
+
+```bash
+bash setup-memory.sh # Install or upgrade
+bash setup-memory.sh --dry-run # Preview only
+bash setup-memory.sh --beta # Include pre-release versions
+bash setup-memory.sh --uninstall # Revert config and remove plugin
+```
+
+Presets de provedores integrados: **Jina / DashScope / SiliconFlow / OpenAI / Ollama**, ou traga sua própria API compatível com OpenAI. Para uso completo (incluindo `--ref`, `--selfcheck-only` e mais), veja o [README do script de instalação](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup).
+
+### Claude Code / OpenClaw Skill — Configuração guiada por IA
+
+> **[CortexReach/memory-lancedb-pro-skill](https://github.com/CortexReach/memory-lancedb-pro-skill)**
+
+Instale esta Skill e seu agente de IA (Claude Code ou OpenClaw) ganha conhecimento profundo de todas as funcionalidades do memory-lancedb-pro. Basta dizer **"me ajude a ativar a melhor configuração"** para obter:
+
+- **Workflow de configuração guiado em 7 etapas** com 4 planos de implantação:
+ - Full Power (Jina + OpenAI) / Budget (reranker SiliconFlow gratuito) / Simple (apenas OpenAI) / Totalmente local (Ollama, custo API zero)
+- **Todas as 9 ferramentas MCP** usadas corretamente: `memory_recall`, `memory_store`, `memory_forget`, `memory_update`, `memory_stats`, `memory_list`, `self_improvement_log`, `self_improvement_extract_skill`, `self_improvement_review` *(o toolkit completo requer `enableManagementTools: true` — a configuração padrão do Quick Start expõe as 4 ferramentas principais)*
+- **Prevenção de armadilhas comuns**: ativação de plugin workspace, `autoRecall` padrão false, cache jiti, variáveis de ambiente, isolamento de scope, etc.
+
+**Instalação para Claude Code:**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.claude/skills/memory-lancedb-pro
+```
+
+**Instalação para OpenClaw:**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.openclaw/workspace/skills/memory-lancedb-pro-skill
+```
+
+---
+
+## Tutorial em vídeo
+
+> Guia completo: instalação, configuração e funcionamento interno da busca híbrida.
+
+[](https://youtu.be/MtukF1C8epQ)
+**https://youtu.be/MtukF1C8epQ**
+
+[](https://www.bilibili.com/video/BV1zUf2BGEgn/)
+**https://www.bilibili.com/video/BV1zUf2BGEgn/**
+
+---
+
+## Arquitetura
+
+```
+┌─────────────────────────────────────────────────────────┐
+│ index.ts (Entry Point) │
+│ Plugin Registration · Config Parsing · Lifecycle Hooks │
+└────────┬──────────┬──────────┬──────────┬───────────────┘
+ │ │ │ │
+ ┌────▼───┐ ┌────▼───┐ ┌───▼────┐ ┌──▼──────────┐
+ │ store │ │embedder│ │retriever│ │ scopes │
+ │ .ts │ │ .ts │ │ .ts │ │ .ts │
+ └────────┘ └────────┘ └────────┘ └─────────────┘
+ │ │
+ ┌────▼───┐ ┌─────▼──────────┐
+ │migrate │ │noise-filter.ts │
+ │ .ts │ │adaptive- │
+ └────────┘ │retrieval.ts │
+ └────────────────┘
+ ┌─────────────┐ ┌──────────┐
+ │ tools.ts │ │ cli.ts │
+ │ (Agent API) │ │ (CLI) │
+ └─────────────┘ └──────────┘
+```
+
+> Para um mergulho profundo na arquitetura completa, veja [docs/memory_architecture_analysis.md](docs/memory_architecture_analysis.md).
+
+
+Referência de arquivos (clique para expandir)
+
+| Arquivo | Finalidade |
+| --- | --- |
+| `index.ts` | Ponto de entrada do plugin. Registra na API de Plugin do OpenClaw, analisa config, monta lifecycle hooks |
+| `openclaw.plugin.json` | Metadados do plugin + declaração completa de config via JSON Schema |
+| `cli.ts` | Comandos CLI: `memory-pro list/search/stats/delete/delete-bulk/export/import/reembed/upgrade/migrate` |
+| `src/store.ts` | Camada de armazenamento LanceDB. Criação de tabelas / Indexação FTS / Busca vetorial / Busca BM25 / CRUD |
+| `src/embedder.ts` | Abstração de embedding. Compatível com qualquer provedor de API compatível com OpenAI |
+| `src/retriever.ts` | Motor de busca híbrida. Vector + BM25 → Fusão Híbrida → Rerank → Decaimento do Ciclo de Vida → Filtro |
+| `src/scopes.ts` | Controle de acesso multi-scope |
+| `src/tools.ts` | Definições de ferramentas do agente: `memory_recall`, `memory_store`, `memory_forget`, `memory_update` + ferramentas de gerenciamento |
+| `src/noise-filter.ts` | Filtra recusas do agente, meta-perguntas, saudações e conteúdo de baixa qualidade |
+| `src/adaptive-retrieval.ts` | Determina se uma consulta precisa de busca na memória |
+| `src/migrate.ts` | Migração do `memory-lancedb` integrado para o Pro |
+| `src/smart-extractor.ts` | Extração LLM em 6 categorias com armazenamento em camadas L0/L1/L2 e deduplicação em dois estágios |
+| `src/decay-engine.ts` | Modelo de decaimento exponencial esticado Weibull |
+| `src/tier-manager.ts` | Promoção/rebaixamento em três níveis: Peripheral ↔ Working ↔ Core |
+
+
+
+---
+
+## Funcionalidades principais
+
+### Busca híbrida
+
+```
+Query → embedQuery() ─┐
+ ├─→ Hybrid Fusion → Rerank → Lifecycle Decay Boost → Length Norm → Filter
+Query → BM25 FTS ─────┘
+```
+
+- **Busca vetorial** — similaridade semântica via LanceDB ANN (distância cosseno)
+- **Busca full-text BM25** — correspondência exata de palavras-chave via índice FTS do LanceDB
+- **Fusão híbrida** — pontuação vetorial como base, resultados BM25 recebem boost ponderado (não é RRF padrão — ajustado para qualidade de recall no mundo real)
+- **Pesos configuráveis** — `vectorWeight`, `bm25Weight`, `minScore`
+
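+Um esboço mínimo dessa fusão, com tipos e nomes hipotéticos (não é o código real do `retriever.ts`): a pontuação vetorial serve de base e acertos BM25 recebem um boost ponderado, em vez de um RRF padrão.
+
+```typescript
+// Esboço ilustrativo da fusão híbrida (tipos e nomes hipotéticos).
+interface Candidate {
+  id: string;
+  vectorScore: number; // similaridade cosseno normalizada em [0, 1]
+  bm25Score?: number;  // pontuação BM25 normalizada, presente apenas em match de palavra-chave
+}
+
+function fuseScores(
+  candidates: Candidate[],
+  vectorWeight = 0.7,
+  bm25Weight = 0.3,
+): Map<string, number> {
+  const fused = new Map<string, number>();
+  for (const c of candidates) {
+    // Base semântica + boost ponderado para acertos exatos de palavra-chave.
+    const base = vectorWeight * c.vectorScore;
+    const boost = c.bm25Score !== undefined ? bm25Weight * c.bm25Score : 0;
+    fused.set(c.id, base + boost);
+  }
+  return fused;
+}
+```
+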
+### Reranking Cross-Encoder
+
+- Adaptadores integrados para **Jina**, **SiliconFlow**, **Voyage AI** e **Pinecone**
+- Compatível com qualquer endpoint compatível com Jina (ex.: Hugging Face TEI, DashScope)
+- Pontuação híbrida: 60% cross-encoder + 40% pontuação fundida original
+- Degradação elegante: fallback para similaridade cosseno em caso de falha da API
+
+### Pipeline de pontuação multi-estágio
+
+| Estágio | Efeito |
+| --- | --- |
+| **Fusão híbrida** | Combina recall semântico e correspondência exata |
+| **Rerank Cross-Encoder** | Promove resultados semanticamente precisos |
+| **Boost de decaimento do ciclo de vida** | Frescor Weibull + frequência de acesso + importância × confiança |
+| **Normalização de comprimento** | Impede que entradas longas dominem (âncora: 500 caracteres) |
+| **Pontuação mínima rígida** | Remove resultados irrelevantes (padrão: 0.35) |
+| **Diversidade MMR** | Similaridade cosseno > 0.85 → rebaixado |
+
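+O estágio de normalização de comprimento pode ser esboçado, por exemplo, com uma penalização logarítmica acima da âncora de 500 caracteres (fórmula hipotética; a curva real do plugin pode diferir):
+
+```typescript
+// Esboço hipotético da normalização de comprimento (âncora: 500 caracteres).
+// Entradas até a âncora não são penalizadas; acima dela, a pontuação
+// cai suavemente para impedir que entradas longas dominem o ranking.
+function lengthNorm(score: number, textLength: number, anchor = 500): number {
+  if (textLength <= anchor) return score;
+  return score / (1 + Math.log(textLength / anchor));
+}
+```
+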
+### Extração inteligente de memória (v1.1.0)
+
+- **Extração LLM em 6 categorias**: perfil, preferências, entidades, eventos, casos, padrões
+- **Armazenamento em camadas L0/L1/L2**: L0 (índice em uma frase) → L1 (resumo estruturado) → L2 (narrativa completa)
+- **Deduplicação em dois estágios**: pré-filtro de similaridade vetorial (≥0.7) → decisão semântica LLM (CREATE/MERGE/SKIP)
+- **Fusão consciente de categorias**: `profile` sempre funde, `events`/`cases` apenas adicionam
+
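+Os dois estágios de deduplicação podem ser esboçados assim (assinaturas hipotéticas; `llmJudge` representa a chamada LLM real):
+
+```typescript
+// Esboço da deduplicação em dois estágios (nomes hipotéticos).
+type DedupDecision = "CREATE" | "MERGE" | "SKIP";
+
+function decideDedup(
+  newText: string,
+  nearest: { text: string; similarity: number } | null,
+  llmJudge: (a: string, b: string) => DedupDecision,
+): DedupDecision {
+  // Estágio 1: pré-filtro vetorial. Abaixo de 0.7, nem consulta o LLM.
+  if (!nearest || nearest.similarity < 0.7) return "CREATE";
+  // Estágio 2: o LLM decide semanticamente entre criar, fundir ou pular.
+  return llmJudge(newText, nearest.text);
+}
+```
+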
+### Gerenciamento do ciclo de vida da memória (v1.1.0)
+
+- **Motor de decaimento Weibull**: pontuação composta = frescor + frequência + valor intrínseco
+- **Promoção em três níveis**: `Peripheral ↔ Working ↔ Core` com limiares configuráveis
+- **Reforço por acesso**: memórias recuperadas frequentemente decaem mais lentamente (estilo repetição espaçada)
+- **Meia-vida modulada pela importância**: memórias importantes decaem mais lentamente
+
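+O frescor Weibull pode ser esboçado assim, derivando λ da meia-vida para que S(meia-vida) = 0.5 (esboço ilustrativo, não o código real do `decay-engine.ts`):
+
+```typescript
+// Frescor Weibull (exponencial esticada): S(t) = exp(-(t/λ)^β).
+// β < 1 (core) decai mais devagar na cauda; β > 1 (peripheral) decai mais rápido.
+function weibullFreshness(
+  ageDays: number,
+  halfLifeDays: number,
+  beta: number,
+): number {
+  // λ escolhido para que S(halfLifeDays) = 0.5 para qualquer β.
+  const lambda = halfLifeDays / Math.pow(Math.LN2, 1 / beta);
+  return Math.exp(-Math.pow(ageDays / lambda, beta));
+}
+```
+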
+### Isolamento multi-scope
+
+- Scopes integrados: `global`, `agent:`, `custom:`, `project:`, `user:`
+- Controle de acesso no nível do agente via `scopes.agentAccess`
+- Padrão: cada agente acessa `global` + seu próprio scope `agent:`
+
+### Auto-Capture e Auto-Recall
+
+- **Auto-Capture** (`agent_end`): extrai preferências/fatos/decisões/entidades das conversas, deduplica, armazena até 3 por turno
+- **Auto-Recall** (`before_agent_start`): injeta um bloco de contexto com as memórias relevantes no prompt (até 3 entradas)
+- **Auto-Recall** (`before_agent_start`): injeta um bloco de contexto com as memórias relevantes no prompt (até 3 entradas)
+
+### Filtragem de ruído e busca adaptativa
+
+- Filtra conteúdo de baixa qualidade: recusas do agente, meta-perguntas, saudações
+- Pula a busca para: saudações, comandos slash, confirmações simples, emoji
+- Força a busca para palavras-chave de memória ("lembra", "anteriormente", "da última vez")
+- Limiares CJK (chinês: 6 caracteres vs inglês: 15 caracteres)
+
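+A decisão de busca adaptativa pode ser esboçada assim (listas e limiares simplificados e hipotéticos; as heurísticas reais do `adaptive-retrieval.ts` são mais completas):
+
+```typescript
+// Esboço ilustrativo da busca adaptativa (listas hipotéticas e reduzidas).
+const MEMORY_KEYWORDS = ["lembra", "anteriormente", "da última vez"];
+const SKIP_PATTERNS = [/^\/\w+/, /^(oi|olá|ok|obrigado)\b/i];
+
+function shouldSearchMemory(query: string): boolean {
+  const q = query.trim();
+  // Palavras-chave de memória sempre forçam a busca.
+  if (MEMORY_KEYWORDS.some((k) => q.toLowerCase().includes(k))) return true;
+  // Comandos slash, saudações e confirmações curtas pulam a busca.
+  if (SKIP_PATTERNS.some((p) => p.test(q))) return false;
+  // Limiar menor para CJK (texto mais denso por caractere).
+  const hasCJK = /[\u4e00-\u9fff]/.test(q);
+  return q.length >= (hasCJK ? 6 : 15);
+}
+```
+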
+---
+
+
+Comparação com o memory-lancedb integrado (clique para expandir)
+
+| Funcionalidade | `memory-lancedb` integrado | **memory-lancedb-pro** |
+| --- | :---: | :---: |
+| Busca vetorial | Sim | Sim |
+| Busca full-text BM25 | - | Sim |
+| Fusão híbrida (Vector + BM25) | - | Sim |
+| Rerank cross-encoder (multi-provedor) | - | Sim |
+| Boost de frescor e decaimento temporal | - | Sim |
+| Normalização de comprimento | - | Sim |
+| Diversidade MMR | - | Sim |
+| Isolamento multi-scope | - | Sim |
+| Filtragem de ruído | - | Sim |
+| Busca adaptativa | - | Sim |
+| CLI de gerenciamento | - | Sim |
+| Memória de sessão | - | Sim |
+| Embeddings conscientes de tarefa | - | Sim |
+| **Extração inteligente LLM (6 categorias)** | - | Sim (v1.1.0) |
+| **Decaimento Weibull + Promoção de nível** | - | Sim (v1.1.0) |
+| Qualquer embedding compatível com OpenAI | Limitado | Sim |
+
+
+
+---
+
+## Configuração
+
+
+Exemplo de configuração completa
+
+```json
+{
+ "embedding": {
+ "apiKey": "${JINA_API_KEY}",
+ "model": "jina-embeddings-v5-text-small",
+ "baseURL": "https://api.jina.ai/v1",
+ "dimensions": 1024,
+ "taskQuery": "retrieval.query",
+ "taskPassage": "retrieval.passage",
+ "normalized": true
+ },
+ "dbPath": "~/.openclaw/memory/lancedb-pro",
+ "autoCapture": true,
+ "autoRecall": true,
+ "retrieval": {
+ "mode": "hybrid",
+ "vectorWeight": 0.7,
+ "bm25Weight": 0.3,
+ "minScore": 0.3,
+ "rerank": "cross-encoder",
+ "rerankApiKey": "${JINA_API_KEY}",
+ "rerankModel": "jina-reranker-v3",
+ "rerankEndpoint": "https://api.jina.ai/v1/rerank",
+ "rerankProvider": "jina",
+ "candidatePoolSize": 20,
+ "recencyHalfLifeDays": 14,
+ "recencyWeight": 0.1,
+ "filterNoise": true,
+ "lengthNormAnchor": 500,
+ "hardMinScore": 0.35,
+ "timeDecayHalfLifeDays": 60,
+ "reinforcementFactor": 0.5,
+ "maxHalfLifeMultiplier": 3
+ },
+ "enableManagementTools": false,
+ "scopes": {
+ "default": "global",
+ "definitions": {
+ "global": { "description": "Shared knowledge" },
+ "agent:discord-bot": { "description": "Discord bot private" }
+ },
+ "agentAccess": {
+ "discord-bot": ["global", "agent:discord-bot"]
+ }
+ },
+ "sessionMemory": {
+ "enabled": false,
+ "messageCount": 15
+ },
+ "smartExtraction": true,
+ "llm": {
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "gpt-4o-mini",
+ "baseURL": "https://api.openai.com/v1"
+ },
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000
+}
+```
+
+
+
+
+Provedores de Embedding
+
+Funciona com **qualquer API de embedding compatível com OpenAI**:
+
+| Provedor | Modelo | Base URL | Dimensões |
+| --- | --- | --- | --- |
+| **Jina** (recomendado) | `jina-embeddings-v5-text-small` | `https://api.jina.ai/v1` | 1024 |
+| **OpenAI** | `text-embedding-3-small` | `https://api.openai.com/v1` | 1536 |
+| **Voyage** | `voyage-4-lite` / `voyage-4` | `https://api.voyageai.com/v1` | 1024 / 1024 |
+| **Google Gemini** | `gemini-embedding-001` | `https://generativelanguage.googleapis.com/v1beta/openai/` | 3072 |
+| **Ollama** (local) | `nomic-embed-text` | `http://localhost:11434/v1` | específico do provedor |
+
+
+
+
+Provedores de Rerank
+
+O reranking cross-encoder suporta múltiplos provedores via `rerankProvider`:
+
+| Provedor | `rerankProvider` | Modelo de exemplo |
+| --- | --- | --- |
+| **Jina** (padrão) | `jina` | `jina-reranker-v3` |
+| **SiliconFlow** (plano gratuito disponível) | `siliconflow` | `BAAI/bge-reranker-v2-m3` |
+| **Voyage AI** | `voyage` | `rerank-2.5` |
+| **Pinecone** | `pinecone` | `bge-reranker-v2-m3` |
+
+Qualquer endpoint de rerank compatível com Jina também funciona — defina `rerankProvider: "jina"` e aponte `rerankEndpoint` para seu serviço (ex.: Hugging Face TEI, DashScope `qwen3-rerank`).
+
+
+
+
+Extração inteligente (LLM) — v1.1.0
+
+Quando `smartExtraction` está habilitado (padrão: `true`), o plugin usa um LLM para extrair e classificar memórias de forma inteligente em vez de gatilhos baseados em regex.
+
+| Campo | Tipo | Padrão | Descrição |
+|-------|------|--------|-----------|
+| `smartExtraction` | boolean | `true` | Habilitar/desabilitar extração LLM em 6 categorias |
+| `llm.auth` | string | `api-key` | `api-key` usa `llm.apiKey` / `embedding.apiKey`; `oauth` usa um arquivo de token OAuth com escopo de plugin por padrão |
+| `llm.apiKey` | string | *(fallback para `embedding.apiKey`)* | Chave de API para o provedor LLM |
+| `llm.model` | string | `openai/gpt-oss-120b` | Nome do modelo LLM |
+| `llm.baseURL` | string | *(fallback para `embedding.baseURL`)* | Endpoint da API LLM |
+| `llm.oauthProvider` | string | `openai-codex` | ID do provedor OAuth usado quando `llm.auth` é `oauth` |
+| `llm.oauthPath` | string | `~/.openclaw/.memory-lancedb-pro/oauth.json` | Arquivo de token OAuth usado quando `llm.auth` é `oauth` |
+| `llm.timeoutMs` | number | `30000` | Timeout da requisição LLM em milissegundos |
+| `extractMinMessages` | number | `2` | Mensagens mínimas antes da extração ser acionada |
+| `extractMaxChars` | number | `8000` | Máximo de caracteres enviados ao LLM |
+
+
+Configuração `llm` com OAuth (usa cache de login existente do Codex / ChatGPT para chamadas LLM):
+```json
+{
+ "llm": {
+ "auth": "oauth",
+ "oauthProvider": "openai-codex",
+ "model": "gpt-5.4",
+ "oauthPath": "${HOME}/.openclaw/.memory-lancedb-pro/oauth.json",
+ "timeoutMs": 30000
+ }
+}
+```
+
+Notas para `llm.auth: "oauth"`:
+
+- `llm.oauthProvider` é atualmente `openai-codex`.
+- Tokens OAuth têm como padrão `~/.openclaw/.memory-lancedb-pro/oauth.json`.
+- Você pode definir `llm.oauthPath` se quiser armazenar esse arquivo em outro lugar.
+- `auth login` faz snapshot da configuração `llm` anterior (api-key) ao lado do arquivo OAuth, e `auth logout` restaura esse snapshot quando disponível.
+- Mudar de `api-key` para `oauth` não transfere automaticamente `llm.baseURL`. Defina-o manualmente no modo OAuth apenas quando você intencionalmente quiser um backend personalizado compatível com ChatGPT/Codex.
+
+
+
+
+Configuração do ciclo de vida (Decaimento + Nível)
+
+| Campo | Padrão | Descrição |
+|-------|--------|-----------|
+| `decay.recencyHalfLifeDays` | `30` | Meia-vida base para decaimento de frescor Weibull |
+| `decay.frequencyWeight` | `0.3` | Peso da frequência de acesso na pontuação composta |
+| `decay.intrinsicWeight` | `0.3` | Peso de `importance × confidence` |
+| `decay.betaCore` | `0.8` | Beta Weibull para memórias `core` |
+| `decay.betaWorking` | `1.0` | Beta Weibull para memórias `working` |
+| `decay.betaPeripheral` | `1.3` | Beta Weibull para memórias `peripheral` |
+| `tier.coreAccessThreshold` | `10` | Contagem mínima de recall antes de promover para `core` |
+| `tier.peripheralAgeDays` | `60` | Limiar de idade para rebaixar memórias inativas |
+
+
+
+
+Reforço por acesso
+
+Memórias recuperadas com frequência decaem mais lentamente (estilo repetição espaçada).
+
+Chaves de configuração (em `retrieval`):
+- `reinforcementFactor` (0-2, padrão: `0.5`) — defina `0` para desabilitar
+- `maxHalfLifeMultiplier` (1-10, padrão: `3`) — limite rígido na meia-vida efetiva
+
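+A fórmula exata do plugin não está documentada aqui; um esboço hipotético, coerente com os limites descritos acima, seria alongar a meia-vida com o número de acessos e limitá-la por `maxHalfLifeMultiplier`:
+
+```typescript
+// Esboço hipotético do reforço por acesso (não é o código real do plugin).
+// Memórias recuperadas com frequência ganham meia-vida efetiva maior,
+// com teto rígido em maxHalfLifeMultiplier.
+function effectiveHalfLife(
+  baseHalfLifeDays: number,
+  accessCount: number,
+  reinforcementFactor = 0.5, // 0 desabilita o reforço
+  maxHalfLifeMultiplier = 3,
+): number {
+  const multiplier = Math.min(
+    1 + reinforcementFactor * Math.log(1 + accessCount),
+    maxHalfLifeMultiplier,
+  );
+  return baseHalfLifeDays * multiplier;
+}
+```
+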
+
+
+---
+
+## Comandos CLI
+
+```bash
+openclaw memory-pro list [--scope global] [--category fact] [--limit 20] [--json]
+openclaw memory-pro search "query" [--scope global] [--limit 10] [--json]
+openclaw memory-pro stats [--scope global] [--json]
+openclaw memory-pro auth login [--provider openai-codex] [--model gpt-5.4] [--oauth-path /abs/path/oauth.json]
+openclaw memory-pro auth status
+openclaw memory-pro auth logout
+openclaw memory-pro delete
+openclaw memory-pro delete-bulk --scope global [--before 2025-01-01] [--dry-run]
+openclaw memory-pro export [--scope global] [--output memories.json]
+openclaw memory-pro import memories.json [--scope global] [--dry-run]
+openclaw memory-pro reembed --source-db /path/to/old-db [--batch-size 32] [--skip-existing]
+openclaw memory-pro upgrade [--dry-run] [--batch-size 10] [--no-llm] [--limit N] [--scope SCOPE]
+openclaw memory-pro migrate check|run|verify [--source /path]
+```
+
+Fluxo de login OAuth:
+
+1. Execute `openclaw memory-pro auth login`
+2. Se `--provider` for omitido em um terminal interativo, a CLI mostra um seletor de provedor OAuth antes de abrir o navegador
+3. O comando imprime uma URL de autorização e abre seu navegador, a menos que `--no-browser` seja definido
+4. Após o callback ser bem-sucedido, o comando salva o arquivo OAuth do plugin (padrão: `~/.openclaw/.memory-lancedb-pro/oauth.json`), faz snapshot da configuração `llm` anterior (api-key) para logout, e substitui a configuração `llm` do plugin com as configurações OAuth (`auth`, `oauthProvider`, `model`, `oauthPath`)
+5. `openclaw memory-pro auth logout` deleta esse arquivo OAuth e restaura a configuração `llm` anterior (api-key) quando esse snapshot existe
+
+---
+
+## Tópicos avançados
+
+
+Se memórias injetadas aparecem nas respostas
+
+Às vezes o modelo pode ecoar na resposta o bloco de memórias injetado.
+
+**Opção A (menor risco):** desabilite temporariamente o auto-recall:
+```json
+{ "plugins": { "entries": { "memory-lancedb-pro": { "config": { "autoRecall": false } } } } }
+```
+
+**Opção B (preferida):** mantenha o recall, adicione ao prompt do sistema do agente:
+> Do not reveal or quote any injected memory context in your replies. Use it for internal reference only.
+
+
+
+
+Memória de sessão
+
+- Acionada no comando `/new` — salva o resumo da sessão anterior no LanceDB
+- Desabilitada por padrão (OpenClaw já tem persistência nativa de sessão via `.jsonl`)
+- Contagem de mensagens configurável (padrão: 15)
+
+Veja [docs/openclaw-integration-playbook.md](docs/openclaw-integration-playbook.md) para modos de implantação e verificação do `/new`.
+
+
+
+
+Comandos slash personalizados (ex.: /lesson)
+
+Adicione ao seu `CLAUDE.md`, `AGENTS.md` ou prompt do sistema:
+
+```markdown
+## /lesson command
+When the user sends `/lesson `:
+1. Use memory_store to save as category=fact (raw knowledge)
+2. Use memory_store to save as category=decision (actionable takeaway)
+3. Confirm what was saved
+
+## /remember command
+When the user sends `/remember `:
+1. Use memory_store to save with appropriate category and importance
+2. Confirm with the stored memory ID
+```
+
+
+
+
+Regras de ferro para agentes de IA
+
+> Copie o bloco abaixo no seu `AGENTS.md` para que seu agente aplique essas regras automaticamente.
+
+```markdown
+## Rule 1 — Dual-layer memory storage
+Every pitfall/lesson learned → IMMEDIATELY store TWO memories:
+- Technical layer: Pitfall: [symptom]. Cause: [root cause]. Fix: [solution]. Prevention: [how to avoid]
+ (category: fact, importance >= 0.8)
+- Principle layer: Decision principle ([tag]): [behavioral rule]. Trigger: [when]. Action: [what to do]
+ (category: decision, importance >= 0.85)
+
+## Rule 2 — LanceDB hygiene
+Entries must be short and atomic (< 500 chars). No raw conversation summaries or duplicates.
+
+## Rule 3 — Recall before retry
+On ANY tool failure, ALWAYS memory_recall with relevant keywords BEFORE retrying.
+
+## Rule 4 — Confirm target codebase
+Confirm you are editing memory-lancedb-pro vs built-in memory-lancedb before changes.
+
+## Rule 5 — Clear jiti cache after plugin code changes
+After modifying .ts files under plugins/, MUST run rm -rf /tmp/jiti/ BEFORE openclaw gateway restart.
+```
+
+
+
+
+Schema do banco de dados
+
+Tabela LanceDB `memories`:
+
+| Campo | Tipo | Descrição |
+| --- | --- | --- |
+| `id` | string (UUID) | Chave primária |
+| `text` | string | Texto da memória (indexado FTS) |
+| `vector` | float[] | Vetor de embedding |
+| `category` | string | Categoria de armazenamento: `preference` / `fact` / `decision` / `entity` / `reflection` / `other` |
+| `scope` | string | Identificador de scope (ex.: `global`, `agent:main`) |
+| `importance` | float | Pontuação de importância 0-1 |
+| `timestamp` | int64 | Timestamp de criação (ms) |
+| `metadata` | string (JSON) | Metadados estendidos |
+
+Chaves `metadata` comuns no v1.1.0: `l0_abstract`, `l1_overview`, `l2_content`, `memory_category`, `tier`, `access_count`, `confidence`, `last_accessed_at`
+
+> **Nota sobre categorias:** O campo `category` de nível superior usa 6 categorias de armazenamento. As 6 categorias semânticas da Extração Inteligente (`profile` / `preferences` / `entities` / `events` / `cases` / `patterns`) são armazenadas em `metadata.memory_category`.
+
+
+
+
+Solução de problemas
+
+### "Cannot mix BigInt and other types" (LanceDB / Apache Arrow)
+
+No LanceDB 0.26+, algumas colunas numéricas podem ser retornadas como `BigInt`. Atualize para **memory-lancedb-pro >= 1.0.14** — este plugin agora converte valores usando `Number(...)` antes de operações aritméticas.
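+
+Um exemplo mínimo do padrão de conversão (hipotético, apenas para ilustrar o erro e a correção):
+
+```typescript
+// Misturar BigInt e number em aritmética lança TypeError em JavaScript.
+// Converter com Number(...) antes da operação evita o erro.
+function toMillis(timestamp: number | bigint): number {
+  return Number(timestamp); // seguro para timestamps em ms (< 2^53)
+}
+
+const rowTimestamp: bigint = 1700000000000n; // como retornado pelo LanceDB 0.26+
+const ageMs = Date.now() - toMillis(rowTimestamp); // sem "Cannot mix BigInt"
+```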
+
+
+
+---
+
+## Documentação
+
+| Documento | Descrição |
+| --- | --- |
+| [Playbook de integração OpenClaw](docs/openclaw-integration-playbook.md) | Modos de implantação, verificação, matriz de regressão |
+| [Análise da arquitetura de memória](docs/memory_architecture_analysis.md) | Análise aprofundada da arquitetura completa |
+| [CHANGELOG v1.1.0](docs/CHANGELOG-v1.1.0.md) | Mudanças de comportamento v1.1.0 e justificativa de upgrade |
+| [Chunking de contexto longo](docs/long-context-chunking.md) | Estratégia de chunking para documentos longos |
+
+---
+
+## Beta: Smart Memory v1.1.0
+
+> Status: Beta — disponível via `npm i memory-lancedb-pro@beta`. Usuários estáveis no `latest` não são afetados.
+
+| Funcionalidade | Descrição |
+|---------|-------------|
+| **Extração inteligente** | Extração LLM em 6 categorias com metadados L0/L1/L2. Fallback para regex quando desabilitado. |
+| **Pontuação do ciclo de vida** | Decaimento Weibull integrado à busca — memórias frequentes e importantes ficam mais bem ranqueadas. |
+| **Gerenciamento de níveis** | Sistema de três níveis (Core → Working → Peripheral) com promoção/rebaixamento automático. |
+
+Feedback: [GitHub Issues](https://github.com/CortexReach/memory-lancedb-pro/issues) · Reverter: `npm i memory-lancedb-pro@latest`
+
+---
+
+## Dependências
+
+| Pacote | Finalidade |
+| --- | --- |
+| `@lancedb/lancedb` ≥0.26.2 | Banco de dados vetorial (ANN + FTS) |
+| `openai` ≥6.21.0 | Cliente de API de Embedding compatível com OpenAI |
+| `@sinclair/typebox` 0.34.48 | Definições de tipo JSON Schema |
+
+---
+
+## Contributors
+
+
+
+
+
+
+
+
+
+
+
+
+
+Full list: [Contributors](https://github.com/CortexReach/memory-lancedb-pro/graphs/contributors)
+
+## Star History
+
+
+
+
+
+
+
+
+
+## Licença
+
+MIT
+
+---
+
+## Meu QR Code WeChat
+
+
diff --git a/README_RU.md b/README_RU.md
new file mode 100644
index 00000000..8fcb1031
--- /dev/null
+++ b/README_RU.md
@@ -0,0 +1,773 @@
+
+
+# 🧠 memory-lancedb-pro · 🦞OpenClaw Plugin
+
+**ИИ-ассистент памяти для агентов [OpenClaw](https://github.com/openclaw/openclaw)**
+
+*Дайте вашему ИИ-агенту мозг, который действительно помнит: между сессиями, между агентами и с течением времени.*
+
+Плагин долгосрочной памяти для OpenClaw на базе LanceDB, который сохраняет предпочтения, решения и контекст проекта, а затем автоматически вспоминает их в будущих сессиях.
+
+[](https://github.com/openclaw/openclaw)
+[](https://www.npmjs.com/package/memory-lancedb-pro)
+[](https://lancedb.com)
+[](LICENSE)
+
+[English](README.md) | [简体中文](README_CN.md) | [繁體中文](README_TW.md) | [日本語](README_JA.md) | [한국어](README_KO.md) | [Français](README_FR.md) | [Español](README_ES.md) | [Deutsch](README_DE.md) | [Italiano](README_IT.md) | [Русский](README_RU.md) | [Português (Brasil)](README_PT-BR.md)
+
+
+
+---
+
+## Почему memory-lancedb-pro?
+
+Большинство ИИ-агентов страдают амнезией. Они забывают все, как только вы начинаете новый чат.
+
+**memory-lancedb-pro** — это production-grade плагин долгосрочной памяти для OpenClaw, который превращает вашего агента в настоящего **ИИ-ассистента памяти**. Он автоматически фиксирует важное, позволяет шуму естественно угасать и поднимает нужное воспоминание в нужный момент. Никаких ручных тегов, никаких мучений с конфигурацией.
+
+### Как это выглядит на практике
+
+**Без памяти: каждая сессия начинается с нуля**
+
+> **Вы:** "Используй табы для отступов и всегда добавляй обработку ошибок."
+> *(следующая сессия)*
+> **Вы:** "Я же уже говорил: табы, а не пробелы!" 😤
+> *(еще одна сессия)*
+> **Вы:** "...серьезно, табы. И обработка ошибок. Снова."
+
+**С memory-lancedb-pro агент учится и помнит**
+
+> **Вы:** "Используй табы для отступов и всегда добавляй обработку ошибок."
+> *(следующая сессия: агент автоматически вспоминает ваши предпочтения)*
+> **Агент:** *(молча применяет табы + обработку ошибок)* ✅
+> **Вы:** "Почему в прошлом месяце мы выбрали PostgreSQL, а не MongoDB?"
+> **Агент:** "Судя по нашему обсуждению 12 февраля, основные причины были..." ✅
+
+В этом и есть разница: **ИИ-ассистент памяти** изучает ваш стиль, вспоминает прошлые решения и дает персонализированные ответы без необходимости повторять одно и то же.
+
+### Что еще он умеет?
+
+| | Что вы получаете |
+|---|---|
+| **Автозахват** | Агент учится на каждом разговоре, без ручного `memory_store` |
+| **Умное извлечение** | Классификация на основе LLM по 6 категориям: профили, предпочтения, сущности, события, кейсы, паттерны |
+| **Интеллектуальное забывание** | Модель затухания Weibull: важные воспоминания остаются, шум естественно исчезает |
+| **Гибридный поиск** | Векторный поиск + полнотекстовый BM25 с объединением и cross-encoder rerank |
+| **Инъекция контекста** | Релевантные воспоминания автоматически подаются перед каждым ответом |
+| **Изоляция областей памяти** | Границы памяти на уровне агента, пользователя и проекта |
+| **Любой провайдер** | OpenAI, Jina, Gemini, Ollama или любой OpenAI-compatible API |
+| **Полный набор инструментов** | CLI, backup, migration, upgrade, export/import — готово к продакшену |
+
+---
+
+## Быстрый старт
+
+### Вариант A: скрипт установки в один клик (рекомендуется)
+
+Поддерживаемый сообществом **[скрипт установки](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)** берет на себя установку, обновление и восстановление одной командой:
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/CortexReach/toolbox/main/memory-lancedb-pro-setup/setup-memory.sh -o setup-memory.sh
+bash setup-memory.sh
+```
+
+> Полный список сценариев, которые покрывает скрипт, и другие инструменты сообщества смотрите ниже в разделе [Экосистема](#экосистема).
+
+### Вариант B: ручная установка
+
+**Через OpenClaw CLI (рекомендуется):**
+```bash
+openclaw plugins install memory-lancedb-pro@beta
+```
+
+**Или через npm:**
+```bash
+npm i memory-lancedb-pro@beta
+```
+> Если используете npm, вам также нужно добавить директорию установки плагина как **абсолютный** путь в `plugins.load.paths` вашего `openclaw.json`. Это самая частая проблема при настройке.
+
+Добавьте в `openclaw.json`:
+
+```json
+{
+ "plugins": {
+ "slots": { "memory": "memory-lancedb-pro" },
+ "entries": {
+ "memory-lancedb-pro": {
+ "enabled": true,
+ "config": {
+ "embedding": {
+ "provider": "openai-compatible",
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "text-embedding-3-small"
+ },
+ "autoCapture": true,
+ "autoRecall": true,
+ "smartExtraction": true,
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000,
+ "sessionMemory": { "enabled": false }
+ }
+ }
+ }
+ }
+}
+```
+
+**Почему именно такие значения по умолчанию?**
+- `autoCapture` + `smartExtraction` → агент автоматически учится на каждом разговоре
+- `autoRecall` → релевантные воспоминания подставляются перед каждым ответом
+- `extractMinMessages: 2` → извлечение срабатывает в обычном двухходовом диалоге
+- `sessionMemory.enabled: false` → поиск не засоряется сводками сессий с первого дня
+
+Проверьте и перезапустите:
+
+```bash
+openclaw config validate
+openclaw gateway restart
+openclaw logs --follow --plain | grep "memory-lancedb-pro"
+```
+
+Вы должны увидеть:
+- `memory-lancedb-pro: smart extraction enabled`
+- `memory-lancedb-pro@...: plugin registered`
+
+Готово. Теперь у вашего агента есть долгосрочная память.
+
+
+Дополнительные варианты установки (для действующих пользователей и апгрейдов)
+
+**Уже используете OpenClaw?**
+
+1. Добавьте плагин в `plugins.load.paths` как **абсолютный** путь
+2. Привяжите memory slot: `plugins.slots.memory = "memory-lancedb-pro"`
+3. Проверьте: `openclaw plugins info memory-lancedb-pro && openclaw memory-pro stats`
+
+**Обновляетесь с версии до v1.1.0?**
+
+```bash
+# 1) Резервная копия
+openclaw memory-pro export --scope global --output memories-backup.json
+# 2) Пробный запуск
+openclaw memory-pro upgrade --dry-run
+# 3) Выполнить апгрейд
+openclaw memory-pro upgrade
+# 4) Проверка
+openclaw memory-pro stats
+```
+
+Изменения поведения и причины апгрейда описаны в `CHANGELOG-v1.1.0.md`.
+
+
+
+
+Быстрый импорт для Telegram Bot (нажмите, чтобы раскрыть)
+
+Если вы используете Telegram-интеграцию OpenClaw, самый простой путь — отправить команду импорта прямо основному боту вместо ручного редактирования конфига.
+
+Отправьте такое сообщение:
+
+```text
+Help me connect this memory plugin with the most user-friendly configuration: https://github.com/CortexReach/memory-lancedb-pro
+
+Requirements:
+1. Set it as the only active memory plugin
+2. Use Jina for embedding
+3. Use Jina for reranker
+4. Use gpt-4o-mini for the smart-extraction LLM
+5. Enable autoCapture, autoRecall, smartExtraction
+6. extractMinMessages=2
+7. sessionMemory.enabled=false
+8. captureAssistant=false
+9. retrieval mode=hybrid, vectorWeight=0.7, bm25Weight=0.3
+10. rerank=cross-encoder, candidatePoolSize=12, minScore=0.6, hardMinScore=0.62
+11. Generate the final openclaw.json config directly, not just an explanation
+```
+
+
+
+---
+
+## Экосистема
+
+memory-lancedb-pro — это основной плагин. Сообщество построило вокруг него инструменты, чтобы установка и ежедневная работа были еще проще.
+
+### Скрипт установки: установка, апгрейд и ремонт в один клик
+
+> **[CortexReach/toolbox/memory-lancedb-pro-setup](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)**
+
+Это не просто установщик: скрипт грамотно обрабатывает широкий набор реальных сценариев.
+
+| Ваша ситуация | Что делает скрипт |
+|---|---|
+| Никогда не устанавливали | Скачивает заново → ставит зависимости → помогает выбрать конфиг → записывает в `openclaw.json` → перезапускает |
+| Установлено через `git clone`, но застряли на старом коммите | Автоматически делает `git fetch` + `checkout` на актуальную версию → переустанавливает зависимости → проверяет |
+| В конфиге есть невалидные поля | Автоматически находит их через schema filter и удаляет неподдерживаемые значения |
+| Установлено через `npm` | Пропускает git-обновление и напоминает вручную запустить `npm update` |
+| `openclaw` CLI сломан из-за невалидного конфига | Фолбэк: читает путь workspace напрямую из файла `openclaw.json` |
+| Используется `extensions/`, а не `plugins/` | Автоматически определяет расположение плагина по конфигу или файловой системе |
+| Уже актуальная версия | Запускает только health checks, без изменений |
+
+```bash
+bash setup-memory.sh # Установить или обновить
+bash setup-memory.sh --dry-run # Только предпросмотр
+bash setup-memory.sh --beta # Включить pre-release версии
+bash setup-memory.sh --uninstall # Откатить конфиг и удалить плагин
+```
+
+Встроенные пресеты провайдеров: **Jina / DashScope / SiliconFlow / OpenAI / Ollama**, либо любой собственный OpenAI-compatible API. Полное использование (включая `--ref`, `--selfcheck-only` и другое) смотрите в [README скрипта установки](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup).
+
+### Навык Claude Code / OpenClaw: настройка под управлением ИИ
+
+> **[CortexReach/memory-lancedb-pro-skill](https://github.com/CortexReach/memory-lancedb-pro-skill)**
+
+Установите этот навык, и ваш ИИ-агент (Claude Code или OpenClaw) получит глубокое знание всех возможностей memory-lancedb-pro. Достаточно сказать **"help me enable the best config"**, и вы получите:
+
+- **A guided 7-step setup flow** with 4 deployment options:
+ - Full power (Jina + OpenAI) / Budget (SiliconFlow's free reranker) / Simple (OpenAI only) / Fully local (Ollama, zero API cost)
+- **Correct usage of all 9 MCP tools**: `memory_recall`, `memory_store`, `memory_forget`, `memory_update`, `memory_stats`, `memory_list`, `self_improvement_log`, `self_improvement_extract_skill`, `self_improvement_review` *(the full set requires `enableManagementTools: true`; the standard Quick Start exposes only the 4 core tools)*
+- **Guardrails against common pitfalls**: enabling the plugin in the workspace, `autoRecall` defaulting to false, the jiti cache, environment variables, memory scope isolation, and more
+
+**Install for Claude Code:**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.claude/skills/memory-lancedb-pro
+```
+
+**Install for OpenClaw:**
+```bash
+git clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.openclaw/workspace/skills/memory-lancedb-pro-skill
+```
+
+---
+
+## Video Walkthrough
+
+> A full walkthrough: installation, configuration, and the internals of hybrid search.
+
+[](https://youtu.be/MtukF1C8epQ)
+**https://youtu.be/MtukF1C8epQ**
+
+[](https://www.bilibili.com/video/BV1zUf2BGEgn/)
+**https://www.bilibili.com/video/BV1zUf2BGEgn/**
+
+---
+
+## Architecture
+
+```
+┌─────────────────────────────────────────────────────────┐
+│ index.ts (Entry Point) │
+│ Plugin Registration · Config Parsing · Lifecycle Hooks │
+└────────┬──────────┬──────────┬──────────┬───────────────┘
+ │ │ │ │
+ ┌────▼───┐ ┌────▼───┐ ┌───▼────┐ ┌──▼──────────┐
+ │ store │ │embedder│ │retriever│ │ scopes │
+ │ .ts │ │ .ts │ │ .ts │ │ .ts │
+ └────────┘ └────────┘ └────────┘ └─────────────┘
+ │ │
+ ┌────▼───┐ ┌─────▼──────────┐
+ │migrate │ │noise-filter.ts │
+ │ .ts │ │adaptive- │
+ └────────┘ │retrieval.ts │
+ └────────────────┘
+ ┌─────────────┐ ┌──────────┐
+ │ tools.ts │ │ cli.ts │
+ │ (Agent API) │ │ (CLI) │
+ └─────────────┘ └──────────┘
+```
+
+> For a deep dive into the full architecture, see [docs/memory_architecture_analysis.md](docs/memory_architecture_analysis.md).
+
+
+File reference (click to expand)
+
+| File | Purpose |
+| --- | --- |
+| `index.ts` | Plugin entry point. Registers with the OpenClaw plugin API, parses the config, wires up lifecycle hooks |
+| `openclaw.plugin.json` | Plugin metadata + full JSON Schema declaration for the config |
+| `cli.ts` | CLI commands: `memory-pro list/search/stats/delete/delete-bulk/export/import/reembed/upgrade/migrate` |
+| `src/store.ts` | LanceDB storage layer. Table creation / FTS index / vector search / BM25 search / CRUD |
+| `src/embedder.ts` | Embedding abstraction. Works with any OpenAI-compatible API provider |
+| `src/retriever.ts` | Hybrid search engine. Vector search + BM25 → hybrid fusion → reranking → lifecycle decay → filtering |
+| `src/scopes.ts` | Access control across multiple memory scopes |
+| `src/tools.ts` | Agent tool definitions: `memory_recall`, `memory_store`, `memory_forget`, `memory_update` + management tools |
+| `src/noise-filter.ts` | Filters out agent refusals, meta-questions, greetings, and low-quality content |
+| `src/adaptive-retrieval.ts` | Decides whether a given query needs memory retrieval at all |
+| `src/migrate.ts` | Migration from the built-in `memory-lancedb` to Pro |
+| `src/smart-extractor.ts` | LLM-driven 6-category extraction with L0/L1/L2 layered storage and two-stage deduplication |
+| `src/decay-engine.ts` | Weibull stretched exponential decay model |
+| `src/tier-manager.ts` | Three-tier promotion/demotion: Peripheral ↔ Working ↔ Core |
+
+
+
+---
+
+## Key Features
+
+### Hybrid Search
+
+```
+Query → embedQuery() ─┐
+ ├─→ Hybrid Fusion → Rerank → Lifecycle Decay Boost → Length Norm → Filter
+Query → BM25 FTS ─────┘
+```
+
+- **Vector search**: semantic similarity via LanceDB ANN (cosine distance)
+- **BM25 full-text search**: exact keyword matching via the LanceDB FTS index
+- **Hybrid fusion**: the vector score serves as the base, with BM25 hits receiving a weighted boost (not standard RRF, but a variant tuned for real-world recall quality)
+- **Tunable weights**: `vectorWeight`, `bm25Weight`, `minScore`
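+
+The fusion step can be sketched as a weighted combination (an illustrative simplification, not the plugin's actual implementation; the parameter names mirror the `vectorWeight` / `bm25Weight` config keys):
+
+```typescript
+// Illustrative sketch of vector-base + BM25-boost fusion (not the shipped code).
+// vectorScore: cosine similarity in [0, 1]; bm25Score: normalized BM25 in [0, 1].
+function fuseScores(
+  vectorScore: number,
+  bm25Score: number,
+  vectorWeight = 0.7,
+  bm25Weight = 0.3,
+): number {
+  // The vector score is the base; a BM25 keyword hit adds a weighted boost.
+  return vectorWeight * vectorScore + bm25Weight * bm25Score;
+}
+
+// An entry matching both semantically and by keyword outranks a semantic-only hit:
+console.log(fuseScores(0.8, 0.9) > fuseScores(0.8, 0)); // true
+```
+
+Under this sketch's default weights, a pure keyword hit alone can never outrank a strong semantic match, which is the intended bias.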
+
+### Cross-Encoder Reranking
+
+- Built-in adapters for **Jina**, **SiliconFlow**, **Voyage AI**, and **Pinecone**
+- Works with any Jina-compatible endpoint (e.g. Hugging Face TEI, DashScope)
+- Hybrid scoring: 60% cross-encoder + 40% original fused score
+- Graceful degradation: falls back to cosine similarity if the API fails
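+
+A hypothetical sketch of the 60/40 blend and the fallback path (`null` stands in for a failed rerank API call):
+
+```typescript
+// Hypothetical sketch: blend cross-encoder relevance with the original fused score.
+function blendScores(fusedScore: number, rerankScore: number | null): number {
+  if (rerankScore === null) return fusedScore; // graceful degradation on API failure
+  return 0.6 * rerankScore + 0.4 * fusedScore;
+}
+
+console.log(blendScores(0.7, 0.9)); // ≈ 0.82
+console.log(blendScores(0.7, null)); // 0.7 (fallback to the fused score)
+```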
+
+### Multi-Stage Scoring Pipeline
+
+| Stage | Effect |
+| --- | --- |
+| **Hybrid Fusion** | Combines semantic recall with exact keyword matching |
+| **Cross-Encoder Rerank** | Promotes semantically precise hits |
+| **Lifecycle Decay Boost** | Weibull recency + access frequency + importance × confidence |
+| **Length Normalization** | Keeps long entries from dominating (anchor: 500 chars) |
+| **Hard Min Score** | Drops irrelevant results (default: 0.35) |
+| **MMR Diversity** | Cosine similarity > 0.85 → demoted |
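+
+As an example of one stage, length normalization with the 500-char anchor might look like this (the exact damping curve is an assumption; only the anchor value comes from the table above):
+
+```typescript
+// Illustrative length normalization: entries longer than the anchor are damped
+// so sheer verbosity cannot win the ranking. Short entries pass through untouched.
+function lengthNormalize(score: number, textLength: number, anchor = 500): number {
+  if (textLength <= anchor) return score;
+  return score * (anchor / textLength);
+}
+
+console.log(lengthNormalize(0.9, 250)); // 0.9 (under the anchor)
+console.log(lengthNormalize(0.9, 1000)); // 0.45 (twice the anchor, score halved)
+```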
+
+### Smart Memory Extraction (v1.1.0)
+
+- **LLM-powered 6-category extraction**: profile, preferences, entities, events, cases, patterns
+- **L0/L1/L2 layered storage**: L0 (one-sentence index) → L1 (structured summary) → L2 (full narrative)
+- **Two-stage deduplication**: vector-similarity prefilter (≥0.7) → LLM semantic decision (CREATE/MERGE/SKIP)
+- **Category-aware merging**: `profile` always merges, `events` and `cases` are append-only
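+
+The two-stage dedup flow can be outlined as follows (a hypothetical sketch: `llmDecide` is a stand-in for the plugin's LLM call, and only the 0.7 threshold comes from the list above):
+
+```typescript
+type DedupDecision = "CREATE" | "MERGE" | "SKIP";
+
+// Stand-in for stage two: the real plugin asks an LLM to compare the texts.
+function llmDecide(_candidate: string, _existing: string): DedupDecision {
+  return "MERGE";
+}
+
+function dedup(
+  candidate: string,
+  nearest: { text: string; similarity: number } | null,
+): DedupDecision {
+  // Stage 1: cheap vector prefilter; only near-duplicates reach the LLM.
+  if (!nearest || nearest.similarity < 0.7) return "CREATE";
+  // Stage 2: semantic decision by the LLM (CREATE / MERGE / SKIP).
+  return llmDecide(candidate, nearest.text);
+}
+
+console.log(dedup("User prefers tabs", null)); // "CREATE"
+console.log(dedup("User prefers tabs", { text: "Prefers tab indentation", similarity: 0.85 })); // "MERGE"
+```
+
+The prefilter keeps LLM costs bounded: most new memories never trigger a model call at all.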
+
+### Memory Lifecycle Management (v1.1.0)
+
+- **Weibull decay engine**: composite score = recency + frequency + intrinsic value
+- **Three-tier promotion**: `Peripheral ↔ Working ↔ Core` with configurable thresholds
+- **Access reinforcement**: frequently recalled entries decay more slowly (spaced-repetition style)
+- **Importance-aware half-life**: important memories live longer
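+
+The recency term can be sketched as a stretched exponential (an illustrative formula; the beta values match the lifecycle configuration defaults, while the half-life calibration is an assumption):
+
+```typescript
+// Illustrative Weibull (stretched exponential) recency decay.
+// beta < 1: long-tailed forgetting (core); beta > 1: fast early decay (peripheral).
+// lambda is calibrated so the score is 0.5 at t = halfLifeDays for any beta.
+function weibullRecency(ageDays: number, halfLifeDays = 30, beta = 1.0): number {
+  const lambda = halfLifeDays / Math.pow(Math.LN2, 1 / beta);
+  return Math.exp(-Math.pow(ageDays / lambda, beta));
+}
+
+console.log(weibullRecency(30, 30, 1.0)); // ≈ 0.5 at the half-life
+// At 90 days, a core memory (beta 0.8) retains more value than a peripheral one (beta 1.3):
+console.log(weibullRecency(90, 30, 0.8) > weibullRecency(90, 30, 1.3)); // true
+```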
+
+### Memory Scope Isolation
+
+- Built-in memory scopes: `global`, `agent:`, `custom:`, `project:`, `user:`
+- Agent access control via `scopes.agentAccess`
+- By default, each agent sees `global` plus its own `agent:` scope
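+
+The default visibility rule might be expressed like this (a hypothetical helper; the `agentAccess` shape follows the example in the Configuration section):
+
+```typescript
+// Hypothetical scope resolution: an explicit agentAccess entry wins;
+// otherwise an agent sees `global` plus its own private `agent:<id>` scope.
+function resolveScopes(
+  agentId: string,
+  agentAccess: Record<string, string[]> = {},
+): string[] {
+  return agentAccess[agentId] ?? ["global", `agent:${agentId}`];
+}
+
+console.log(resolveScopes("main")); // ["global", "agent:main"]
+console.log(resolveScopes("discord-bot", { "discord-bot": ["global", "agent:discord-bot"] }));
+```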
+
+### Auto-Capture and Auto-Recall
+
+- **Auto-Capture** (`agent_end`): extracts preference/fact/decision/entity entries from the conversation, deduplicates them, and stores up to 3 entries per turn
+- **Auto-Recall** (`before_agent_start`): injects `` context (up to 3 entries)
+
+### Noise Filtering and Adaptive Memory Retrieval
+
+- Filters out low-quality content: agent refusals, meta-questions, greetings
+- Skips memory retrieval for greetings, slash commands, simple acknowledgements, and emoji
+- Force-enables memory retrieval on trigger keywords ("remember", "previously", "last time")
+- CJK-aware thresholds (Chinese: 6 chars vs English: 15 chars)
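+
+Combined, those rules suggest a gate along these lines (an illustrative sketch; the real heuristics live in `src/adaptive-retrieval.ts` and are more nuanced):
+
+```typescript
+// Illustrative retrieval gate: keyword triggers, skip rules, CJK-aware lengths.
+const FORCE_KEYWORDS = ["remember", "previously", "last time"];
+
+function needsMemoryRetrieval(query: string): boolean {
+  const q = query.trim().toLowerCase();
+  if (FORCE_KEYWORDS.some((k) => q.includes(k))) return true; // forced recall
+  if (q.length === 0 || q.startsWith("/")) return false; // slash commands skip
+  // CJK packs more meaning per character, so its length threshold is lower.
+  const hasCJK = /[\u4e00-\u9fff]/.test(q);
+  return q.length >= (hasCJK ? 6 : 15);
+}
+
+console.log(needsMemoryRetrieval("hi")); // false (too short)
+console.log(needsMemoryRetrieval("/new")); // false (slash command)
+console.log(needsMemoryRetrieval("what did we decide last time?")); // true
+```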
+
+---
+
+
+Comparison with the built-in memory-lancedb (click to expand)
+
+| Feature | Built-in `memory-lancedb` | **memory-lancedb-pro** |
+| --- | :---: | :---: |
+| Vector search | Yes | Yes |
+| BM25 full-text search | - | Yes |
+| Hybrid fusion (Vector + BM25) | - | Yes |
+| Cross-encoder reranking (multiple providers) | - | Yes |
+| Recency boost and time decay | - | Yes |
+| Length normalization | - | Yes |
+| MMR diversity | - | Yes |
+| Memory scope isolation | - | Yes |
+| Noise filtering | - | Yes |
+| Adaptive memory retrieval | - | Yes |
+| Management CLI | - | Yes |
+| Session memory | - | Yes |
+| Task-aware embeddings | - | Yes |
+| **LLM smart extraction (6 categories)** | - | Yes (v1.1.0) |
+| **Weibull decay + tier promotion** | - | Yes (v1.1.0) |
+| Any OpenAI-compatible embeddings | Limited | Yes |
+
+
+
+---
+
+## Configuration
+
+
+Full configuration example
+
+```json
+{
+ "embedding": {
+ "apiKey": "${JINA_API_KEY}",
+ "model": "jina-embeddings-v5-text-small",
+ "baseURL": "https://api.jina.ai/v1",
+ "dimensions": 1024,
+ "taskQuery": "retrieval.query",
+ "taskPassage": "retrieval.passage",
+ "normalized": true
+ },
+ "dbPath": "~/.openclaw/memory/lancedb-pro",
+ "autoCapture": true,
+ "autoRecall": true,
+ "retrieval": {
+ "mode": "hybrid",
+ "vectorWeight": 0.7,
+ "bm25Weight": 0.3,
+ "minScore": 0.3,
+ "rerank": "cross-encoder",
+ "rerankApiKey": "${JINA_API_KEY}",
+ "rerankModel": "jina-reranker-v3",
+ "rerankEndpoint": "https://api.jina.ai/v1/rerank",
+ "rerankProvider": "jina",
+ "candidatePoolSize": 20,
+ "recencyHalfLifeDays": 14,
+ "recencyWeight": 0.1,
+ "filterNoise": true,
+ "lengthNormAnchor": 500,
+ "hardMinScore": 0.35,
+ "timeDecayHalfLifeDays": 60,
+ "reinforcementFactor": 0.5,
+ "maxHalfLifeMultiplier": 3
+ },
+ "enableManagementTools": false,
+ "scopes": {
+ "default": "global",
+ "definitions": {
+ "global": { "description": "Shared knowledge" },
+ "agent:discord-bot": { "description": "Discord bot private" }
+ },
+ "agentAccess": {
+ "discord-bot": ["global", "agent:discord-bot"]
+ }
+ },
+ "sessionMemory": {
+ "enabled": false,
+ "messageCount": 15
+ },
+ "smartExtraction": true,
+ "llm": {
+ "apiKey": "${OPENAI_API_KEY}",
+ "model": "gpt-4o-mini",
+ "baseURL": "https://api.openai.com/v1"
+ },
+ "extractMinMessages": 2,
+ "extractMaxChars": 8000
+}
+```
+
+
+
+
+Embedding providers
+
+Works with **any OpenAI-compatible embedding API**:
+
+| Provider | Model | Base URL | Dimensions |
+| --- | --- | --- | --- |
+| **Jina** (recommended) | `jina-embeddings-v5-text-small` | `https://api.jina.ai/v1` | 1024 |
+| **OpenAI** | `text-embedding-3-small` | `https://api.openai.com/v1` | 1536 |
+| **Voyage** | `voyage-4-lite` / `voyage-4` | `https://api.voyageai.com/v1` | 1024 / 1024 |
+| **Google Gemini** | `gemini-embedding-001` | `https://generativelanguage.googleapis.com/v1beta/openai/` | 3072 |
+| **Ollama** (local) | `nomic-embed-text` | `http://localhost:11434/v1` | provider-dependent |
+
+
+
+
+Reranking providers
+
+Cross-encoder reranking supports multiple providers via `rerankProvider`:
+
+| Provider | `rerankProvider` | Example Model |
+| --- | --- | --- |
+| **Jina** (default) | `jina` | `jina-reranker-v3` |
+| **SiliconFlow** (free tier available) | `siliconflow` | `BAAI/bge-reranker-v2-m3` |
+| **Voyage AI** | `voyage` | `rerank-2.5` |
+| **Pinecone** | `pinecone` | `bge-reranker-v2-m3` |
+
+Any Jina-compatible rerank endpoint also works: set `rerankProvider: "jina"` and point `rerankEndpoint` at it (e.g. Hugging Face TEI, DashScope `qwen3-rerank`).
+
+
+
+
+Smart Extraction (LLM) — v1.1.0
+
+When `smartExtraction` is enabled (default: `true`), the plugin uses an LLM to intelligently extract and classify memories instead of relying on regex-based rules.
+
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `smartExtraction` | boolean | `true` | Enable/disable LLM-powered 6-category extraction |
+| `llm.auth` | string | `api-key` | `api-key` uses `llm.apiKey` / `embedding.apiKey`; `oauth` defaults to the plugin-scoped OAuth token file |
+| `llm.apiKey` | string | *(falls back to `embedding.apiKey`)* | LLM provider API key |
+| `llm.model` | string | `openai/gpt-oss-120b` | LLM model name |
+| `llm.baseURL` | string | *(falls back to `embedding.baseURL`)* | LLM API URL |
+| `llm.oauthProvider` | string | `openai-codex` | OAuth provider ID used when `llm.auth = "oauth"` |
+| `llm.oauthPath` | string | `~/.openclaw/.memory-lancedb-pro/oauth.json` | OAuth token file path when `llm.auth = "oauth"` |
+| `llm.timeoutMs` | number | `30000` | LLM request timeout in milliseconds |
+| `extractMinMessages` | number | `2` | Minimum number of messages before extraction triggers |
+| `extractMaxChars` | number | `8000` | Maximum number of characters sent to the LLM |
+
+
+OAuth `llm` config (reuses your existing Codex / ChatGPT login cache for LLM requests):
+```json
+{
+ "llm": {
+ "auth": "oauth",
+ "oauthProvider": "openai-codex",
+ "model": "gpt-5.4",
+ "oauthPath": "${HOME}/.openclaw/.memory-lancedb-pro/oauth.json",
+ "timeoutMs": 30000
+ }
+}
+```
+
+Notes for `llm.auth: "oauth"`:
+
+- `llm.oauthProvider` is currently `openai-codex`.
+- By default, the OAuth token is stored at `~/.openclaw/.memory-lancedb-pro/oauth.json`.
+- Set `llm.oauthPath` if you want to keep this file somewhere else.
+- `auth login` saves a snapshot of the previous api-key-mode `llm` config next to the OAuth file, and `auth logout` restores that snapshot when available.
+- When switching from `api-key` to `oauth`, `llm.baseURL` does not carry over automatically. Set it manually in OAuth mode only if you genuinely need a custom ChatGPT/Codex-compatible backend.
+
+
+
+
+Lifecycle configuration (Decay + Tier)
+
+| Field | Default | Description |
+|-------|---------|-------------|
+| `decay.recencyHalfLifeDays` | `30` | Base half-life for Weibull recency decay |
+| `decay.frequencyWeight` | `0.3` | Weight of access frequency in the composite score |
+| `decay.intrinsicWeight` | `0.3` | Weight of `importance × confidence` |
+| `decay.betaCore` | `0.8` | Weibull beta for `core`-tier memories |
+| `decay.betaWorking` | `1.0` | Weibull beta for `working` |
+| `decay.betaPeripheral` | `1.3` | Weibull beta for `peripheral` |
+| `tier.coreAccessThreshold` | `10` | Minimum recall count before promotion to `core` |
+| `tier.peripheralAgeDays` | `60` | Age threshold for demoting stale memories |
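+
+A minimal sketch of how the two `tier.*` thresholds could drive transitions (illustrative only; the shipped `src/tier-manager.ts` weighs more signals, and the peripheral-to-working step of 3 recalls is an invented placeholder):
+
+```typescript
+type Tier = "peripheral" | "working" | "core";
+
+// Illustrative tier transition driven by recall count and age.
+function nextTier(
+  tier: Tier,
+  accessCount: number,
+  ageDays: number,
+  coreAccessThreshold = 10,
+  peripheralAgeDays = 60,
+): Tier {
+  // Promotion: frequently recalled memories climb toward core.
+  if (tier === "working" && accessCount >= coreAccessThreshold) return "core";
+  if (tier === "peripheral" && accessCount >= 3) return "working"; // invented step
+  // Demotion: stale, never-recalled working memories sink to peripheral.
+  if (tier === "working" && ageDays >= peripheralAgeDays && accessCount === 0) {
+    return "peripheral";
+  }
+  return tier;
+}
+
+console.log(nextTier("working", 12, 5)); // "core"
+console.log(nextTier("working", 0, 90)); // "peripheral"
+```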
+
+
+
+
+Access reinforcement
+
+Frequently recalled entries decay more slowly (spaced-repetition style).
+
+Config keys (under `retrieval`):
+- `reinforcementFactor` (0-2, default: `0.5`); set to `0` to disable
+- `maxHalfLifeMultiplier` (1-10, default: `3`); hard cap on the effective half-life
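+
+One plausible reading of these two knobs (an assumed formula, not the plugin's exact one): each recall stretches the effective half-life logarithmically, capped by the multiplier:
+
+```typescript
+// Assumed reinforcement formula: access count stretches the half-life,
+// hard-capped at maxHalfLifeMultiplier times the base value.
+function effectiveHalfLife(
+  baseHalfLifeDays: number,
+  accessCount: number,
+  reinforcementFactor = 0.5,
+  maxHalfLifeMultiplier = 3,
+): number {
+  const multiplier = Math.min(
+    1 + reinforcementFactor * Math.log1p(accessCount),
+    maxHalfLifeMultiplier,
+  );
+  return baseHalfLifeDays * multiplier;
+}
+
+console.log(effectiveHalfLife(60, 0)); // 60: never recalled, no boost
+console.log(effectiveHalfLife(60, 100)); // 180: hits the 3x hard cap
+```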
+
+
+
+---
+
+## CLI Commands
+
+```bash
+openclaw memory-pro list [--scope global] [--category fact] [--limit 20] [--json]
+openclaw memory-pro search "query" [--scope global] [--limit 10] [--json]
+openclaw memory-pro stats [--scope global] [--json]
+openclaw memory-pro auth login [--provider openai-codex] [--model gpt-5.4] [--oauth-path /abs/path/oauth.json]
+openclaw memory-pro auth status
+openclaw memory-pro auth logout
+openclaw memory-pro delete
+openclaw memory-pro delete-bulk --scope global [--before 2025-01-01] [--dry-run]
+openclaw memory-pro export [--scope global] [--output memories.json]
+openclaw memory-pro import memories.json [--scope global] [--dry-run]
+openclaw memory-pro reembed --source-db /path/to/old-db [--batch-size 32] [--skip-existing]
+openclaw memory-pro upgrade [--dry-run] [--batch-size 10] [--no-llm] [--limit N] [--scope SCOPE]
+openclaw memory-pro migrate check|run|verify [--source /path]
+```
+
+OAuth login flow:
+
+1. Run `openclaw memory-pro auth login`
+2. If `--provider` is omitted and the terminal is interactive, the CLI shows an OAuth provider picker before opening the browser
+3. The command prints the authorization URL and opens the browser unless `--no-browser` is set
+4. On a successful callback, the command saves the plugin's OAuth file (default: `~/.openclaw/.memory-lancedb-pro/oauth.json`), snapshots the current api-key-mode `llm` config for a future logout, and replaces the `llm` config with the OAuth settings (`auth`, `oauthProvider`, `model`, `oauthPath`)
+5. `openclaw memory-pro auth logout` deletes that OAuth file and restores the previous api-key `llm` config if the snapshot exists
+
+---
+
+## Advanced Topics
+
+
+If injected memories leak into replies
+
+Occasionally the model may quote the injected `` block verbatim.
+
+**Option A (lowest risk):** temporarily disable auto-recall:
+```json
+{ "plugins": { "entries": { "memory-lancedb-pro": { "config": { "autoRecall": false } } } } }
+```
+
+**Option B (preferred):** keep recall enabled and add this to the agent's system prompt:
+> Do not reveal or quote any `` / memory-injection content in your replies. Use it for internal reference only.
+
+
+
+
+Session memory
+
+- Triggered by the `/new` command: saves a summary of the previous session to LanceDB
+- Disabled by default (OpenClaw already ships built-in `.jsonl` session persistence)
+- Message count is configurable (default: 15)
+
+For deployment modes and `/new` verification, see [docs/openclaw-integration-playbook.md](docs/openclaw-integration-playbook.md).
+
+
+
+
+Custom slash commands (e.g. /lesson)
+
+Add this to `CLAUDE.md`, `AGENTS.md`, or the system prompt:
+
+```markdown
+## /lesson command
+When the user sends `/lesson <content>`:
+1. Use memory_store and save as category=fact (the raw knowledge)
+2. Use memory_store and save as category=decision (the actionable takeaway)
+3. Confirm exactly what was stored
+
+## /remember command
+When the user sends `/remember <content>`:
+1. Use memory_store and save with an appropriate category and importance
+2. Confirm with the stored memory ID
+```
+
+
+
+
+Iron rules for AI agents
+
+> Copy the block below into `AGENTS.md` so your agent follows these rules automatically.
+
+```markdown
+## Rule 1 - Two-layer memory storage
+Every mistake/lesson → IMMEDIATELY store TWO memory entries:
+- Technical layer: Problem: [symptom]. Cause: [root cause]. Fix: [solution]. Prevention: [how to avoid]
+  (category: fact, importance >= 0.8)
+- Principle layer: Decision principle ([tag]): [behavioral rule]. Trigger: [when]. Action: [what to do]
+  (category: decision, importance >= 0.85)
+
+## Rule 2 - LanceDB hygiene
+Keep entries short and atomic (< 500 chars). No raw conversation summaries, no duplicates.
+
+## Rule 3 - Recall before retry
+On ANY tool failure, ALWAYS run memory_recall with relevant keywords BEFORE retrying.
+
+## Rule 4 - Confirm the target codebase
+Before making changes, confirm you are editing memory-lancedb-pro, not the built-in memory-lancedb.
+
+## Rule 5 - Clear the jiti cache after plugin code changes
+After modifying .ts files under plugins/, ALWAYS run rm -rf /tmp/jiti/ before openclaw gateway restart.
+```
+
+
+
+
+Database schema
+
+The LanceDB `memories` table:
+
+| Field | Type | Description |
+| --- | --- | --- |
+| `id` | string (UUID) | Primary key |
+| `text` | string | Memory text (FTS-indexed) |
+| `vector` | float[] | Embedding vector |
+| `category` | string | Storage category: `preference` / `fact` / `decision` / `entity` / `reflection` / `other` |
+| `scope` | string | Memory scope identifier (e.g. `global`, `agent:main`) |
+| `importance` | float | Importance score from 0 to 1 |
+| `timestamp` | int64 | Creation timestamp (ms) |
+| `metadata` | string (JSON) | Extended metadata |
+
+Common `metadata` keys in v1.1.0: `l0_abstract`, `l1_overview`, `l2_content`, `memory_category`, `tier`, `access_count`, `confidence`, `last_accessed_at`
+
+> **Category note:** the top-level `category` field uses the 6 storage categories. Smart Extraction's semantic labels (`profile` / `preferences` / `entities` / `events` / `cases` / `patterns`) are stored in `metadata.memory_category`.
+
+
+
+
+Troubleshooting
+
+### "Cannot mix BigInt and other types" (LanceDB / Apache Arrow)
+
+Starting with LanceDB 0.26+, some numeric columns may come back as `BigInt`. Upgrade to **memory-lancedb-pro >= 1.0.14**: the plugin now coerces such values with `Number(...)` before doing arithmetic.
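+
+The failure and the fix are easy to demonstrate (an illustrative snippet, not taken from the plugin's source):
+
+```typescript
+// Apache Arrow can surface int64 columns as BigInt. Mixing BigInt and number
+// in arithmetic throws at runtime, so coerce with Number(...) first.
+const timestamp = 1700000000000n; // as an int64 column value might come back
+
+let threw = false;
+try {
+  void ((timestamp as unknown as number) / 1000); // BigInt / number at runtime
+} catch {
+  threw = true; // TypeError: Cannot mix BigInt and other types
+}
+
+const seconds = Number(timestamp) / 1000; // the coercion applied since 1.0.14
+console.log(threw, seconds); // true 1700000000
+```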
+
+
+
+---
+
+## Documentation
+
+| Document | Description |
+| --- | --- |
+| [OpenClaw Integration Playbook](docs/openclaw-integration-playbook.md) | Deployment modes, verification, regression matrix |
+| [Memory Architecture Analysis](docs/memory_architecture_analysis.md) | Deep dive into the full architecture |
+| [CHANGELOG v1.1.0](docs/CHANGELOG-v1.1.0.md) | Behavior changes in v1.1.0 and why to upgrade |
+| [Long-Context Chunking](docs/long-context-chunking.md) | Chunking strategy for long documents |
+
+---
+
+## Beta: Smart Memory v1.1.0
+
+> Status: Beta; available via `npm i memory-lancedb-pro@beta`. Users on the stable `latest` tag are unaffected.
+
+| Feature | Description |
+|---------|-------------|
+| **Smart extraction** | LLM-powered 6-category extraction with L0/L1/L2 metadata. Falls back to regex-based rules when disabled. |
+| **Lifecycle scoring** | Weibull decay is built into memory retrieval: frequently accessed, high-importance entries rank higher. |
+| **Tier management** | Three-tier system (Core → Working → Peripheral) with automatic promotion and demotion. |
+
+Feedback: [GitHub Issues](https://github.com/CortexReach/memory-lancedb-pro/issues) · Rollback: `npm i memory-lancedb-pro@latest`
+
+---
+
+## Dependencies
+
+| Package | Purpose |
+| --- | --- |
+| `@lancedb/lancedb` ≥0.26.2 | Vector database (ANN + FTS) |
+| `openai` ≥6.21.0 | OpenAI-compatible embedding API client |
+| `@sinclair/typebox` 0.34.48 | JSON Schema type definitions |
+
+---
+
+## Contributors
+
+