### Problem
Current memory cleanup (if implemented at the agent level) is destructive: old or duplicate entries are deleted. This loses information — 5 related entries become 2 surviving entries rather than 1 dense entry containing the essence of all 5.
As memory modules approach their size limits, important historical context is dropped rather than compressed.
### Proposed Solution
Adaptive Compaction: when a memory module exceeds a threshold, run an LLM summarization pass that compresses N related entries into 1 semantically-rich entry (reference: AgentRM paper, arxiv 2603.13110).
Before compaction (5 entries, ~40 lines):
- Decision A made on 2026-01-10
- Decision A revised on 2026-01-15
- Decision A applied to project X
- Related decision B on 2026-02-01
- Decision B supersedes part of A
After compaction (1 entry, ~5 lines):
- [2026-02] Decisions A+B: <semantic summary preserving all key facts>
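The merge step above can be sketched as follows. This is a minimal illustration, not the actual implementation: `Entry` and the `summarize` callable are hypothetical names, and `summarize` stands in for the LLM summarization pass, which must preserve all key facts from its inputs.

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable

@dataclass
class Entry:
    created: date
    text: str

def compact(entries: list[Entry], summarize: Callable[[list[str]], str]) -> Entry:
    """Merge N related entries into one dense entry.

    `summarize` is a stand-in for an LLM call (hypothetical interface).
    The merged entry keeps the newest timestamp so recency-based rules
    still apply to it.
    """
    texts = [e.text for e in entries]
    newest = max(e.created for e in entries)
    return Entry(created=newest, text=summarize(texts))

# Example with a trivial stand-in summarizer:
merged = compact(
    [Entry(date(2026, 1, 10), "Decision A made"),
     Entry(date(2026, 1, 15), "Decision A revised")],
    summarize=lambda texts: " | ".join(texts),
)
```

In practice the prompt to the real summarizer would instruct the model to emit the `[2026-02] Decisions A+B: ...` style shown above.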
### Trigger conditions

```yaml
memory:
  compaction:
    enabled: true
    trigger_lines: 70          # compact when the module exceeds this many lines
    target_lines: 40           # compact down to this many lines
    preserve_recency_days: 14  # never compact entries newer than N days
```

### Difference from destructive cleanup
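A sketch of how these three settings could drive the trigger logic (function and field names are illustrative assumptions, not the framework's API): `trigger_lines` decides whether to compact at all, and `preserve_recency_days` partitions entries into those eligible for compaction and those that are protected.

```python
from datetime import date, timedelta

def needs_compaction(lines: list[str], trigger_lines: int = 70) -> bool:
    # Fire only once the module grows past the configured threshold.
    return len(lines) > trigger_lines

def split_by_recency(entries: list[dict], today: date,
                     preserve_recency_days: int = 14):
    """Split entries into (eligible, protected) by the recency window.

    Each entry is assumed to carry a `created` date; only entries older
    than the window may be summarized away.
    """
    cutoff = today - timedelta(days=preserve_recency_days)
    eligible = [e for e in entries if e["created"] <= cutoff]
    protected = [e for e in entries if e["created"] > cutoff]
    return eligible, protected
```

Only the `eligible` group is handed to the summarization pass; `protected` entries are carried over verbatim, which keeps recent context exact even right after a compaction run.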
| Approach | What happens to old entries |
|---|---|
| Destructive cleanup | Deleted if "stale" or "duplicate" |
| Adaptive compaction | LLM-summarized into denser form, semantics preserved |
Compaction is complementary to cleanup — cleanup removes clearly obsolete facts, compaction preserves knowledge that's still relevant but verbose.
### Implementation
Could be a built-in hook triggered after memory writes, or exposed as a framework utility callable from agent memory maintenance crons.
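The built-in-hook variant could look roughly like this. Everything here is a hypothetical sketch: `MemoryModule` and the injected `compactor` callable are illustrative names, and the compactor stands in for the summarization pass described above.

```python
from typing import Callable

class MemoryModule:
    """Toy memory module that checks the compaction trigger after each write."""

    def __init__(self, compactor: Callable[[list[str]], list[str]],
                 trigger_lines: int = 70):
        self.lines: list[str] = []
        self.compactor = compactor          # summarization pass (stand-in)
        self.trigger_lines = trigger_lines

    def write(self, entry: str) -> None:
        self.lines.append(entry)
        # Built-in hook: run compaction when the threshold is exceeded.
        if len(self.lines) > self.trigger_lines:
            self.lines = self.compactor(self.lines)

# Stand-in compactor: folds everything beyond the last 40 lines into
# one summary line (a real compactor would LLM-summarize them instead).
module = MemoryModule(
    compactor=lambda ls: [f"[summary of {len(ls) - 40} entries]"] + ls[-40:],
)
```

The same `compactor` callable could equally be exposed as a framework utility and invoked from an agent's memory maintenance cron instead of from the write path.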