Add project isolation support for multi-project deployments #5
Conversation
- Add projectId to the AutoMemConfig interface for project scoping
- Update the HTTP client to send an X-Project-ID header when a project is configured
- Add a --project-id CLI flag to the setup command, with interactive prompting
- Add AUTOMEM_PROJECT_ID environment variable support
- Update both the main MCP server and the recall command to load the project ID from the environment

This enables multiple projects to use the same AutoMem backend with isolated memory spaces via the project_id parameter.
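To make the shape of the change concrete, here is a minimal sketch of the pieces the list above describes. The projectId field, the X-Project-ID header, and the AUTOMEM_PROJECT_ID variable come from this PR; everything else (the other field names, the auth scheme, the default endpoint, the other environment variable names) is assumed for illustration.

```typescript
// Minimal sketch of the configuration and header logic described above.
// AutoMemConfig's real definition lives in src/types.ts; the extra fields
// and the Bearer auth scheme here are assumptions for illustration.
interface AutoMemConfig {
  endpoint: string;
  apiKey?: string;
  projectId?: string; // optional: scopes requests to one project
}

function buildHeaders(config: AutoMemConfig): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
  };
  if (config.apiKey) {
    headers["Authorization"] = `Bearer ${config.apiKey}`; // scheme assumed
  }
  // Only send the isolation header when a project is configured,
  // so single-project deployments behave exactly as before.
  if (config.projectId) {
    headers["X-Project-ID"] = config.projectId;
  }
  return headers;
}

// AUTOMEM_PROJECT_ID is from this PR; the other variable names are guesses.
const config: AutoMemConfig = {
  endpoint: process.env.AUTOMEM_ENDPOINT ?? "http://localhost:8001",
  apiKey: process.env.AUTOMEM_API_KEY,
  projectId: process.env.AUTOMEM_PROJECT_ID,
};
```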
📝 Walkthrough

This PR adds optional projectId configuration support throughout the AutoMem system. The projectId can be specified via environment variables, CLI arguments, or programmatic configuration; it is validated and persisted through the setup workflow, and is conditionally injected as an X-Project-ID header in client requests.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant Config
    participant AutoMemClient
    participant HTTP

    User->>CLI: Run setup with --project-id
    CLI->>CLI: Read AUTOMEM_PROJECT_ID from env
    CLI->>CLI: Parse --project-id argument
    CLI->>User: Prompt for Project ID (if interactive)
    User->>CLI: Provide projectId
    CLI->>Config: Persist projectId to file
    Note over User,Config: Later: Using AutoMem with projectId
    User->>Config: Initialize with projectId in config
    Config->>AutoMemClient: Pass endpoint, apiKey, projectId
    User->>AutoMemClient: Make request
    AutoMemClient->>AutoMemClient: Check if projectId is set
    alt projectId set
        AutoMemClient->>HTTP: Add X-Project-ID header
    end
    AutoMemClient->>HTTP: Send request
    HTTP-->>AutoMemClient: Response
```
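The diagram's conditional-header branch might look like the following sketch; the fetch usage and error handling are illustrative assumptions, not the client's actual code.

```typescript
// Hypothetical request path mirroring the diagram above: check projectId
// before each call and attach the X-Project-ID header only when set.
async function sendRequest(
  config: { endpoint: string; projectId?: string },
  path: string,
  body: unknown,
): Promise<unknown> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
  };
  if (config.projectId) {
    // The "alt projectId set" branch from the diagram.
    headers["X-Project-ID"] = config.projectId;
  }
  const res = await fetch(`${config.endpoint}${path}`, {
    method: "POST",
    headers,
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    throw new Error(`AutoMem request failed: ${res.status}`);
  }
  return res.json();
}
```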
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Clarification on X-Project-ID Intent

@andrewleech Nice addition! Before this gets merged, I was hoping to clarify the intended purpose of the X-Project-ID header.

Option A: Organizational Tagging

If the goal is to automatically tag/categorize memories by project for better filtering and recall (all data still in the same database), then the backend would tag each memory with the project ID and filter recall queries on it.
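As a toy illustration of Option A's semantics (not the backend's actual code), everything lives in one store and the project ID acts only as a filterable tag:

```typescript
// Toy model of Option A: one shared store; projectId is just a tag used
// to filter recall. All names here are invented for illustration.
interface Memory {
  content: string;
  projectId?: string;
}

const sharedStore: Memory[] = []; // one database shared by all projects

function storeMemory(content: string, projectId?: string): void {
  sharedStore.push({ content, projectId }); // tagged, not isolated
}

function recall(query: string, projectId?: string): Memory[] {
  return sharedStore.filter(
    (m) =>
      m.content.includes(query) &&
      // Without a projectId the query sees everything; with one it sees
      // only that project's memories, though all data still coexists.
      (!projectId || m.projectId === projectId),
  );
}
```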
Option B: True Multi-Tenant Isolation

If the goal is actual data isolation between projects (separate databases, no cross-project data leakage), the backend needs to route requests to different FalkorDB graphs and Qdrant collections per project. There's a PR on the AutoMem backend (verygoodplugins/automem#29) that adds isolation header support.
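By contrast, a sketch of the Option B routing idea (the naming scheme below is hypothetical, not what verygoodplugins/automem#29 actually does):

```typescript
// Toy model of Option B: each project gets its own FalkorDB graph and
// Qdrant collection, derived from the X-Project-ID header. The naming
// scheme here is invented for illustration.
interface ProjectResources {
  falkorGraph: string;
  qdrantCollection: string;
}

function resolveResources(projectId?: string): ProjectResources {
  // Fall back to shared defaults when no X-Project-ID header is present.
  const id = projectId ?? "default";
  return {
    falkorGraph: `memories_${id}`,
    qdrantCollection: `automem_${id}`,
  };
}
```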
If Option A is what you're after, I have a backend PR already prepped too. The linked PR addresses requirements for a true multi-tenant architecture (which I have working), along with the changes needed to federate memories to each tenant based on the context of the memory. Happy to help coordinate MCP and backend alignment; I already had working branches heading for PR when I saw this. Which use case is this targeting?
---
I should clarify that I have a PR for this repo (the MCP) to handle those new headers as well, both in the MCP definition and on the fly via headers.
---
My goal was closest to A: I want one installation of the service to be used by multiple projects I'm working on; many of them are worktrees of the same codebase but very different development activities. I'm installing and using this on a single development server that might have up to 10 different Claude Code tmux sessions running in separate worktrees, so I didn't want their individual dev memories to get all mixed together.
---
@andrewleech have you tested it much? Not quite the use case I had in my head from your initial description, though the same solution works, I guess. I do question, though, the wisdom of separating their concerns so much with a memory solution like this.

Follow up

As this was originally built for long-term, full-scale memory across many projects/activities all at once from its inception by @jack-arturo, I'm genuinely curious how a per-project-folder variant would work in the real world. That is to say, do you still get the full benefits? Are you having some global and some per-project memories mixed for each agent, or is each truly siloed with no idea the others exist and no top-down memories they all share?

In any case, I have a second PR I haven't submitted yet for the main core that handles automated tagging based on an MCP config, similar to yours. I like your header better than what I had, though. I'd be interested in whether the simple solution I implemented would be sufficient for what you were already testing, or whether your actual usage of this concept found more tweaks were required. Look forward to hearing @jack-arturo weigh in too. Overall I think it's a genuinely useful header to offer.
---
I used it heavily on one development activity that ran for a few weeks before needing to be paused, and tried to use it on a few others with mixed degrees of success. It was hard to gauge how well it was working, to be honest. For context, I'm working on embedded system designs based on MicroPython, so most of the coding is in C and Python, but hardware is in the loop, so there's a mixture of PC tooling, compilers, gdb debugging, etc. I already have a shared CLAUDE.md for all related projects, used in the typical way for overall/shared guidance.

The first big project I tried AutoMem with was deeply complex work building a new threading engine for embedded platforms: lots of trial-and-error work on memory layout and timing debugging. I was hoping AutoMem would work as the long-term memory to keep Claude more focused over literally weeks of development; most of this time was spent finding and fixing bugs in the underlying integration running on hardware. I needed to keep track of what had been theorized, trialed, and failed, to avoid repeating the same things. At some point during this time AutoMem dropped out, though: the Docker service had been accidentally killed, so the test pretty much fell apart. I'd also been trying to replace the MCP with a CLI tool and hooks to make it get called deterministically. So... lots of variables... It wasn't a very good test. I had AutoMem configured in some other development branches briefly, but not enough to really know how well it was working.

I started this project separation because I couldn't tell from the documentation whether it was safe/supported to use one instance of AutoMem for multiple development efforts, or whether the memories would all be mixed in together. My primary goal is to find a memory system like this that can maintain an efficient trial-and-error development log for long-running development investigations and help maintain focus across compact cycles.
---
@andrewleech - I noticed in the recent MCP updates that the hooked automatic memories were removed; not sure how long you used it, but prior to that mine created lots of memories in every chat on its own via the queue. I lost a bunch of memories myself in similar fashion: a reboot of my machine caused Docker to lose volumes, etc. However, if you run it on Railway via the one-click button, that shouldn't occur, as it uses a true persistent volume. Losing them sucks for sure; I lost a lot when I ran backups to move to the cloud, but the backup script didn't account for FalkorDB's hard-coded 10k-row query limit, causing me to get only 10k of 15k memories/connections. I submitted a PR shortly after to fix that.

Honestly, it's meant to just run without switching/segmentation. The auto-hooked memory is something I've already pointed out to @jack-arturo needs to be brought back, but in general, once you get Claude correctly utilizing it, it's quite good at keeping things on track.
---
Thank you for this contribution @andrewleech! Project isolation is a valuable feature for multi-tenant deployments. Unfortunately, this PR has become significantly out of date (3+ months, many changes to main) and would require substantial rebasing to merge cleanly. I'm opening a new issue to track this feature request. If you're interested in updating this, feel free to rebase on main and reopen, or we can implement it fresh in a future release. Thanks for your contribution to the project!
Summary
Adds project isolation support to enable a single AutoMem backend to serve multiple projects with completely separate memory spaces.
Changes

- Add projectId to the AutoMemConfig interface for project scoping
- Send an X-Project-ID header from the HTTP client when a project is configured
- Add a --project-id flag to the setup command, with interactive prompting
- Support the AUTOMEM_PROJECT_ID environment variable
- Load the project ID from the environment in both the main MCP server and the recall command
Benefits

- A single AutoMem backend can serve multiple projects with isolated memory spaces
- Memories from unrelated worktrees and development activities stay separate
- Existing single-project deployments are unaffected when no project ID is set
Usage
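A hedged usage sketch: the AutoMemClient name appears in this PR, but its constructor shape here is an assumption, as is the example endpoint; the --project-id flag and AUTOMEM_PROJECT_ID variable are from the PR description.

```typescript
// Programmatic usage with an explicit project ID (constructor shape assumed).
import { AutoMemClient } from "./automem-client";

// Non-programmatic routes from the PR description:
//   --project-id flag on the setup command, or
//   AUTOMEM_PROJECT_ID in the environment.
const client = new AutoMemClient({
  endpoint: "http://localhost:8001", // example endpoint
  projectId: process.env.AUTOMEM_PROJECT_ID ?? "my-worktree",
});
```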
This works with the corresponding backend changes that add project_id scoping to Memory nodes and Qdrant collections.