
@sillydong sillydong commented Dec 1, 2025

Description

PR #2369 proposed a way to get the workspace from a request header, but it focused on pipeline status and the in-memory storage and did not cover the routers, so all requests sent to the server still used the workspace hardcoded in the environment. This PR fixes that.

  • A dict of LightRAG instances and a dict of DocumentManager instances, keyed by workspace, are kept in memory.
  • All routes (document/graph/ollama/query) support reading the workspace from the request header.
  • The WebUI supports switching workspaces: the user enters a workspace name and applies it, making it easy to change the workspace from the web page.

Related Issues

Issue #1289
Issue #2373
Discussion #1016

Changes Made


  • Create create_doc_manager and create_rag in lightrag_server and pass them to the routers.
  • Read the workspace from the request header and use create_doc_manager and create_rag to create instances for that workspace.
  • Add an input and a button to the WebUI so the user can change the workspace.
  • Send the workspace with each request in the LIGHTRAG-WORKSPACE header.
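The factory pattern above can be sketched roughly as follows. This is a minimal, framework-free sketch: in the real server the headers come from FastAPI's `Request` object, and the cached placeholder dict stands in for a fully initialized LightRAG instance.

```python
# Sketch of the per-workspace factory: names (create_rag, rag_cache,
# LIGHTRAG-WORKSPACE) follow the PR description; the cached dict is a
# placeholder for a real LightRAG instance.
DEFAULT_WORKSPACE = "default"
rag_cache: dict[str, object] = {}

def workspace_from_headers(headers: dict[str, str]) -> str:
    # HTTP header names are case-insensitive, so normalize before lookup,
    # and fall back to the default workspace when the header is absent.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("lightrag-workspace") or DEFAULT_WORKSPACE

def create_rag(headers: dict[str, str]):
    workspace = workspace_from_headers(headers)
    if workspace not in rag_cache:
        # The real code would construct and initialize LightRAG here.
        rag_cache[workspace] = {"workspace": workspace}
    return rag_cache[workspace]
```

Every route then resolves its instance through `create_rag`, so two requests with the same header value share one instance.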

Checklist

  • Changes tested locally
  • Code reviewed
  • Documentation updated (if necessary)
  • Unit tests added (if applicable)

Additional Notes



Note

Route all API requests to per-workspace LightRAG/DocumentManager instances via LIGHTRAG-WORKSPACE header and add WebUI control to switch workspaces.

  • Backend (FastAPI):
    • Implement per-workspace caching with create_rag(request) and create_doc_manager(request); derive workspace from LIGHTRAG-WORKSPACE header (fallback to default).
    • Update all routers (documents, graph, query, ollama) to accept Request and fetch workspace-scoped instances; change include_router usage to pass factory functions.
    • Adjust lifespan to pre-create default instances and finalize all cached RAG storages on shutdown; expose current workspace in /health; set default via set_default_workspace.
  • WebUI:
    • Add workspace input and update action in header; persist to localStorage and reload.
    • Send LIGHTRAG-WORKSPACE with all requests (axios interceptor and streaming fetch).
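On the wire, any HTTP client can target a workspace the same way the WebUI does, by attaching the LIGHTRAG-WORKSPACE header to each request. A stdlib sketch follows; the base URL, port, and /query endpoint path are assumptions for illustration.

```python
# Hypothetical client-side example of sending the workspace header.
import json
import urllib.request

def query_workspace(base_url: str, workspace: str, question: str) -> urllib.request.Request:
    # Build a POST request carrying the workspace in the LIGHTRAG-WORKSPACE
    # header, mirroring what the WebUI's axios interceptor does.
    return urllib.request.Request(
        url=f"{base_url}/query",
        data=json.dumps({"query": question}).encode(),
        headers={
            "Content-Type": "application/json",
            "LIGHTRAG-WORKSPACE": workspace,
        },
        method="POST",
    )
```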

Written by Cursor Bugbot for commit 4d2d781. This will update automatically on new commits.

@sillydong (Author)

@codex review

@chatgpt-codex-connector

To use Codex here, create a Codex account and connect to github.

@sillydong (Author)

Hi @danielaskdd, I proposed a PR based on your change to add workspace support for all APIs. Please help review.

@sillydong sillydong changed the title finish implement workspace isolation in lightrag_server feat: Finish implement workspace isolation in lightrag_server Dec 2, 2025
@sillydong sillydong marked this pull request as ready for review December 3, 2025 04:50
@LarFii (Collaborator) commented Dec 11, 2025

@cursor review

```python
        return rag
    except Exception as e:
        logger.error(f"Failed to initialize LightRAG: {e}")
        raise
```

Bug: Race condition when creating cached LightRAG instances

The create_rag and create_doc_manager functions check and modify rag_cache and doc_manager_cache without any synchronization. When multiple concurrent requests arrive for the same workspace, each can pass the cache check (if workspace in rag_cache) before any has finished initializing and storing the instance. This causes multiple LightRAG instances to be created for the same workspace, with the expensive initialize_storages() and check_and_migrate_data() running in parallel. The last one to finish overwrites others in the cache, potentially leaving orphaned resources. An asyncio.Lock per-cache (or per-workspace keyed lock) is needed to make the check-and-create pattern atomic.
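The suggested fix can be sketched with a per-workspace asyncio.Lock that makes the check-and-create atomic. The names follow the review comment; the placeholder dict stands in for the expensive LightRAG initialization (initialize_storages, migration).

```python
# Sketch of a per-workspace keyed lock: concurrent calls for the same
# workspace serialize on its lock, so initialization runs at most once.
import asyncio

rag_cache: dict[str, object] = {}
_workspace_locks: dict[str, asyncio.Lock] = {}
_locks_guard = asyncio.Lock()  # protects the lock dict itself

async def _get_workspace_lock(workspace: str) -> asyncio.Lock:
    async with _locks_guard:
        return _workspace_locks.setdefault(workspace, asyncio.Lock())

async def create_rag(workspace: str):
    lock = await _get_workspace_lock(workspace)
    async with lock:
        if workspace not in rag_cache:
            # Stand-in for the expensive async init; with the lock held,
            # no second task can pass the cache check concurrently.
            await asyncio.sleep(0)
            rag_cache[workspace] = {"workspace": workspace}
    return rag_cache[workspace]
```

The outer guard keeps lock creation itself race-free while still allowing different workspaces to initialize in parallel.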

Additional Locations (1)


@francis2tm

Hey @danielaskdd , is there something I can do to help merge this PR?
