
Commit d88bece

Introduce LangCacheSemanticCache as a caching option (#90)

1 parent aef1f3f commit d88bece

File tree

11 files changed: +2168, -1296 lines


CLAUDE.md

Lines changed: 42 additions & 0 deletions
@@ -10,6 +10,48 @@ The project is structured as a monorepo with the main library at `libs/redis/`.

```bash
cd libs/redis
```

## Virtual Environments

Poetry manages dependencies; some users also use it to automatically manage Python virtual environments.

In this repository you may encounter a project virtual environment already created locally. Common locations:

- The repository root
- `libs/redis/env/`

Common directory names:

- `.venv`
- `env`
- `venv`

Recommended workflow:

1) If a virtual environment exists in the repo, activate it first, then run Python or `make` commands:

```bash
source .venv/bin/activate  # or: source libs/redis/env/bin/activate
make test                  # or any other Make target
```

2) If `poetry` is available on your PATH without activating a venv, you can try to use it directly:

```bash
# From libs/redis/
make test
# or explicitly
poetry run pytest tests/unit_tests/test_specific.py
```

3) If you run `poetry` or `make` and see `poetry: command not found`, Poetry is not on your PATH. Try activating the project's virtual environment to see if it already contains Poetry (e.g., `source libs/redis/env/bin/activate`). If it doesn't, ask the user whether you should install it.

Notes:

- Makefile targets call `poetry run ...`. When a venv is activated and contains Poetry, `make` will use that Poetry and run inside that venv. When Poetry is on PATH globally, it will use its managed venv and you do not need to activate one manually.
- Quick checks:
  - `which poetry`
  - `TEST_FILE=tests/unit_tests/test_specific.py make test`
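The fallback order described above (existing project venv first, then Poetry on PATH, then stop and ask) can be sketched as a small shell helper. `find_venv` is a hypothetical name; the locations it checks are the ones listed above.

```shell
# Sketch of the workflow above: prefer an existing project venv, then
# fall back to a globally installed Poetry, else report and ask.
find_venv() {
  for dir in .venv env venv libs/redis/env; do
    if [ -f "$dir/bin/activate" ]; then
      printf '%s\n' "$dir"
      return 0
    fi
  done
  return 1
}

if venv_dir=$(find_venv); then
  # Step 1: activate the project venv before running make targets.
  . "$venv_dir/bin/activate"
  echo "activated $venv_dir"
elif command -v poetry >/dev/null 2>&1; then
  # Step 2: Poetry on PATH works without activating anything.
  echo "using poetry on PATH: $(command -v poetry)"
else
  # Step 3: neither found; do not install without asking the user.
  echo "poetry: command not found and no project venv; ask before installing" >&2
fi
```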
### Testing
- `make test` - Run unit tests
- `make integration_tests` - Run integration tests (requires OPENAI_API_KEY)

libs/redis/Makefile

Lines changed: 1 addition & 0 deletions
@@ -32,6 +32,7 @@ lint lint_diff lint_package lint_tests:
 	poetry run ruff format $(PYTHON_FILES) --diff
 	poetry run ruff check $(PYTHON_FILES) --select I $(PYTHON_FILES)
 	mkdir -p $(MYPY_CACHE); poetry run mypy $(PYTHON_FILES) --cache-dir $(MYPY_CACHE)
+	poetry check

 format format_diff:
 	poetry run ruff format $(PYTHON_FILES)

libs/redis/README.md

Lines changed: 15 additions & 3 deletions
### 2. Cache

The `RedisCache`, `RedisSemanticCache`, and `LangCacheSemanticCache` classes provide caching mechanisms for LLM calls.

#### Usage

```python
from langchain_redis import RedisCache, RedisSemanticCache, LangCacheSemanticCache
from langchain_core.language_models import LLM
from langchain_core.embeddings import Embeddings

# ... (unchanged setup elided) ...

semantic_cache = RedisSemanticCache(
    # ...
    distance_threshold=0.1
)

# LangChain cache - manages embeddings for you
langchain_cache = LangCacheSemanticCache(
    cache_id="your-cache-id",
    api_key="your-api-key",
    distance_threshold=0.1
)

# Using cache with an LLM
llm = LLM(cache=cache)  # or LLM(cache=semantic_cache) or LLM(cache=langchain_cache)

# Async cache operations
await cache.aupdate("prompt", "llm_string", [Generation(text="cached_response")])
cached_result = await cache.alookup("prompt", "llm_string")
```
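The calls above follow LangChain's cache contract: entries are keyed by the prompt plus a serialized `llm_string`, `update`/`aupdate` stores generations, and `lookup`/`alookup` returns them or `None` on a miss. A minimal in-memory sketch of that contract (an illustration only, not the langchain_redis implementation):

```python
# Minimal in-memory sketch of the LLM-cache contract used above:
# entries are keyed by (prompt, llm_string); `update` stores generations,
# `lookup` returns them, or None on a miss. Illustration only; not the
# RedisCache/LangCacheSemanticCache implementation.
class InMemoryLLMCache:
    def __init__(self):
        self._store = {}

    def update(self, prompt: str, llm_string: str, return_val):
        self._store[(prompt, llm_string)] = return_val

    def lookup(self, prompt: str, llm_string: str):
        # None signals a cache miss
        return self._store.get((prompt, llm_string))


cache = InMemoryLLMCache()
assert cache.lookup("prompt", "llm_string") is None   # miss before update
cache.update("prompt", "llm_string", ["cached_response"])
print(cache.lookup("prompt", "llm_string"))  # ['cached_response']
```

The semantic variants relax the exact-match key: a lookup can hit when a new prompt is merely similar (within `distance_threshold`) to a cached one.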
- Semantic caching for similarity-based retrieval
- Asynchronous cache operations

#### What is Redis LangCache?
- LangCache is a fully managed, cloud-based service that provides a semantic cache for LLM applications.
- It manages embeddings and vector search for you, allowing you to focus on your application logic.
- See [our docs](https://redis.io/docs/latest/develop/ai/langcache/) to learn more, or [try LangCache on Redis Cloud today](https://redis.io/docs/latest/operate/rc/langcache/#get-started-with-langcache-on-redis-cloud).

### 3. Chat History

The `RedisChatMessageHistory` class provides Redis-based storage for chat message history with efficient search capabilities.
