24 changes: 15 additions & 9 deletions README.md
@@ -2,9 +2,9 @@

**LangGraph-UP Monorepo** showcases how to build production-ready LangGraph agents using the latest **LangChain & LangGraph** V1 ecosystem, organized in a clean monorepo structure with shared libraries and multiple agent applications.

[![Version](https://img.shields.io/badge/version-v0.2.0-blue.svg)](https://github.com/webup/langgraph-up-monorepo/releases/tag/v0.2.0)
[![LangChain](https://img.shields.io/badge/LangChain-v1alpha-blue.svg)](https://github.com/langchain-ai/langchain)
[![LangGraph](https://img.shields.io/badge/LangGraph-v1alpha-blue.svg)](https://github.com/langchain-ai/langgraph)
[![Version](https://img.shields.io/badge/version-v0.3.0-blue.svg)](https://github.com/webup/langgraph-up-monorepo/releases/tag/v0.3.0)
[![LangChain](https://img.shields.io/badge/LangChain-v1.0-blue.svg)](https://github.com/langchain-ai/langchain)
[![LangGraph](https://img.shields.io/badge/LangGraph-v1.0-blue.svg)](https://github.com/langchain-ai/langgraph)
[![License](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
[![PyPI](https://img.shields.io/badge/PyPI-langgraph--up--devkits-blue.svg)](https://test.pypi.org/project/langgraph-up-devkits/)
[![Twitter](https://img.shields.io/twitter/follow/zhanghaili0610?style=social)](https://twitter.com/zhanghaili0610)
@@ -13,7 +13,8 @@

- 🌐 **Universal Model Loading** - OpenRouter, Qwen, QwQ, SiliconFlow with automatic registration
- πŸ€– **Multi-Agent Orchestration** - Supervisor & deep research patterns with specialized sub-agents
- πŸ›  **Custom Middleware** - Model switching, file masking, summarization, and state management
- πŸ›  **LangChain v1.0 Middleware** - Model switching, file masking, and summarization using the v1.0 middleware pattern
- πŸ”¬ **Deep Research Agent** - Advanced research workflow with deepagents integration
- πŸ§ͺ **Developer Experience** - Hot reload, comprehensive testing, strict linting, PyPI publishing
- πŸš€ **Deployment Ready** - LangGraph Cloud configurations included
- 🌍 **Global Ready** - Region-based provider configuration (PRC/International)
@@ -154,17 +155,17 @@ math_to_research = create_handoff_tool("research_expert")
research_to_math = create_handoff_tool("math_expert")
```

#### πŸ”§ Custom Middleware (LangChain v1)
#### πŸ”§ Custom Middleware (LangChain v1.0)

Built-in middleware for dynamic model switching, state management, and behavior modification:
Built-in middleware for dynamic model switching, state management, and behavior modification using the **LangChain v1.0 middleware pattern**:

```python
from langchain.agents import create_agent
from langgraph_up_devkits import (
from langgraph_up_devkits.middleware import (
ModelProviderMiddleware,
FileSystemMaskMiddleware,
load_chat_model
)
from langgraph_up_devkits import load_chat_model

# Model provider middleware for automatic switching
model_middleware = ModelProviderMiddleware()
@@ -183,11 +184,16 @@ context = {"model": "siliconflow:Qwen/Qwen3-8B"}
result = await agent.ainvoke(messages, context=context)
```

**Available Middleware:**
**Available Middleware (v1.0 Compatible):**
- `ModelProviderMiddleware` - Dynamic model switching based on context
- `FileSystemMaskMiddleware` - Masks virtual file systems from the LLM to save tokens
- `SummarizationMiddleware` - Automatic message summarization for long conversations

**Key Changes in v1.0:**
- Migrated to the LangChain v1.0 middleware pattern with `before_model()` and `after_model()` hooks (see the sketch below)
- Compatible with `langchain.agents.create_agent` middleware system
- Improved state management and model switching reliability
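
To make the hook pattern concrete, here is a minimal sketch of a custom middleware, assuming an `AgentMiddleware` base class whose `before_model()` / `after_model()` hooks receive the agent state and may return an optional state update. The import path, hook signatures, and class name are illustrative, not taken from this repository:

```python
# Minimal sketch only: import path, base class, and hook signatures are assumed.
from langchain.agents.middleware import AgentMiddleware  # assumed location


class CallLoggerMiddleware(AgentMiddleware):
    """Hypothetical middleware that observes each model call."""

    def before_model(self, state):
        # Runs before every model call; a real middleware could swap the model,
        # rewrite the prompt, or trim history here.
        print(f"[middleware] sending {len(state['messages'])} messages to the model")
        return None  # assumed convention: None means "no state update"

    def after_model(self, state):
        # Runs after the model responds; inspect or post-process the output here.
        print(f"[middleware] last message type: {type(state['messages'][-1]).__name__}")
        return None
```

Under these assumptions, it would be registered alongside the built-in middleware, e.g. `create_agent(..., middleware=[model_middleware, CallLoggerMiddleware()])`.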

For detailed documentation on additional features like middleware, tools, and utilities, see:

- **Framework Documentation**: [`libs/langgraph-up-devkits/README.md`](libs/langgraph-up-devkits/README.md)

2 changes: 1 addition & 1 deletion apps/sample-deep-agent/pyproject.toml
@@ -9,7 +9,7 @@ readme = "README.md"
license = { text = "MIT" }
requires-python = ">=3.11,<4.0"
dependencies = [
"deepagents>=0.0.11",
"deepagents>=0.1.1",
"langgraph-up-devkits",
]

2 changes: 1 addition & 1 deletion apps/sample-deep-agent/src/sample_deep_agent/context.py
@@ -10,7 +10,7 @@ class DeepAgentContext(BaseModel):
"""Context configuration for deep agent runtime settings."""

# Model configuration
model_name: str = Field(default="siliconflow:zai-org/GLM-4.5", description="Default model name")
model_name: str = Field(default="siliconflow:deepseek-ai/DeepSeek-V3.2-Exp", description="Default model name")

# Graph configuration
recursion_limit: int = Field(default=1000, description="Recursion limit for agent execution")

12 changes: 6 additions & 6 deletions apps/sample-deep-agent/src/sample_deep_agent/graph.py
@@ -2,7 +2,7 @@

from typing import Any

from deepagents import async_create_deep_agent # type: ignore[import-untyped]
from deepagents import create_deep_agent # type: ignore[import-untyped]
from langchain_core.runnables import RunnableConfig
from langgraph_up_devkits import load_chat_model
from langgraph_up_devkits.tools import deep_web_search, think_tool
@@ -35,12 +35,12 @@ def make_graph(config: RunnableConfig | None = None) -> Any:
# Load model based on context configuration
model = load_chat_model(context.model_name)

# Create deep agent with research capabilities (remove research_sub_agent from subagents list)
agent = async_create_deep_agent(
tools=[deep_web_search, think_tool],
instructions=get_research_instructions(),
subagents=RESEARCH_AGENTS, # Research agent in subagents list
# Create deep agent with research capabilities
agent = create_deep_agent(
model=model,
tools=[deep_web_search, think_tool],
system_prompt=get_research_instructions(),
subagents=RESEARCH_AGENTS,
context_schema=DeepAgentContext,
).with_config({"recursion_limit": context.recursion_limit})
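
For orientation, a sketch of how this factory might be exercised locally, assuming the graph accepts the standard messages-based input; the payload shape and entry point below are illustrative:

```python
# Illustrative only: the input payload shape is assumed, not taken from this repo.
import asyncio

from sample_deep_agent.graph import make_graph


async def main() -> None:
    agent = make_graph()  # defaults come from DeepAgentContext, including model_name
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Research recent LangGraph releases."}]}
    )
    print(result["messages"][-1].content)


asyncio.run(main())
```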

4 changes: 2 additions & 2 deletions apps/sample-deep-agent/src/sample_deep_agent/subagents.py
@@ -17,7 +17,7 @@
"into the necessary components, and then call multiple research agents in parallel, "
"one for each sub question."
),
"prompt": SUB_RESEARCH_PROMPT,
"system_prompt": SUB_RESEARCH_PROMPT,
"middleware": [filesystem_mask],
}

@@ -27,7 +27,7 @@
"Used to critique the final report. Give this agent some information about "
"how you want it to critique the report."
),
"prompt": SUB_CRITIQUE_PROMPT,
"system_prompt": SUB_CRITIQUE_PROMPT,
"middleware": [filesystem_mask],
}
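
Taken together, each sub-agent is a plain mapping; a minimal sketch of the shape these definitions now follow (the values below are illustrative):

```python
# Illustrative sub-agent definition using the renamed `system_prompt` key.
example_sub_agent = {
    "name": "example-agent",                      # unique identifier
    "description": "What the coordinator should delegate to this agent.",
    "system_prompt": "You are a focused specialist...",  # previously the `prompt` key
    "middleware": [filesystem_mask],              # optional per-sub-agent middleware
}
```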

8 changes: 6 additions & 2 deletions apps/sample-deep-agent/tests/conftest.py
@@ -5,9 +5,13 @@
import pytest


@pytest.fixture(autouse=True)
@pytest.fixture
def mock_openai_key(monkeypatch):
"""Mock API keys to prevent API key errors in unit tests."""
"""Mock API keys to prevent API key errors in unit tests.

Note: This fixture is NOT autouse - it must be explicitly requested by unit tests.
Integration tests need real API keys from the environment.
"""
monkeypatch.setenv("OPENAI_API_KEY", "sk-test-fake-key-for-unit-tests")
monkeypatch.setenv("OPENROUTER_API_KEY", "sk-test-fake-key-for-unit-tests")
monkeypatch.setenv("SILICONFLOW_API_KEY", "sk-test-fake-key-for-unit-tests")

52 changes: 26 additions & 26 deletions apps/sample-deep-agent/tests/unit/test_graph.py
@@ -10,7 +10,7 @@
class TestGraphCreation:
"""Test graph creation and configuration."""

@patch('sample_deep_agent.graph.async_create_deep_agent')
@patch('sample_deep_agent.graph.create_deep_agent')
@patch('sample_deep_agent.graph.load_chat_model')
def test_make_graph_with_default_config(self, mock_load_model, mock_create_deep_agent):
"""Test graph creation with default configuration."""
@@ -25,7 +25,7 @@ def test_make_graph_with_default_config(self, mock_load_model, mock_create_deep_

result = make_graph()

mock_load_model.assert_called_once_with("siliconflow:zai-org/GLM-4.5")
mock_load_model.assert_called_once_with("siliconflow:deepseek-ai/DeepSeek-V3.2-Exp")
mock_create_deep_agent.assert_called_once()

call_args = mock_create_deep_agent.call_args
@@ -40,7 +40,7 @@ def test_make_graph_with_default_config(self, mock_load_model, mock_create_deep_

assert result == mock_agent_with_config

@patch('sample_deep_agent.graph.async_create_deep_agent')
@patch('sample_deep_agent.graph.create_deep_agent')
@patch('sample_deep_agent.graph.load_chat_model')
def test_make_graph_with_custom_config(self, mock_load_model, mock_create_deep_agent):
"""Test graph creation with custom configuration."""
@@ -67,14 +67,14 @@ def test_make_graph_with_custom_config(self, mock_load_model, mock_create_deep_a

call_args = mock_create_deep_agent.call_args

# Verify instructions are present (runtime context will be used during actual execution)
instructions = call_args[1]['instructions']
assert "expert research coordinator" in instructions.lower()
assert "TODOs" in instructions
# Verify system_prompt is present (runtime context will be used during actual execution)
system_prompt = call_args[1]['system_prompt']
assert "expert research coordinator" in system_prompt.lower()
assert "TODOs" in system_prompt

assert result == mock_agent_with_config

@patch('sample_deep_agent.graph.async_create_deep_agent')
@patch('sample_deep_agent.graph.create_deep_agent')
@patch('sample_deep_agent.graph.load_chat_model')
def test_make_graph_with_none_config(self, mock_load_model, mock_create_deep_agent):
"""Test graph creation with None config."""
@@ -90,12 +90,12 @@ def test_make_graph_with_none_config(self, mock_load_model, mock_create_deep_age
result = make_graph(None)

# Should use default configuration
mock_load_model.assert_called_once_with("siliconflow:zai-org/GLM-4.5")
mock_load_model.assert_called_once_with("siliconflow:deepseek-ai/DeepSeek-V3.2-Exp")
mock_create_deep_agent.assert_called_once()

assert result == mock_agent_with_config

@patch('sample_deep_agent.graph.async_create_deep_agent')
@patch('sample_deep_agent.graph.create_deep_agent')
@patch('sample_deep_agent.graph.load_chat_model')
def test_tools_are_included(self, mock_load_model, mock_create_deep_agent):
"""Test that required tools are included."""
@@ -156,36 +156,36 @@ class TestSubAgentConfiguration:
def test_subagent_configuration_valid(self):
"""Test that sub-agent configuration is valid for deepagents."""
# Verify all required fields are present
required_fields = ["name", "description", "prompt"]
required_fields = ["name", "description", "system_prompt"]
for field in required_fields:
assert field in research_sub_agent

# Research agent no longer has explicit tools - they're passed from main agent
# Check that the prompt contains TODO constraints instead
# Check that the system_prompt contains TODO constraints instead

# Verify name and description
assert research_sub_agent["name"] == "research-agent"
assert "research" in research_sub_agent["description"].lower()

# Verify prompt contains expected elements
prompt = research_sub_agent["prompt"]
assert "researcher" in prompt.lower()
assert "TODO CONSTRAINTS" in prompt
assert "GLOBAL TODO LIMIT" in prompt
assert str(3) in prompt # MAX_TODOS value
# Verify system_prompt contains expected elements
system_prompt = research_sub_agent["system_prompt"]
assert "researcher" in system_prompt.lower()
assert "TODO CONSTRAINTS" in system_prompt
assert "GLOBAL TODO LIMIT" in system_prompt
assert str(3) in system_prompt # MAX_TODOS value

def test_critique_agent_configuration(self):
"""Test critique agent configuration."""
assert critique_sub_agent["name"] == "critique-agent"
assert "critique" in critique_sub_agent["description"].lower()
assert "prompt" in critique_sub_agent
assert "system_prompt" in critique_sub_agent

# Critique agent no longer has explicit tools - they're passed from main agent
# Check that the prompt contains TODO constraints
prompt = critique_sub_agent["prompt"]
assert "TODO CONSTRAINTS" in prompt
assert "GLOBAL TODO LIMIT" in prompt
assert str(3) in prompt # MAX_TODOS value
# Check that the system_prompt contains TODO constraints
system_prompt = critique_sub_agent["system_prompt"]
assert "TODO CONSTRAINTS" in system_prompt
assert "GLOBAL TODO LIMIT" in system_prompt
assert str(3) in system_prompt # MAX_TODOS value


class TestAppExport:
@@ -228,9 +228,9 @@ def test_tool_integration_in_graph(self):
assert hasattr(deep_web_search, "invoke") or callable(deep_web_search)
assert hasattr(think_tool, "invoke") or callable(think_tool)

@patch('sample_deep_agent.graph.async_create_deep_agent')
@patch('sample_deep_agent.graph.create_deep_agent')
def test_tools_passed_to_deep_agent(self, mock_create_deep_agent):
"""Test that tools are properly passed to async_create_deep_agent."""
"""Test that tools are properly passed to create_deep_agent."""
from sample_deep_agent.graph import make_graph

mock_agent = Mock()