# 🚀 Deep Research

## 🚀 Quickstart

Prerequisites: install the uv package manager:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Ensure you are in the deep_research directory:

```shell
cd examples/deep_research
```

Install packages:

```shell
uv sync
```

Set your API keys in your environment:

```shell
export ANTHROPIC_API_KEY=your_anthropic_api_key_here  # Required for Claude model
export GOOGLE_API_KEY=your_google_api_key_here        # Required for Gemini model (get one at https://ai.google.dev/gemini-api/docs)
export TAVILY_API_KEY=your_tavily_api_key_here        # Required for web search (https://www.tavily.com/, generous free tier)
export LANGSMITH_API_KEY=your_langsmith_api_key_here  # LangSmith API key (https://smith.langchain.com/settings, free to sign up)
```
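Before launching, you can sanity-check that all four keys are visible to your Python environment. This is a convenience sketch, not part of the example; the variable names match the exports above:

```python
import os

# Keys exported above and what each one is needed for.
REQUIRED_KEYS = {
    "ANTHROPIC_API_KEY": "Claude model",
    "GOOGLE_API_KEY": "Gemini model",
    "TAVILY_API_KEY": "web search",
    "LANGSMITH_API_KEY": "LangSmith tracing",
}

# Collect any keys that are unset or empty.
missing = [name for name in REQUIRED_KEYS if not os.environ.get(name)]
if missing:
    print("Missing keys:", ", ".join(missing))
else:
    print("All API keys are set.")
```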

## Usage Options

You can run this example in two ways:

### Option 1: Jupyter Notebook

Run the interactive notebook to step through the research agent:

```shell
uv run jupyter notebook research_agent.ipynb
```

### Option 2: LangGraph Server

Run a local LangGraph server with a web interface:

```shell
langgraph dev
```

The LangGraph server opens a new browser window with the Studio interface, where you can submit your search query.


You can also connect the LangGraph server to a UI specifically designed for deepagents:

```shell
git clone https://github.com/langchain-ai/deep-agents-ui.git
cd deep-agents-ui
yarn install
yarn dev
```

Then follow the instructions in the deep-agents-ui README to connect the UI to the running LangGraph server.

This provides a user-friendly chat interface and a visualization of the files held in agent state.


## 📚 Resources

### Custom Model

By default, deepagents uses `claude-sonnet-4-5-20250929`. You can customize this by passing any LangChain chat model object to `create_deep_agent`. See the Deep Agents package README for more details.

```python
from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent

# Using Claude
model = init_chat_model(model="anthropic:claude-sonnet-4-5-20250929", temperature=0.0)

# Using Gemini
from langchain_google_genai import ChatGoogleGenerativeAI
model = ChatGoogleGenerativeAI(model="gemini-3-pro-preview")

agent = create_deep_agent(
    model=model,
)
```

### Custom Instructions

The deep research agent uses custom instructions defined in research_agent/prompts.py that complement (rather than duplicate) the default middleware instructions. You can modify these in any way you want.

| Instruction Set | Purpose |
| --- | --- |
| `RESEARCH_WORKFLOW_INSTRUCTIONS` | Defines the 5-step research workflow: save request → plan with TODOs → delegate to sub-agents → synthesize → respond. Includes research-specific planning guidelines, such as batching similar tasks and scaling rules for different query types. |
| `SUBAGENT_DELEGATION_INSTRUCTIONS` | Provides concrete delegation strategies with examples: simple queries use 1 sub-agent, comparisons use 1 per element, multi-faceted research uses 1 per aspect. Sets limits on parallel execution (max 3 concurrent) and iteration rounds (max 3). |
| `RESEARCHER_INSTRUCTIONS` | Guides individual research sub-agents to conduct focused web searches. Includes hard limits (2-3 searches for simple queries, max 5 for complex), emphasizes using `think_tool` after each search for strategic reflection, and defines stopping criteria. |
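Since these instruction sets are plain prompt strings, modifying them is just string editing and composition. A minimal sketch with placeholder text (the real strings live in `research_agent/prompts.py`; the contents below are illustrative stand-ins, not the actual prompts):

```python
# Placeholder stand-ins for the real prompt strings in research_agent/prompts.py.
RESEARCH_WORKFLOW_INSTRUCTIONS = (
    "Save the request, plan with TODOs, delegate to sub-agents, "
    "synthesize the findings, then respond."
)
SUBAGENT_DELEGATION_INSTRUCTIONS = (
    "Use 1 sub-agent for simple queries and 1 per element for comparisons; "
    "run at most 3 sub-agents concurrently, for at most 3 rounds."
)
RESEARCHER_INSTRUCTIONS = (
    "Use 2-3 searches for simple queries (max 5 for complex ones) and "
    "call think_tool after each search to reflect before continuing."
)

# The modules compose into a single system prompt for the agent.
SYSTEM_PROMPT = "\n\n".join([
    RESEARCH_WORKFLOW_INSTRUCTIONS,
    SUBAGENT_DELEGATION_INSTRUCTIONS,
    RESEARCHER_INSTRUCTIONS,
])
```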

### Custom Tools

The deep research agent adds the following custom tools beyond the built-in deepagents tools. You can also bring your own tools, including via MCP servers. See the Deep Agents package README for more details.

| Tool Name | Description |
| --- | --- |
| `tavily_search` | Web search tool that uses Tavily purely as a URL discovery engine. It searches via the Tavily API to find relevant URLs, fetches full webpage content over HTTP with proper User-Agent headers (avoiding 403 errors), converts the HTML to markdown, and returns the complete content without summarization, preserving all information for the agent's analysis. Works with both Claude and Gemini models. |
| `think_tool` | Strategic reflection mechanism that helps the agent pause and assess progress between searches, analyze findings, identify gaps, and plan next steps. |
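The fetch-and-convert half of `tavily_search` can be sketched as follows. This is a simplified standard-library version under stated assumptions: the User-Agent string is an assumed browser-like value (the real tool sets something similar to avoid 403s), and a crude tag-stripping fallback stands in for the real HTML-to-markdown converter:

```python
import re
import urllib.request

# Assumed browser-like User-Agent; the actual tool's header string may differ.
USER_AGENT = "Mozilla/5.0 (compatible; deep-research-example)"

def fetch_page(url: str, timeout: float = 15.0) -> str:
    """Fetch the full page HTML with an explicit User-Agent header."""
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

def html_to_text(html: str) -> str:
    """Crude HTML-to-text fallback; the real tool converts to markdown instead."""
    # Drop script/style bodies, strip remaining tags, collapse whitespace.
    html = re.sub(r"(?s)<(script|style).*?</\1>", "", html)
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()
```

Returning the full converted page, rather than a search-engine snippet, is what lets the agent analyze sources in detail without losing information to premature summarization.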