Grok Search MCP is an MCP server built on FastMCP, featuring a dual-engine architecture: Grok handles AI-driven intelligent search, while Tavily handles high-fidelity web content extraction and site mapping. Together they provide complete real-time web access for LLM clients such as Claude Code and Cherry Studio.
```
Claude --MCP--> Grok Search Server
  ├─ web_search ---> Grok API (AI Search)
  ├─ web_fetch ---> Tavily Extract (Content Extraction)
  └─ web_map ---> Tavily Map (Site Mapping)
```
- Dual engine: Grok search plus Tavily extraction/mapping, with complementary strengths
- OpenAI-compatible interface, supports any Grok mirror endpoint
- Automatic time injection (detects time-related queries, injects local time context)
- One-click disabling of Claude Code's built-in WebSearch/WebFetch, forcing all searches through this tool
- Smart retry (Retry-After header parsing + exponential backoff)
- Parent process monitoring (auto-detects parent process exit on Windows, prevents zombie processes)
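The smart-retry behavior above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: honor a server-supplied `Retry-After` value when present, otherwise fall back to exponential backoff capped at `GROK_RETRY_MAX_WAIT` seconds (function and variable names here are hypothetical):

```python
import time

def retry_wait(attempt, retry_after=None, multiplier=1, max_wait=10):
    """Seconds to sleep before retry `attempt` (1-based).

    Prefers the server's Retry-After value; otherwise uses
    exponential backoff: multiplier * 2**(attempt - 1), capped at max_wait.
    """
    if retry_after is not None:
        return min(float(retry_after), max_wait)
    return min(multiplier * 2 ** (attempt - 1), max_wait)

def call_with_retry(do_request, max_attempts=3):
    """Retry a request on HTTP 429, sleeping between attempts."""
    for attempt in range(1, max_attempts + 1):
        resp = do_request()
        if resp.get("status") != 429:
            return resp
        if attempt == max_attempts:
            break
        time.sleep(retry_wait(attempt, resp.get("retry_after")))
    return resp
```

With the default multiplier of 1, waits grow 1 s, 2 s, 4 s, ... until the cap.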
Using Cherry Studio with this MCP configured, here is how Claude Opus 4.6 leverages this project for external knowledge retrieval, reducing its hallucination rate.
As shown above, for a fair experiment, we enabled Claude's built-in search tools, yet Opus 4.6 still relied on its internal knowledge without consulting FastAPI's official documentation for the latest examples.
As shown above, with grok-search MCP enabled under the same experimental conditions, Opus 4.6 proactively made multiple search calls to retrieve official documentation, producing more reliable answers.
- Python 3.10+
- uv (recommended Python package manager)
- Claude Code
Install uv:

```bash
# Linux/macOS
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows PowerShell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```

Windows users are strongly advised to run this project in WSL.
If you have previously installed this project, remove the old MCP first:
```bash
claude mcp remove grok-search
```
Replace the environment variables in the following command with your own values. The Grok endpoint must be OpenAI-compatible. Tavily is optional, but without it `web_fetch` and `web_map` will be unavailable.
GuDa users only need to set GUDA_API_KEY to access all services — API URLs are automatically derived:
```bash
claude mcp add-json grok-search --scope user '{
  "type": "stdio",
  "command": "uvx",
  "args": [
    "--from",
    "git+https://github.com/GuDaStudio/GrokSearch@grok-with-tavily",
    "grok-search"
  ],
  "env": {
    "GUDA_API_KEY": "your-guda-api-key"
  }
}'
```

To use your own API endpoints, configure each service separately:
```bash
claude mcp add-json grok-search --scope user '{
  "type": "stdio",
  "command": "uvx",
  "args": [
    "--from",
    "git+https://github.com/GuDaStudio/GrokSearch@grok-with-tavily",
    "grok-search"
  ],
  "env": {
    "GROK_API_URL": "https://your-api-endpoint.com/v1",
    "GROK_API_KEY": "your-grok-api-key",
    "TAVILY_API_KEY": "tvly-your-tavily-key",
    "TAVILY_API_URL": "https://api.tavily.com"
  }
}'
```

You can also configure additional environment variables in the `env` field:
| Variable | Required | Default | Description |
|---|---|---|---|
| `GUDA_API_KEY` | No | - | GuDa API key (auto-derives all service URLs and keys when set) |
| `GUDA_BASE_URL` | No | `https://code.guda.studio` | GuDa service base URL |
| `GROK_API_URL` | No | `{GUDA_BASE_URL}/grok/v1` | Grok API endpoint (OpenAI-compatible), overrides the GuDa-derived value |
| `GROK_API_KEY` | No | `{GUDA_API_KEY}` | Grok API key, overrides the GuDa-derived value |
| `GROK_MODEL` | No | `grok-4.20-beta` | Default model (takes precedence over `~/.config/grok-search/config.json` when set) |
| `TAVILY_API_KEY` | No | `{GUDA_API_KEY}` | Tavily API key (for `web_fetch` / `web_map`) |
| `TAVILY_API_URL` | No | `{GUDA_BASE_URL}/tavily` | Tavily API endpoint |
| `TAVILY_ENABLED` | No | `true` | Enable Tavily |
| `FIRECRAWL_API_KEY` | No | `{GUDA_API_KEY}` | Firecrawl API key (fallback when Tavily fails) |
| `FIRECRAWL_API_URL` | No | `{GUDA_BASE_URL}/firecrawl` | Firecrawl API endpoint |
| `GROK_DEBUG` | No | `false` | Debug mode |
| `GROK_LOG_LEVEL` | No | `INFO` | Log level |
| `GROK_LOG_DIR` | No | `logs` | Log directory |
| `GROK_RETRY_MAX_ATTEMPTS` | No | `3` | Max retry attempts |
| `GROK_RETRY_MULTIPLIER` | No | `1` | Retry backoff multiplier |
| `GROK_RETRY_MAX_WAIT` | No | `10` | Max retry wait in seconds |
Note: When `GUDA_API_KEY` is set, all `GROK_API_URL` / `GROK_API_KEY` / `TAVILY_*` / `FIRECRAWL_*` variables become optional, as they are auto-derived from `GUDA_BASE_URL`. Explicitly set variables take precedence.
```bash
claude mcp list
```

After confirming a successful connection, we highly recommend typing the following in a Claude conversation:
Call grok-search toggle_builtin_tools to disable Claude Code's built-in WebSearch and WebFetch tools
This will automatically modify the project-level .claude/settings.json permissions.deny, disabling Claude Code's built-in WebSearch and WebFetch, forcing Claude Code to use this project for searches!
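For reference, the resulting project-level settings file looks roughly like this (a sketch only; the exact contents written by `toggle_builtin_tools` may differ):

```json
{
  "permissions": {
    "deny": [
      "WebSearch",
      "WebFetch"
    ]
  }
}
```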
This project provides eight MCP tools (click to expand)
Executes AI-driven web search via Grok API. By default it returns only Grok's answer and a session_id for retrieving sources later.
`web_search` does not expand sources in the response; it only returns `sources_count`. Sources are cached server-side by `session_id` and can be fetched with `get_sources`.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `query` | string | Yes | - | Search query |
| `platform` | string | No | `""` | Focus platform (e.g., `"Twitter"`, `"GitHub, Reddit"`) |
| `model` | string | No | `null` | Per-request Grok model ID |
| `extra_sources` | int | No | `0` | Extra sources via Tavily/Firecrawl (0 disables) |
Automatically detects time-related keywords in queries (e.g., "latest", "today", "recent"), injecting local time context to improve accuracy for time-sensitive searches.
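The time-injection heuristic can be sketched like so. This is an illustrative approximation: the actual keyword list, function name, and prompt wording are assumptions, not the server's real implementation:

```python
from datetime import datetime

# Hypothetical keyword list; the server's actual triggers may differ.
TIME_KEYWORDS = ("latest", "today", "recent", "now", "current", "this week")

def inject_time_context(query: str) -> str:
    """Prepend the local time to time-sensitive queries so the search
    engine can resolve words like "today" correctly."""
    if any(kw in query.lower() for kw in TIME_KEYWORDS):
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M %Z").strip()
        return f"[Current local time: {stamp}] {query}"
    return query
```

Queries without time-related keywords pass through unchanged.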
Return value (structured dict):
- `session_id`: search session ID
- `content`: answer only (sources removed)
- `sources_count`: cached sources count
Retrieves the full cached source list for a previous web_search call.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `session_id` | string | Yes | `session_id` returned by `web_search` |
Return value (structured dict):
- `session_id`
- `sources_count`
- `sources`: source list (each item includes `url`, may include `title` / `description` / `provider`)
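Putting the two tools together, a typical `get_sources` response looks roughly like this (all values are illustrative, not real output):

```json
{
  "session_id": "abc123",
  "sources_count": 2,
  "sources": [
    {"url": "https://example.com/a", "title": "Example A", "provider": "grok"},
    {"url": "https://example.com/b"}
  ]
}
```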
Extracts complete web content via Tavily Extract API, returning Markdown format.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `url` | string | Yes | Target webpage URL |
Traverses website structure via Tavily Map API, discovering URLs and generating a site map.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `url` | string | Yes | - | Starting URL |
| `instructions` | string | No | `""` | Natural-language filtering instructions |
| `max_depth` | int | No | `1` | Max traversal depth (1-5) |
| `max_breadth` | int | No | `20` | Max links to follow per page (1-500) |
| `limit` | int | No | `50` | Total link processing limit (1-500) |
| `timeout` | int | No | `150` | Timeout in seconds (10-150) |
Takes no parameters. Shows the status of all configuration, tests the Grok API connection, and returns the response time and the list of available models (API keys are automatically masked).
| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model ID (e.g., `"grok-4-fast"`, `"grok-2-latest"`) |
Settings persist to ~/.config/grok-search/config.json across sessions.
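The persisted file is small; it might look like the following (the exact key name is an assumption):

```json
{
  "model": "grok-4-fast"
}
```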
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `action` | string | No | `"status"` | `"on"` disables built-in tools / `"off"` re-enables them / `"status"` reports current state |
Modifies project-level .claude/settings.json permissions.deny to disable Claude Code's built-in WebSearch and WebFetch.
A structured multi-phase planning scaffold to generate an executable search plan before running complex searches.
