111 changes: 56 additions & 55 deletions README.md
@@ -22,7 +22,7 @@ A Model Context Protocol server that provides **Google Search via Serper**. This
- `google_search_patents` - Set [all the parameters](src/serper_mcp_server/schemas.py#L56)
- `google_search_autocomplete` - Set [all the parameters](src/serper_mcp_server/schemas.py#L20)
- `webpage_scrape` - Set [all the parameters](src/serper_mcp_server/schemas.py#L62)

- `deep_research` - Performs a parallel multi-angle search (General, Technical, Reddit/HN) and aggregates unique results. Set [parameters](src/serper_mcp_server/schemas.py#L127)
Author comment: Main change made. The rest can be ignored; it is just cleanup.
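To see the new tool end to end, here is a minimal client-side sketch using the official `mcp` Python SDK. It is illustrative only and not part of the diff; the `uvx` launch command mirrors the README config below, and the query is a placeholder.

```python
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def run() -> None:
    # Launch the server over stdio; the API key is forwarded explicitly
    # because the stdio transport does not inherit the full environment.
    params = StdioServerParameters(
        command="uvx",
        args=["serper-mcp-server"],
        env={"SERPER_API_KEY": os.environ["SERPER_API_KEY"]},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # deep_research only requires the query string `q`.
            result = await session.call_tool("deep_research", {"q": "rust async runtimes"})
            print(result.content)


asyncio.run(run())
```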


## Usage

@@ -39,72 +39,74 @@ npx -y @smithery/cli install @garylab/serper-mcp-server --client claude
1. Make sure you have [`uv`](https://docs.astral.sh/uv/) installed on your system.

2. In your MCP client configuration or **Claude** settings (file `claude_desktop_config.json`), add the `serper` MCP server:
```json
{
"mcpServers": {
"serper": {
"command": "uvx",
"args": ["serper-mcp-server"],
"env": {
"SERPER_API_KEY": "<Your Serper API key>"
}
}
}
}
```
`uv` will automatically download the MCP server from [pypi.org](https://pypi.org/project/serper-mcp-server/) via `uvx` and make it available to your MCP client.

### Using `pip` in a project

1. Add `serper-mcp-server` to your MCP client's `requirements.txt` file:
```txt
serper-mcp-server
```

2. Install the dependencies.
```shell
pip install -r requirements.txt
```

3. Add the configuration for your client:
```json
{
"mcpServers": {
"serper": {
"command": "python3",
"args": ["-m", "serper_mcp_server"],
"env": {
"SERPER_API_KEY": "<Your Serper API key>"
}
}
}
}
```

### Using `pip` globally

1. Make sure `pip` or `pip3` is available on your system:
```bash
pip install serper-mcp-server
# or
pip3 install serper-mcp-server
```

2. In your MCP client configuration or **Claude** settings, add the `serper` MCP server:
```json
{
"mcpServers": {
"serper": {
"command": "python3",
"args": ["serper-mcp-server"],
"env": {
"SERPER_API_KEY": "<Your Serper API key>"
}
}
}
}
```

## Debugging

@@ -122,7 +124,6 @@ cd serper-mcp-server
npx @modelcontextprotocol/inspector uv run serper-mcp-server -e SERPER_API_KEY=<the key>
```


## License

serper-mcp-server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
2 changes: 1 addition & 1 deletion src/serper_mcp_server/__init__.py
Author comment: `__init__.py` causes a RuntimeWarning because it imports `server` at the top level when the package is initialized, before `server.py` is executed as `__main__`. I moved the import inside `main()` in `__init__.py` to defer it and suppress the warning.
Now, when you run `python -m src.serper_mcp_server.server`, `__init__.py` runs but doesn't touch `server.py`, allowing Python to execute `server.py` freshly as the main module without conflict.

If preferred, this file change can be safely ignored; skipping it introduces no breaking changes.
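Since the raw diff below lost its add/remove markers, here is a condensed sketch of the resulting `__init__.py` pattern. The `asyncio.run(server.main())` call is an assumption, as the truncated diff does not show the rest of the function body.

```python
# src/serper_mcp_server/__init__.py (pattern sketch)
import asyncio


def main():
    # Deferred import: server.py is loaded only when the console entry
    # point calls main(), not when the package itself is imported, so
    # `python -m src.serper_mcp_server.server` finds no stale module in
    # sys.modules and emits no RuntimeWarning.
    from . import server

    asyncio.run(server.main())
```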

@@ -1,9 +1,9 @@
import asyncio
import argparse
from . import server


def main():
from . import server
# parser = argparse.ArgumentParser()
# parser.add_argument("--q", type=str, help="The query to search for")
# args = parser.parse_args()
58 changes: 55 additions & 3 deletions src/serper_mcp_server/core.py
@@ -5,7 +5,8 @@
import aiohttp
from pydantic import BaseModel
from .enums import SerperTools
from .schemas import WebpageRequest
from .schemas import WebpageRequest, SearchRequest, DeepResearchRequest
import asyncio as _asyncio

SERPER_API_KEY = str.strip(os.getenv("SERPER_API_KEY", ""))
AIOHTTP_TIMEOUT = int(os.getenv("AIOHTTP_TIMEOUT", "15"))
@@ -22,6 +23,53 @@ async def scape(request: WebpageRequest) -> Dict[str, Any]:
return await fetch_json(url, request)



async def deep_research(request: DeepResearchRequest) -> list[Dict[str, Any]]:
query = request.q

# Generate 3 sub-queries for different search angles
sub_queries = [
query, # General
f"{query} technical specifications benchmarks", # Technical
f"{query} site:reddit.com OR site:hackernews.com", # Community
]

# Create SearchRequest objects for each sub-query
search_requests = [SearchRequest(q=sq, num="10") for sq in sub_queries]

# Fire all 3 requests in parallel using asyncio.gather
# CRITICAL: return_exceptions=True ensures one failure doesn't crash the batch
results = await _asyncio.gather(
*[google(SerperTools.GOOGLE_SEARCH, req) for req in search_requests],
return_exceptions=True
)

# Prune and deduplicate results
unique_results: Dict[str, Dict[str, Any]] = {}

for result in results:
# Skip failed requests (exceptions)
if isinstance(result, Exception):
continue

# Focus ONLY on the "organic" list in the JSON response
organic = result.get("organic", [])

for item in organic:
link = item.get("link")
if link and link not in unique_results:
# Extract only high-value fields for token economy
unique_results[link] = {
"title": item.get("title"),
"link": link,
"snippet": item.get("snippet"),
"date": item.get("date"),
}

# Return pruned, unique list
return list(unique_results.values())


async def fetch_json(url: str, request: BaseModel) -> Dict[str, Any]:
payload = request.model_dump(exclude_none=True)
headers = {
Expand All @@ -35,5 +83,9 @@ async def fetch_json(url: str, request: BaseModel) -> Dict[str, Any]:
timeout = aiohttp.ClientTimeout(total=AIOHTTP_TIMEOUT)
async with aiohttp.ClientSession(connector=connector, timeout=timeout) as session:
async with session.post(url, headers=headers, json=payload) as response:
response.raise_for_status()
return await response.json()
try:
response.raise_for_status()
return await response.json()
except aiohttp.ClientResponseError as e:
raise e
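To exercise the new `deep_research` coroutine outside the MCP server, a minimal sketch, assuming the package is installed and `SERPER_API_KEY` is exported; the query is a placeholder.

```python
import asyncio

from serper_mcp_server.core import deep_research
from serper_mcp_server.schemas import DeepResearchRequest


async def demo() -> None:
    # Fires the three sub-queries in parallel and prints the deduplicated hits.
    results = await deep_research(DeepResearchRequest(q="quantum error correction"))
    for hit in results:
        print(hit["link"], "-", hit["title"])


asyncio.run(demo())
```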

1 change: 1 addition & 0 deletions src/serper_mcp_server/enums.py
@@ -15,6 +15,7 @@ class SerperTools(StrEnum):
GOOGLE_SEARCH_PATENTS = "google_search_patents"
GOOGLE_SEARCH_AUTOCOMPLETE = "google_search_autocomplete"
WEBPAGE_SCRAPE = "webpage_scrape"
DEEP_RESEARCH = "deep_research"

@classmethod
def has_value(cls, value: str) -> bool:
5 changes: 5 additions & 0 deletions src/serper_mcp_server/schemas.py
@@ -122,3 +122,8 @@ class WebpageRequest(BaseModel):
pattern=r"^(true|false)$",
description="Include markdown in the response (boolean value as string: 'true' or 'false')",
)


class DeepResearchRequest(BaseModel):
"""Request model for deep research - focuses on the query field only."""
q: str = Field(..., description="The query to perform deep research on")
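For reference, a sketch of what `server.py` consumes from this model. The commented schema shape follows Pydantic v2 conventions and is approximate, so worth verifying against actual output.

```python
from serper_mcp_server.schemas import DeepResearchRequest

# server.py advertises this as the tool's inputSchema; for the model above,
# Pydantic v2 emits roughly:
#   {"title": "DeepResearchRequest", "type": "object",
#    "properties": {"q": {"title": "Q", "type": "string",
#                         "description": "The query to perform deep research on"}},
#    "required": ["q"]}
print(DeepResearchRequest.model_json_schema())

# Validation: omitting `q` raises pydantic.ValidationError.
req = DeepResearchRequest(q="solid state batteries")
print(req.q)
```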
27 changes: 23 additions & 4 deletions src/serper_mcp_server/server.py
@@ -1,11 +1,15 @@
from dotenv import load_dotenv

load_dotenv()

from typing import Any, List, Sequence
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent, ImageContent, EmbeddedResource
from dotenv import load_dotenv
import json
import asyncio

from .core import google, scape, SERPER_API_KEY
from .core import google, scape, SERPER_API_KEY, deep_research
from .enums import SerperTools
from .schemas import (
SearchRequest,
@@ -15,10 +19,10 @@
LensRequest,
AutocorrectRequest,
PatentsRequest,
WebpageRequest
WebpageRequest,
DeepResearchRequest
)

load_dotenv()

server = Server("Serper")

@@ -57,6 +61,12 @@ async def list_tools() -> List[Tool]:
inputSchema=WebpageRequest.model_json_schema(),
))

tools.append(Tool(
name=SerperTools.DEEP_RESEARCH.value,
description="Performs a deep dive by searching 3 distinct angles (General, Technical, Reddit) in parallel and aggregating unique results.",
inputSchema=DeepResearchRequest.model_json_schema(),
))

return tools

@server.call_tool()
@@ -65,6 +75,12 @@ async def call_tool(name: str, arguments: dict[str, Any]) -> Sequence[TextConten
return [TextContent(text=f"SERPER_API_KEY is empty!", type="text")]

try:
# Handle Deep Research tool
if name == SerperTools.DEEP_RESEARCH.value:
request = DeepResearchRequest(**arguments)
result = await deep_research(request)
return [TextContent(text=json.dumps(result, indent=2), type="text")]

if name == SerperTools.WEBPAGE_SCRAPE.value:
request = WebpageRequest(**arguments)
result = await scape(request)
@@ -85,3 +101,6 @@ async def main():
options = server.create_initialization_options()
async with stdio_server() as (read_stream, write_stream):
await server.run(read_stream, write_stream, options)

if __name__ == "__main__":
asyncio.run(main())