Commit 0b71a56
[3/n] Add an MCP stdio example (#322)

### Summary

Spins up a stdio server with some local files, then asks the model questions.

### Test Plan

Run the example, see it work.

---

[//]: # (BEGIN SAPLING FOOTER)
* #324
* __->__ #322
* #336

2 parents f99703e + 8a06e6c commit 0b71a56

File tree

11 files changed: +183 −103

docs/mcp.md

+51

# Model context protocol

The [Model context protocol](https://modelcontextprotocol.io/introduction) (aka MCP) is a way to provide tools and context to the LLM. From the MCP docs:

> MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

The Agents SDK has support for MCP. This enables you to use a wide range of MCP servers to provide tools to your Agents.

## MCP servers

Currently, the MCP spec defines two kinds of servers, based on the transport mechanism they use:

1. **stdio** servers run as a subprocess of your application. You can think of them as running "locally".
2. **HTTP over SSE** servers run remotely. You connect to them via a URL.

You can use the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse] classes to connect to these servers.

For example, this is how you'd use the [official MCP filesystem server](https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem):

```python
async with MCPServerStdio(
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
    }
) as server:
    tools = await server.list_tools()
```
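
The stdio example above covers local servers; to connect to a remote server you'd use `MCPServerSse` instead. A minimal sketch, assuming a placeholder URL and the `params` keys described in the `src/agents/mcp/server.py` docstring (URL, headers, timeouts):

```python
async with MCPServerSse(
    params={
        "url": "https://example.com/sse",  # placeholder URL for a remote MCP server
    }
) as server:
    tools = await server.list_tools()
```
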
## Using MCP servers

MCP servers can be added to Agents. The Agents SDK will call `list_tools()` on the MCP servers each time the Agent is run. This makes the LLM aware of the MCP server's tools. When the LLM calls a tool from an MCP server, the SDK calls `call_tool()` on that server.

```python
agent = Agent(
    name="Assistant",
    instructions="Use the tools to achieve the task",
    mcp_servers=[mcp_server_1, mcp_server_2],
)
```
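
Once servers are attached, the agent runs like any other agent; a minimal sketch, assuming `mcp_server_1` and `mcp_server_2` are already connected (e.g. inside `async with` blocks as shown earlier):

```python
result = await Runner.run(starting_agent=agent, input="Use the tools to complete the task")
print(result.final_output)
```
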
## Caching

Every time an Agent runs, it calls `list_tools()` on the MCP server. This can be a latency hit, especially if the server is a remote server. To automatically cache the list of tools, you can pass `cache_tools_list=True` to both [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse]. You should only do this if you're certain the tool list will not change.

If you want to invalidate the cache, you can call `invalidate_tools_cache()` on the servers.
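
Putting the two together, a short sketch of enabling and invalidating the cache, reusing the filesystem server from earlier (`samples_dir` is assumed to be defined):

```python
async with MCPServerStdio(
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
    },
    cache_tools_list=True,  # fetch the tool list once, then serve it from the cache
) as server:
    tools = await server.list_tools()  # first call hits the server
    tools = await server.list_tools()  # later calls are served from the cache

    # If the server's tools change, drop the cache so the next call refetches.
    server.invalidate_tools_cache()
```
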
## End-to-end example

View complete working examples at [examples/mcp](https://github.com/openai/openai-agents-python/tree/main/examples/mcp).

docs/ref/mcp/server.md

+3

# `MCP Servers`

::: agents.mcp.server

docs/ref/mcp/util.md

+3

# `MCP Util`

::: agents.mcp.util

+26

# MCP Filesystem Example

This example uses the [filesystem MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem), running locally via `npx`.

Run it via:

```
uv run python examples/mcp/filesystem_example/main.py
```

## Details

The example uses the `MCPServerStdio` class from `agents`, with the command:

```bash
npx -y "@modelcontextprotocol/server-filesystem" <samples_directory>
```

It's only given access to the `sample_files` directory adjacent to the example, which contains some sample data.

Under the hood:

1. The server is spun up in a subprocess, and exposes a bunch of tools like `list_directory()`, `read_file()`, etc.
2. We add the server instance to the Agent via `mcp_servers`.
3. Each time the agent runs, we call out to the MCP server to fetch the list of tools via `server.list_tools()`.
4. If the LLM chooses to use an MCP tool, we call the MCP server to run the tool via `server.call_tool()`, as sketched below.
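
For illustration, those same under-the-hood calls can be made directly against a connected server; a sketch assuming `server` is an open `MCPServerStdio` instance (the tool name and arguments are hypothetical, modeled on the filesystem server's interface):

```python
tools = await server.list_tools()
print([tool.name for tool in tools])

# Invoke a tool directly, bypassing the LLM (hypothetical tool name and args).
result = await server.call_tool("read_file", {"path": "favorite_books.txt"})
```
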
+54

```python
import asyncio
import os
import shutil

from agents import Agent, Runner, trace
from agents.mcp import MCPServer, MCPServerStdio


async def run(mcp_server: MCPServer):
    agent = Agent(
        name="Assistant",
        instructions="Use the tools to read the filesystem and answer questions based on those files.",
        mcp_servers=[mcp_server],
    )

    # List the files it can read
    message = "Read the files and list them."
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)

    # Ask about books
    message = "What is my #1 favorite book?"
    print(f"\n\nRunning: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)

    # Ask a question that reads then reasons.
    message = "Look at my favorite songs. Suggest one new song that I might like."
    print(f"\n\nRunning: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)


async def main():
    current_dir = os.path.dirname(os.path.abspath(__file__))
    samples_dir = os.path.join(current_dir, "sample_files")

    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
        }
    ) as server:
        with trace(workflow_name="MCP Filesystem Example"):
            await run(server)


if __name__ == "__main__":
    # Let's make sure the user has npx installed
    if not shutil.which("npx"):
        raise RuntimeError("npx is not installed. Please install it with `npm install -g npx`.")

    asyncio.run(main())
```

+20

1. To Kill a Mockingbird – Harper Lee
2. Pride and Prejudice – Jane Austen
3. 1984 – George Orwell
4. The Hobbit – J.R.R. Tolkien
5. Harry Potter and the Sorcerer’s Stone – J.K. Rowling
6. The Great Gatsby – F. Scott Fitzgerald
7. Charlotte’s Web – E.B. White
8. Anne of Green Gables – Lucy Maud Montgomery
9. The Alchemist – Paulo Coelho
10. Little Women – Louisa May Alcott
11. The Catcher in the Rye – J.D. Salinger
12. Animal Farm – George Orwell
13. The Chronicles of Narnia: The Lion, the Witch, and the Wardrobe – C.S. Lewis
14. The Book Thief – Markus Zusak
15. A Wrinkle in Time – Madeleine L’Engle
16. The Secret Garden – Frances Hodgson Burnett
17. Moby-Dick – Herman Melville
18. Fahrenheit 451 – Ray Bradbury
19. Jane Eyre – Charlotte Brontë
20. The Little Prince – Antoine de Saint-Exupéry

+4

- In the summer, I love visiting London.
- In the winter, Tokyo is great.
- In the spring, San Francisco.
- In the fall, New York is the best.

+10

1. "Here Comes the Sun" – The Beatles
2. "Imagine" – John Lennon
3. "Bohemian Rhapsody" – Queen
4. "Shake It Off" – Taylor Swift
5. "Billie Jean" – Michael Jackson
6. "Uptown Funk" – Mark Ronson ft. Bruno Mars
7. "Don’t Stop Believin’" – Journey
8. "Dancing Queen" – ABBA
9. "Happy" – Pharrell Williams
10. "Wonderwall" – Oasis

mkdocs.yml

+5

```diff
@@ -28,6 +28,7 @@ nav:
       - results.md
       - streaming.md
       - tools.md
+      - mcp.md
       - handoffs.md
       - tracing.md
       - context.md
@@ -60,6 +61,8 @@ nav:
           - ref/models/interface.md
           - ref/models/openai_chatcompletions.md
           - ref/models/openai_responses.md
+          - ref/mcp/server.md
+          - ref/mcp/util.md
       - Tracing:
           - ref/tracing/index.md
           - ref/tracing/create.md
@@ -107,6 +110,8 @@ plugins:
           show_signature_annotations: true
           # Makes the font sizes nicer
           heading_level: 3
+          # Show inherited members
+          inherited_members: true
 
 extra:
     # Remove material generation message in footer
```

src/agents/mcp/mcp_util.py

-94
This file was deleted.

src/agents/mcp/server.py

+7 −9

```diff
@@ -175,10 +175,10 @@ def __init__(self, params: MCPServerStdioParams, cache_tools_list: bool = False)
         """Create a new MCP server based on the stdio transport.
 
         Args:
-            params: The params that configure the server. This includes:
-                - The command (e.g. `python` or `node`) that starts the server.
-                - The args to pass to the server command (e.g. `foo.py` or `server.js`).
-                - The environment variables to set for the server.
+            params: The params that configure the server. This includes the command to run to
+                start the server, the args to pass to the command, the environment variables to
+                set for the server, the working directory to use when spawning the process, and
+                the text encoding used when sending/receiving messages to the server.
             cache_tools_list: Whether to cache the tools list. If `True`, the tools list will be
                 cached and only fetched from the server once. If `False`, the tools list will be
                 fetched from the server on each call to `list_tools()`. The cache can be
@@ -235,11 +235,9 @@ def __init__(self, params: MCPServerSseParams, cache_tools_list: bool = False):
         """Create a new MCP server based on the HTTP with SSE transport.
 
         Args:
-            params: The params that configure the server. This includes:
-                - The URL of the server.
-                - The headers to send to the server.
-                - The timeout for the HTTP request.
-                - The timeout for the SSE connection.
+            params: The params that configure the server. This includes the URL of the server,
+                the headers to send to the server, the timeout for the HTTP request, and the
+                timeout for the SSE connection.
             cache_tools_list: Whether to cache the tools list. If `True`, the tools list will be
                 cached and only fetched from the server once. If `False`, the tools list will be
```
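
The updated stdio docstring now lists the params in prose. For illustration, a sketch of a fuller configuration under those descriptions; the exact param keys (`env`, `cwd`, `encoding`) are assumptions inferred from the docstring wording, not confirmed by this diff:

```python
async with MCPServerStdio(
    params={
        "command": "python",           # the command that starts the server
        "args": ["my_mcp_server.py"],  # hypothetical server script
        "env": {"MY_VAR": "value"},    # environment variables for the server process
        "cwd": "/path/to/server",      # working directory when spawning the process
        "encoding": "utf-8",           # text encoding for messages to/from the server
    }
) as server:
    tools = await server.list_tools()
```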
