Add MCP Sampling Support to Enable Agentic MCP Tools #5116
Conversation
- Change `CodexSamplingHandler` to store `Config` instead of `ModelClient`
- Create independent LLM calls for each sampling request
- Use the sampling request's `systemPrompt` (or empty if not provided)
- Select model from `model_preferences.hints` if available
- Skip the system message in chat completions when instructions are empty

Per the MCP spec, sampling requests should use their own `systemPrompt` and model preferences, not inherit from the session's configuration. Fixes the issue where MCP sampling was using Codex's full agent instructions instead of the server's `systemPrompt`.

Might close openai#4929
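The model-selection and system-message behaviors described above can be sketched roughly as follows. This is illustrative only; names such as `ModelHint`, `select_model`, and `build_messages` are assumptions, not the PR's actual API.

```rust
// Illustrative sketch; types and names are assumptions, not the PR's API.

/// A model hint taken from a sampling request's `model_preferences.hints`.
struct ModelHint {
    name: Option<String>,
}

/// Pick the first hinted model name, falling back to the session default.
fn select_model(hints: &[ModelHint], default_model: &str) -> String {
    hints
        .iter()
        .filter_map(|h| h.name.as_deref())
        .next()
        .unwrap_or(default_model)
        .to_string()
}

/// Build (role, content) chat messages, skipping the system message
/// entirely when the sampling request supplied no `systemPrompt`.
fn build_messages(system_prompt: &str, user_text: &str) -> Vec<(String, String)> {
    let mut msgs = Vec::new();
    if !system_prompt.is_empty() {
        msgs.push(("system".to_string(), system_prompt.to_string()));
    }
    msgs.push(("user".to_string(), user_text.to_string()));
    msgs
}
```

The point of both helpers is the same: the sampling request, not the Codex session, decides what the model sees.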
An example MCP server (with a sampling request) is required to test the sampling feature. I will provide one for reference:

```toml
experimental_use_rmcp_client = true

[mcp_servers.code-reviewer]
args = [
  "-y",
  "@mcpc-tech/cli",
  "--config-url",
  "https://raw.githubusercontent.com/mcpc-tech/mcpc/refs/heads/main/packages/cli/examples/configs/code-reviewer.json",
]
command = "npx"
env = { "MCPC_TRACING_ENABLED" = "true" }
startup_timeout_sec = 36
tool_timeout_sec = 3600
```
@gpeal, would you mind taking a look here? This would make Codex the first CLI to support MCP sampling.
Thanks for your work here. Unfortunately, this isn't something we can accept at this time. Although there are many valid use cases for this, we don't yet have an appropriate way to prevent people from abusing this to turn their ChatGPT Codex credits into generic completions API credits. We will continue to think through alternatives here and may revisit this in the future.
Summary

Implements the MCP `sampling/createMessage` capability, allowing MCP servers to make independent LLM calls through Codex with their own prompts and model preferences.

Closes #4929. Follows the MCP sampling spec: https://modelcontextprotocol.io/specification/2025-06-18/client/sampling
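For reference, a `sampling/createMessage` request on the wire looks roughly like the following. This is a hedged sketch following the spec linked above; the field values are made up for illustration.

```rust
// Hedged sketch of an MCP sampling/createMessage request body,
// following the 2025-06-18 spec. Values are illustrative only.
fn example_create_message_request() -> &'static str {
    r#"{
      "jsonrpc": "2.0",
      "id": 1,
      "method": "sampling/createMessage",
      "params": {
        "messages": [
          { "role": "user",
            "content": { "type": "text", "text": "Review this diff" } }
        ],
        "systemPrompt": "You are a concise code reviewer.",
        "modelPreferences": { "hints": [ { "name": "gpt-4" } ] },
        "maxTokens": 1024
      }
    }"#
}
```

The client (Codex, here) answers with the model's completion; the server never talks to an LLM provider directly.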
Key Changes

- New `CodexSamplingHandler` (`codex-rs/core/src/mcp_sampling_handler.rs`): uses the request's `systemPrompt` and `model_preferences.hints`
- Integration:
  - `chat_completions.rs`: Skip system message when instructions are empty
  - `rmcp-client`: Added `SamplingHandler` trait for delegation
  - `McpConnectionManager`: Wires sampling handler and declares capability

Benefits
Write mini agents inside Codex using MCP tools. See the code-review agent demo below; it can also run in the background.
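The `SamplingHandler` delegation mentioned under Key Changes could look roughly like this minimal sketch. The signatures are assumptions; the real trait in `rmcp-client` is async and uses the rmcp request/result types.

```rust
// Minimal sketch; the actual trait in the PR is async and uses rmcp types.

struct CreateMessageRequest {
    system_prompt: Option<String>,
    user_text: String,
}

struct CreateMessageResult {
    model: String,
    text: String,
}

/// Handler the rmcp client delegates `sampling/createMessage` requests to,
/// keeping the MCP layer decoupled from how completions are produced.
trait SamplingHandler {
    fn create_message(&self, req: CreateMessageRequest) -> CreateMessageResult;
}

/// Toy handler that echoes the request, standing in for a real LLM call.
struct EchoHandler;

impl SamplingHandler for EchoHandler {
    fn create_message(&self, req: CreateMessageRequest) -> CreateMessageResult {
        CreateMessageResult {
            model: "echo-model".to_string(),
            text: format!(
                "[{}] {}",
                req.system_prompt.unwrap_or_default(),
                req.user_text
            ),
        }
    }
}
```

In the real wiring, the handler would hold the `Config` described above and issue an independent chat-completions call per request.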
Checks Passed
✅ cargo test -p codex-core
✅ just fix -p codex-core
✅ just fix -p codex-rmcp-client