A robust, cross-platform Rust library for managing Model Context Protocol (MCP) server processes with advanced retry logic, health monitoring, and platform-specific optimizations.
GPMCP Layer provides a high-level abstraction for spawning, managing, and communicating with MCP servers. It handles process lifecycle management, automatic reconnection with configurable retry strategies, and platform-specific optimizations for both Unix and Windows systems.
- Cross-Platform Support: Native implementations for Unix (Linux, macOS) and Windows
- Robust Process Management: Advanced process lifecycle management with graceful termination
- Configurable Retry Logic: Exponential backoff, jitter, and customizable retry strategies
- Health Monitoring: Built-in health checks and automatic recovery
- Transport Flexibility: Support for stdio and SSE (Server-Sent Events) transports
- Resource Management: Automatic cleanup and resource management
- Async/Await Support: Fully asynchronous API built on Tokio
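The backoff-with-jitter idea behind the retry strategies can be sketched in a few lines of plain Rust. The constants, the cap, and the deterministic jitter source below are illustrative assumptions, and `backoff_delay` is a hypothetical helper, not GPMCP Layer's actual internals:

```rust
use std::time::Duration;

/// Illustrative exponential-backoff schedule: delay = base * 2^attempt,
/// capped at `max`, plus additive jitter below capped/2. The jitter here is
/// derived from a seed for reproducibility; a real implementation would use
/// a proper RNG.
fn backoff_delay(attempt: u32, base: Duration, max: Duration, jitter_seed: u64) -> Duration {
    // Double the delay each attempt, saturating instead of overflowing.
    let exp = base.saturating_mul(1u32 << attempt.min(16));
    let capped = exp.min(max);
    // Cheap deterministic jitter in [0, capped/2).
    let jitter_ms = jitter_seed.wrapping_mul(2_654_435_761) % (capped.as_millis() as u64 / 2).max(1);
    capped + Duration::from_millis(jitter_ms)
}

fn main() {
    for attempt in 0..5 {
        let d = backoff_delay(attempt, Duration::from_millis(100), Duration::from_secs(5), attempt as u64);
        println!("attempt {attempt}: wait {d:?}");
    }
}
```

Jitter spreads out reconnect attempts so that many clients restarting at once do not hammer the server in lockstep.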
The project is organized as a Rust workspace with the following crates:
```
├── gpmcp-layer/          # Main library with high-level API
├── gpmcp-layer-core/     # Platform-independent traits and configurations
├── gpmcp-layer-unix/     # Unix-specific process management
└── gpmcp-layer-windows/  # Windows-specific process management
```
Add this to your `Cargo.toml`:

```toml
[dependencies]
gpmcp-layer = { git = "https://github.com/gpmcp/layer/" }
tokio = { version = "1.0", features = ["full"] }
```
```rust
use std::borrow::Cow;

use gpmcp_layer::{CallToolRequestParam, GpmcpLayer, RunnerConfig, Transport};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Configure the MCP server
    let config = RunnerConfig::builder()
        .name("my-mcp-server")
        .version("1.0.0")
        .command("python3")
        .args(["server.py", "--port", "8080"])
        .transport(Transport::Sse {
            url: "http://localhost:8080/sse".to_string(),
        })
        .build()?;

    // Create and connect the layer to the server
    let layer = GpmcpLayer::new(config)?.connect().await?;

    // List available tools
    let tools = layer.list_tools().await?;
    println!("Available tools: {:?}", tools);

    // Call a tool
    let request = CallToolRequestParam {
        name: Cow::Borrowed("example_tool"),
        arguments: serde_json::json!({"input": "test"}).as_object().cloned(),
    };
    let result = layer.call_tool(request).await?;
    println!("Tool result: {:?}", result);

    // Cleanup
    layer.cancel().await?;
    Ok(())
}
```
Practical examples are available in the `examples/` directory:

- Simple StdIO Client (`examples/simple_stdio/`) - Basic GPMCP Layer usage demonstrating server connection, tool discovery, and communication patterns using the StdIO transport.
- Simple SSE Client (`examples/simple_sse/`) - The same workflow using the SSE transport.
- Counter Server (`examples/test-mcp-server/`) - Sample MCP server implementing stateful counter operations with both StdIO and SSE transport support.

See the examples README for detailed usage instructions.
- `new(config)` - Create a new `GpmcpLayer` instance with the given configuration
- `connect()` - Start and connect to the MCP server
- `list_tools()` - Get available tools from the MCP server
- `call_tool(request)` - Execute a tool with parameters
- `list_prompts()` - Get available prompts
- `get_prompt(request)` - Retrieve a specific prompt
- `list_resources()` - Get available resources
- `read_resource(request)` - Read a specific resource
- `is_healthy()` - Check server health with retries
- `is_healthy_quick()` - Quick health check without retries
- `peer_info()` - Get server information
- `cancel()` - Gracefully shut down the layer
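The relationship between `is_healthy()` and `is_healthy_quick()` can be pictured as a retry wrapper around a single probe. The helper below is a hypothetical, synchronous sketch of that pattern (the real API is async and would sleep with backoff between attempts):

```rust
/// Hypothetical sketch: a "quick" check is one probe; a full health check
/// wraps the same probe in a bounded retry loop.
fn check_with_retries<E>(max_attempts: u32, mut probe: impl FnMut() -> Result<(), E>) -> bool {
    for _ in 0..max_attempts {
        if probe().is_ok() {
            return true;
        }
        // A real implementation would wait (with backoff) before retrying.
    }
    false
}

fn main() {
    // A probe that fails twice, then succeeds.
    let mut calls = 0;
    let healthy = check_with_retries(3, || {
        calls += 1;
        if calls < 3 { Err(()) } else { Ok(()) }
    });
    println!("healthy after {calls} probes: {healthy}");
}
```

The retrying variant tolerates transient failures (e.g. a server still starting up), while the quick variant gives an immediate snapshot.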
```shell
cargo build --workspace
```

For CI environments or when you only need specific platform support:

```shell
# Unix only (Linux, macOS)
cargo build --workspace --exclude gpmcp-layer-windows

# Windows only
cargo build --workspace --exclude gpmcp-layer-unix
```

```shell
# Run all tests
cargo test --workspace

# Platform-specific testing
cargo test --workspace --exclude gpmcp-layer-windows  # Unix
cargo test --workspace --exclude gpmcp-layer-unix     # Windows

# Integration tests
cargo test --test integration_tests
```
| Platform | Status | Process Manager | Notes |
|---|---|---|---|
| Linux | ✅ Full | Unix | Native process groups, signals |
| macOS | ✅ Full | Unix | Native process groups, signals |
| Windows | ✅ Full | Windows | Job objects, process trees |
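A per-platform split like the one in the table is typically wired up with conditional compilation, so only the relevant backend is compiled in. The sketch below is a hypothetical illustration of `cfg`-gated backend selection; the module and function names are invented for the example and are not GPMCP Layer's actual layout:

```rust
// Hypothetical illustration of cfg-gated backend selection.
#[cfg(unix)]
mod backend {
    // On Unix, process trees are managed with process groups and signals.
    pub fn name() -> &'static str { "unix" }
}

#[cfg(windows)]
mod backend {
    // On Windows, process trees are managed with job objects.
    pub fn name() -> &'static str { "windows" }
}

fn main() {
    println!("selected process manager backend: {}", backend::name());
}
```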
The library uses `anyhow::Result` for error handling and provides detailed error context:
```rust
use gpmcp_layer::{CallToolRequestParam, GpmcpLayer};

async fn call(layer: &GpmcpLayer, request: CallToolRequestParam) -> anyhow::Result<()> {
    match layer.call_tool(request).await {
        Ok(result) => println!("Success: {:?}", result),
        Err(e) => {
            eprintln!("Error: {}", e);
            // Error chain provides detailed context
            for cause in e.chain() {
                eprintln!("  Caused by: {}", cause);
            }
        }
    }
    Ok(())
}
```
Enable logging to see detailed operation information:
```rust
fn main() {
    // Requires the `env-filter` feature of the `tracing-subscriber` crate
    tracing_subscriber::fmt()
        .with_env_filter("gpmcp_layer=debug")
        .init();
}
```
- Process spawning is optimized for each platform
- Retry strategies use exponential backoff to avoid overwhelming servers
- Health checks are lightweight and non-blocking
- Resource cleanup is automatic and thorough
- Process isolation using platform-specific mechanisms
- Secure environment variable handling
- Proper cleanup of sensitive data
- No hardcoded credentials or secrets
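One way to picture the environment-variable handling above: spawn the child from an empty environment and pass through only an explicit allow-list, so secrets in the parent process never leak into the server. This is a hypothetical sketch using the standard library, assuming a Unix `/bin/sh`, not GPMCP Layer's actual policy:

```rust
use std::process::Command;

fn main() {
    let out = Command::new("/bin/sh")
        .arg("-c")
        .arg("echo home=${HOME:-unset} allowed=${ALLOWED:-unset}")
        .env_clear()           // drop every inherited variable (incl. HOME)
        .env("ALLOWED", "yes") // allow-listed variable survives
        .output()
        .expect("failed to spawn /bin/sh");

    // The child sees only what was explicitly allow-listed.
    println!("{}", String::from_utf8_lossy(&out.stdout).trim());
    // → home=unset allowed=yes
}
```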