MCP Client

A general MCP (Model Context Protocol) Client implementation that allows applications to provide context for LLMs in a standardized way.

Overview

This client implements the Model Context Protocol (MCP) to separate the concerns of providing context from the actual LLM interaction. The client can connect to any MCP server and leverage tools through the MCP Python SDK.

For an introduction to the protocol, see: MCP Introduction

Features

  • Multiple Connection Methods: Connect to MCP servers via stdio or SSE (Server-Sent Events)
  • OpenAI Function Calling: Automatically select and execute tools using an LLM
  • Custom OpenAI Endpoints: Configure custom API endpoints for OpenAI-compatible services
  • Configurable LLM Parameters: Control temperature, max tokens, and other model parameters
  • MCP Server Configuration: Use JSON configuration files to manage server connections

Installation

Requirements

  • Python 3.11 or higher
  • uv for package management
  • MCP Python library

Install the MCP Client

# Create a virtual environment and install the client
uv venv
uv pip install -e .

Alternative Installation with pip

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install the package
pip install -e .

Install Example MCP Server (Optional)

The repository includes an example weather MCP server for testing:

# Navigate to the example server directory
cd mcp-server-example

# Install the server dependencies
uv pip install -e .

# Or alternatively, just run the server directly
# python weather.py

Usage

MCP Server Configuration

MCP servers can be configured using a JSON configuration file:

{
    "mcpServers": {
        "weather": {
            "command": "uv",
            "args": [
                "--directory",
                "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather",
                "run",
                "weather.py"
            ]
        }
    }
}
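The structure above is plain JSON, so it can be inspected programmatically before connecting. The helper below (`list_configured_servers` is a hypothetical name, not part of the client API) sketches how the server IDs under the top-level `mcpServers` key could be listed:

```python
import json

def list_configured_servers(config_text: str) -> list[str]:
    """Return the server IDs defined under the top-level "mcpServers" key."""
    config = json.loads(config_text)
    return sorted(config.get("mcpServers", {}))

# A minimal config in the same shape as the example above
sample = """
{
    "mcpServers": {
        "weather": {
            "command": "uv",
            "args": ["--directory", "/path/to/weather", "run", "weather.py"]
        }
    }
}
"""
print(list_configured_servers(sample))  # ['weather']
```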

Basic Usage

from mcp_client import MCPClient, connect_stdio, connect_sse, execute_with_tool, close

# Initialize the client
client = MCPClient(
    api_key="your-openai-api-key",  # or use OPENAI_API_KEY env var
    base_url="https://api.openai.com/v1",  # optional
    max_retries=2,  # optional
    timeout=60.0,  # optional
    model="gpt-4"  # optional, default model for function calling
)

# Connect to an MCP server via stdio
connect_stdio(client, "python", ["server.py"])

# Or connect via SSE (Server-Sent Events)
connect_sse(client, "localhost", 8080)

# Use tools provided by the server
response = execute_with_tool(client, "What's the weather?", "get_weather", {"location": "New York"})

# Use OpenAI function calling to automatically select and execute tools
# This will use the default model specified during initialization
response = client.execute_with_function_calling(
    prompt="What's the weather in New York?",
    temperature=0.7,  # optional
    max_tokens=None  # optional
)

# Or override the model for a specific call
response = client.execute_with_function_calling(
    prompt="What's the weather in New York?",
    model="gpt-3.5-turbo",  # overrides the default model
    temperature=0.7,  # optional
    max_tokens=None  # optional
)

# Close the connection
close(client)

Configuration File Usage Methods

There are several ways to use configuration files with the MCP client:

1. Direct Constructor Integration

from mcp_client import MCPClient

# Auto-connect using config file (prompts for server selection if multiple)
client = MCPClient(
    api_key="your-openai-api-key",
    mcp_config_file="mcp_config.json"
)

# Connect to a specific server from config with custom model
client = MCPClient(
    api_key="your-openai-api-key",
    mcp_config_file="mcp_config.json",
    server_id="notion-mcp",
    model="gpt-3.5-turbo"
)

# With custom parameters
client = MCPClient(
    api_key="your-openai-api-key",
    base_url="https://your-custom-endpoint.com/v1",
    max_retries=3,
    timeout=120.0,
    model="gpt-4-turbo"
)

# List available servers from config
servers = client.list_available_servers()

2. Programmatic Helper Functions

import json
from mcp_client import MCPClient, connect_stdio, connect_to_server_by_id, connect_using_config

# Create client
client = MCPClient(api_key="your-openai-api-key")

# Method 1: Connect using configuration file directly 
# (will prompt user to select if multiple servers)
connected = connect_using_config(client, "mcp_config.json")

# Method 2: Connect using configuration file with specific server ID
connected = connect_using_config(client, "mcp_config.json", "notion-mcp")

# Method 3: Connect using server ID and config file
connected = connect_to_server_by_id(client, "notion-mcp", "mcp_config.json")

# Method 4: Connect using extracted config details
with open('mcp_config.json', 'r') as f:
    config = json.load(f)
    server_id = "notion-mcp"
    server_config = config["mcpServers"][server_id]
    connect_stdio(
        client,
        server_config["command"],
        server_config["args"],
        server_config.get("env")  # "env" is optional in the config schema
    )

Command Line Usage

Simple Client

# Connect to a server via SSE and list available tools
python -m mcp_client.examples.simple_client --server localhost:8080

# Connect to a server via stdio and list available tools
python -m mcp_client.examples.simple_client --server "stdio:python?args=server.py"

# Connect to a server using a configuration file directly
python -m mcp_client.examples.simple_client --mcp-config mcp_config.json

# Execute a specific tool with a prompt
python -m mcp_client.examples.simple_client --server localhost:8080 --tool calculator --prompt "Calculate 5 + 3"

# Use function calling to automatically select and execute tools
python -m mcp_client.examples.simple_client --server localhost:8080 --prompt "What's the weather in New York?" --api-key your_openai_api_key

# Use a custom OpenAI API endpoint
python -m mcp_client.examples.simple_client --server localhost:8080 --prompt "Calculate 2 + 2" --api-key your_openai_api_key --base-url "https://your-custom-endpoint.com/v1"

Example Weather Server

# Run the server
cd mcp-server-example
python weather.py

# In another terminal, connect to it using the client
python -m mcp_client.examples.simple_client --server "stdio:python?args=./mcp-server-example/weather.py" --prompt "What's the weather in New York?" --api-key your_openai_api_key --base-url your_openai_base_url --model your_openai_model

# Or you can use mcp server config file
python -m mcp_client.examples.simple_client --mcp-config ./mcp-server-example/mcp-server-example.json --prompt "What's the weather in New York?" --api-key your_openai_api_key --base-url your_openai_base_url --model your_openai_model

Connection Methods

The MCP client supports four connection methods:

Direct Configuration File

For connecting to a server defined in a JSON configuration file (will prompt for selection if multiple servers):

--mcp-config <path-to-config>

Example: --mcp-config mcp_config.json

Server ID from Configuration File

For connecting to a specific server defined in a JSON configuration file:

--server-id <server-id> --mcp-config <path-to-config>

Example: --server-id notion-mcp --mcp-config mcp_config.json

SSE Connection (Server-Sent Events)

For connecting to a server running on a host and port:

--server host:port

Example: --server localhost:8080

Stdio Connection

For launching and connecting to a server process via standard input/output:

--server "stdio:command?args=arg1,arg2"

Example: --server "stdio:python?args=server.py,-v"
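The two `--server` formats above can be told apart by their prefix. The sketch below (`parse_server_spec` is a hypothetical helper; the CLI's actual parsing may differ) illustrates how such a value could be decomposed into a connection description:

```python
from urllib.parse import parse_qs

def parse_server_spec(spec: str):
    """Parse a --server value into a connection description.

    "stdio:command?args=a,b" -> ("stdio", command, [args])
    "host:port"              -> ("sse", host, port)
    Hypothetical helper for illustration only.
    """
    if spec.startswith("stdio:"):
        rest = spec[len("stdio:"):]
        command, _, query = rest.partition("?")
        args = []
        if query:
            # args are comma-separated inside a single "args" query parameter
            args = parse_qs(query).get("args", [""])[0].split(",")
        return ("stdio", command, args)
    host, _, port = spec.rpartition(":")
    return ("sse", host, int(port))

print(parse_server_spec("stdio:python?args=server.py,-v"))
print(parse_server_spec("localhost:8080"))
```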

OpenAI Configuration Options

The client supports various OpenAI configuration options:

| Option | Description | Default |
| --- | --- | --- |
| --api-key | OpenAI API key | OPENAI_API_KEY env var |
| --base-url | Custom API endpoint | OPENAI_BASE_URL env var |
| --model | Model name to use | gpt-4 |
| --temperature | Temperature for sampling | 0.7 |
| --max-tokens | Maximum tokens to generate | None (default model limit) |
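These options map naturally onto argparse with environment-variable fallbacks. The sketch below is a hypothetical declaration of the same flags; the real simple_client may define them differently:

```python
import argparse
import os

def build_parser() -> argparse.ArgumentParser:
    """Declare the OpenAI-related CLI options with env-var fallbacks.

    Illustrative only; not the actual simple_client argument parser.
    """
    parser = argparse.ArgumentParser(description="MCP client options")
    parser.add_argument("--api-key", default=os.environ.get("OPENAI_API_KEY"))
    parser.add_argument("--base-url", default=os.environ.get("OPENAI_BASE_URL"))
    parser.add_argument("--model", default="gpt-4")
    parser.add_argument("--temperature", type=float, default=0.7)
    parser.add_argument("--max-tokens", type=int, default=None)
    return parser

# Flags not passed on the command line fall back to their defaults
args = build_parser().parse_args(["--model", "gpt-3.5-turbo", "--temperature", "0.2"])
print(args.model, args.temperature, args.max_tokens)  # gpt-3.5-turbo 0.2 None
```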

Using Different API Providers

The MCP Client can work with different API providers that offer OpenAI-compatible endpoints:

OpenAI (Default)

client = MCPClient(
    api_key="your-openai-api-key",
    base_url="https://api.openai.com/v1"  # default
)

Alibaba Cloud DashScope

client = MCPClient(
    api_key="your-dashscope-api-key",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
)

DashScope/Qwen models support function calling in the same style as OpenAI models. The client automatically configures the API request to include the available tools. For best results, use Qwen-Plus, Qwen-Max, Qwen-Turbo, or Qwen2.5 models.

Anthropic (Claude)

client = MCPClient(
    api_key="your-anthropic-api-key",
    base_url="https://api.anthropic.com/v1"
)

Microsoft Azure OpenAI

client = MCPClient(
    api_key="your-azure-api-key",
    base_url="https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}"
)

The client automatically detects the API provider based on the base URL and adapts its functionality accordingly.
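Detection of this kind can be pictured as a simple mapping from the base URL's hostname. The function below is an illustrative sketch, not the client's actual logic; the provider labels are assumptions:

```python
from urllib.parse import urlparse

def detect_provider(base_url: str) -> str:
    """Guess the API provider from the base URL's hostname.

    Hypothetical sketch; the client's real detection may differ.
    """
    host = urlparse(base_url).hostname or ""
    if host.endswith("api.openai.com"):
        return "openai"
    if host.endswith("dashscope.aliyuncs.com"):
        return "dashscope"
    if host.endswith("api.anthropic.com"):
        return "anthropic"
    if host.endswith("openai.azure.com"):
        return "azure"
    # Anything else is treated as a generic OpenAI-compatible endpoint
    return "openai-compatible"

print(detect_provider("https://dashscope.aliyuncs.com/compatible-mode/v1"))  # dashscope
```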

MCPClient Parameters

The MCPClient class accepts the following parameters during initialization:

| Parameter | Description | Default | Required |
| --- | --- | --- | --- |
| api_key | OpenAI API key for authentication | OPENAI_API_KEY env var | Yes, if not set in env |
| base_url | Custom API endpoint URL | OPENAI_BASE_URL env var | No |
| max_retries | Maximum number of retry attempts for API calls | 2 | No |
| timeout | Timeout in seconds for API calls | 60.0 | No |
| mcp_config_file | Path to MCP server configuration file | None | No |
| server_id | ID of server to connect to from config file | None | No |
| model | Default OpenAI model to use for function calling | "gpt-4" | No |

Example of using all parameters:

client = MCPClient(
    api_key="your-openai-api-key",
    base_url="https://custom-api-endpoint.com/v1",
    max_retries=3,
    timeout=120.0,
    mcp_config_file="path/to/mcp_config.json",
    server_id="weather-server",
    model="gpt-4-turbo"
)

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
