GitHub MCP Client with Google Gemini

This project provides a robust, asynchronous Python client for interacting with the official GitHub Model Context Protocol (MCP) server. It uses Google's Gemini Pro model for natural language understanding and tool selection.

This script demonstrates the complete, single-turn lifecycle of an MCP interaction:

  1. Initialization: Securely connects to the MCP server and establishes a session.
  2. Tool Discovery: Fetches and understands the full list of available tools from the server.
  3. LLM-Powered Reasoning: Sends a user's natural language prompt to Gemini, which intelligently decides if a tool should be used.
  4. Tool Execution: If a tool is chosen, the script executes it with the parameters provided by the LLM.
  5. Results: Displays the final output, either from the tool call or as a direct textual answer from the LLM.
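The lifecycle above maps onto a small set of JSON-RPC messages. As a hedged illustration, the method names below (`initialize`, `tools/list`, `tools/call`) follow the MCP specification, but the exact payload shapes and the `protocolVersion` value are simplified assumptions, not this project's code:

```python
def initialize_request(request_id: int) -> dict:
    # Step 1: open the session; the server replies with its capabilities.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # assumed protocol revision
            "capabilities": {},
            "clientInfo": {"name": "mcp-playground", "version": "0.1"},
        },
    }

def list_tools_request(request_id: int) -> dict:
    # Step 2: discover the tools the server exposes.
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

def call_tool_request(request_id: int, name: str, arguments: dict) -> dict:
    # Step 4: execute the tool the LLM selected, with its chosen arguments.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
```

Steps 3 and 5 happen on the Gemini side: the tool list is handed to the model, and either a tool call or a plain text answer comes back.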

Features

  • Asynchronous: Built with asyncio and httpx for efficient, non-blocking network I/O.
  • Type-Safe: Intelligently formats tool schemas to match the requirements of the Gemini API, preventing common data type and enum errors.
  • Environment-Based Configuration: Securely manages API keys using a .env file, following best practices.
  • Robust and Well-Structured: Encapsulated in a clean MCP_Assistant class with clear, single-purpose methods.
  • Notebook-Friendly: Includes a compatibility check to run seamlessly in environments like Jupyter notebooks.
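The "Type-Safe" point usually means translating each tool's JSON Schema into the subset Gemini's function-calling API accepts (uppercase type names, no unsupported keywords such as `$schema` or `additionalProperties`). The helper below is an illustrative sketch of that translation, not this project's actual implementation:

```python
# Map JSON Schema type names onto Gemini's uppercase equivalents.
_TYPE_MAP = {
    "string": "STRING", "number": "NUMBER", "integer": "INTEGER",
    "boolean": "BOOLEAN", "array": "ARRAY", "object": "OBJECT",
}
# Keywords Gemini function declarations are known to accept.
_ALLOWED_KEYS = {"type", "description", "enum", "properties", "required", "items"}

def to_gemini_schema(schema: dict) -> dict:
    """Recursively strip unsupported keywords and normalize type names."""
    cleaned = {}
    for key, value in schema.items():
        if key not in _ALLOWED_KEYS:
            continue  # drop keywords the Gemini API rejects
        if key == "type":
            cleaned[key] = _TYPE_MAP.get(value, "STRING")
        elif key == "properties":
            cleaned[key] = {name: to_gemini_schema(sub) for name, sub in value.items()}
        elif key == "items":
            cleaned[key] = to_gemini_schema(value)
        else:
            cleaned[key] = value
    return cleaned
```

A conversion step like this is what prevents the "common data type and enum errors" mentioned above: MCP servers return standard JSON Schema, while Gemini expects its own stricter dialect.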

Setup and Installation

Follow these steps to get the client up and running on your local machine.

1. Prerequisites

  • Python 3.9+
  • A Google AI API Key for Gemini. You can get one from Google AI Studio.
  • A GitHub Personal Access Token (classic). You can generate one on GitHub under Settings → Developer settings → Personal access tokens. It needs the following scopes:
    • read:user
    • repo (full control of private repositories)
    • (Optional) gist, notifications for related tools.

2. Clone the Repository

If this project is in a Git repository, clone it to your local machine. Otherwise, simply create a new project directory.

git clone <your-repository-url>
cd <your-repository-directory>

3. Set Up a Virtual Environment

It is highly recommended to use a virtual environment to manage project dependencies and avoid conflicts.

# Create a virtual environment named 'venv'
python3 -m venv venv

# Activate the virtual environment
# On macOS and Linux:
source venv/bin/activate
# On Windows:
.\venv\Scripts\activate

4. Install Dependencies

Create a requirements.txt file in your project directory with the following content:

google-generativeai
httpx
python-dotenv
nest-asyncio

Then, install these libraries using pip:

pip install -r requirements.txt

5. Create the Environment File (.env)

The script requires your API keys to authenticate with Google and GitHub. Create a file named .env in the root of the project directory.

touch .env

Now, open the .env file in a text editor and add your keys in the following format. Do not add quotes around your keys.

# Your private key for the Google Gemini API
GEMINI_API_KEY=AIzaSy...

# Your GitHub Personal Access Token (classic)
GITHUB_PAT=ghp_...

Important: This .env file contains sensitive credentials and should never be committed to version control. The repository should have a .gitignore file that includes .env.
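The python-dotenv library handles this loading for you (via `load_dotenv()`). As a rough stdlib-only sketch of what it does, the reader below is deliberately minimal and skips the quoting and interpolation rules the real library supports:

```python
import os

def load_env_file(path: str = ".env") -> None:
    # Minimal .env reader: KEY=VALUE lines, '#' comments, blank lines ignored.
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables take precedence over the file.
            os.environ.setdefault(key.strip(), value.strip())
```

This also shows why quotes must be omitted in the file: a naive parser would include them in the value, and GitHub or Google would then reject the key.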

Running the Client

With the setup complete, you can run the client from your terminal with a single command:

python3 main.py

The script will execute the following steps, printing debug information and the final result to your console:

  1. Initialize a session with the GitHub MCP server.
  2. Discover all available tools.
  3. Send the pre-defined prompt to Gemini.
  4. Execute the tool chosen by Gemini.
  5. Print the JSON response from the tool call or a direct text answer from the model.

Customizing the Prompt

To test different tools or scenarios, simply modify the prompt variable inside the main() function at the bottom of the main.py script.

Example: Finding Branches

# Inside the main() function in main.py
prompt = "List all branches in the 'torvalds/linux' repository."
await assistant.execute_single_turn(prompt)

Example: Direct Answer (No Tool Use)

# Inside the main() function in main.py
prompt = "Who is the owner of the 'microsoft/vscode' repository?"
await assistant.execute_single_turn(prompt)

About

A simple MCP playground for familiarising yourself with the concept.
