This project provides a robust, asynchronous Python client for interacting with the official GitHub Model Context Protocol (MCP) server. It uses Google's Gemini Pro model for natural language understanding and tool selection.
This script demonstrates the complete, single-turn lifecycle of an MCP interaction:
- Initialization: Securely connects to the MCP server and establishes a session.
- Tool Discovery: Fetches and understands the full list of available tools from the server.
- LLM-Powered Reasoning: Sends a user's natural language prompt to Gemini, which intelligently decides if a tool should be used.
- Tool Execution: If a tool is chosen, the script executes it with the parameters provided by the LLM.
- Results: Displays the final output, either from the tool call or as a direct textual answer from the LLM.
- Asynchronous: Built with `asyncio` and `httpx` for efficient, non-blocking network I/O.
- Type-Safe: Intelligently formats tool schemas to match the requirements of the Gemini API, preventing common data type and enum errors.
- Environment-Based Configuration: Securely manages API keys using a `.env` file, following best practices.
- Robust and Well-Structured: Encapsulated in a clean `MCP_Assistant` class with clear, single-purpose methods.
- Notebook-Friendly: Includes a compatibility check to run seamlessly in environments like Jupyter notebooks.
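The type-safe schema handling described above can be sketched as a recursive filter that keeps only the JSON Schema keys Gemini's function-calling API accepts. This is a minimal sketch, not the project's actual implementation: `clean_schema` is a hypothetical helper name, and the exact set of allowed keys is an assumption.

```python
def clean_schema(schema: dict) -> dict:
    """Recursively strip JSON Schema keys that Gemini's function-calling
    API tends to reject (e.g. "format", "additionalProperties"),
    keeping only the keys it understands. The allowed set below is an
    assumption for illustration."""
    allowed = {"type", "description", "enum", "properties", "required", "items"}
    cleaned = {k: v for k, v in schema.items() if k in allowed}
    if "properties" in cleaned:
        cleaned["properties"] = {
            name: clean_schema(prop)
            for name, prop in cleaned["properties"].items()
        }
    if "items" in cleaned:
        cleaned["items"] = clean_schema(cleaned["items"])
    return cleaned
```

Filtering rather than passing schemas through verbatim is what prevents the data-type and enum errors mentioned above when a server-provided tool schema contains keywords Gemini does not support.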
Follow these steps to get the client up and running on your local machine.
- Python 3.9+
- A Google AI API Key for Gemini. You can get one from Google AI Studio.
- A GitHub Personal Access Token (classic). You can generate one here. It needs the following scopes:
  - `read:user`
  - `repo` (full control of private repositories)
  - (Optional) `gist` and `notifications` for related tools.
If this project is in a Git repository, clone it to your local machine. Otherwise, simply create a new project directory.
git clone <your-repository-url>
cd <your-repository-directory>
It is highly recommended to use a virtual environment to manage project dependencies and avoid conflicts.
# Create a virtual environment named 'venv'
python3 -m venv venv
# Activate the virtual environment
# On macOS and Linux:
source venv/bin/activate
# On Windows:
.\venv\Scripts\activate
Create a requirements.txt file in your project directory with the following content:
google-generativeai
httpx
python-dotenv
nest-asyncio
Then, install these libraries using pip:
pip install -r requirements.txt
The script requires your API keys to authenticate with Google and GitHub. Create a file named .env in the root of the project directory.
touch .env
Now, open the .env file in a text editor and add your keys in the following format. Do not add quotes around your keys.
# Your private key for the Google Gemini API
GEMINI_API_KEY=AIzaSy...
# Your GitHub Personal Access Token (classic)
GITHUB_PAT=ghp_...
Important: This .env file contains sensitive credentials and should never be committed to version control. The repository should have a .gitignore file that includes .env.
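A minimal `.gitignore` covering the credentials file and the virtual environment might look like this (extend it as your project grows):

```
# Keep secrets and local environments out of version control
.env
venv/
```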
With the setup complete, you can run the client from your terminal with a single command:
python3 main.py
The script will execute the following steps, printing debug information and the final result to your console:
- Initialize a session with the GitHub MCP server.
- Discover all available tools.
- Send the pre-defined prompt to Gemini.
- Execute the tool chosen by Gemini.
- Print the JSON response from the tool call or a direct text answer from the model.
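The first step above, initializing a session, is a JSON-RPC 2.0 `initialize` handshake. As a rough sketch of what that request payload looks like (the client name, version string, and `protocolVersion` value are assumptions for illustration, not taken from this project's code):

```python
def build_initialize_request(request_id: int = 1) -> dict:
    """Build the JSON-RPC 2.0 envelope for an MCP "initialize" request.
    Field values below are illustrative assumptions."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # assumption: an MCP spec revision
            "capabilities": {},
            "clientInfo": {"name": "mcp-gemini-client", "version": "0.1.0"},
        },
    }
```

In the actual client this payload would be POSTed to the GitHub MCP server with `httpx`, with the `GITHUB_PAT` supplied in an authorization header.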
To test different tools or scenarios, simply modify the `prompt` variable inside the `main()` function at the bottom of the `main.py` script.
Example: Finding Branches
# Inside the main() function in main.py
prompt = "List all branches in the 'torvalds/linux' repository."
await assistant.execute_single_turn(prompt)
Example: Direct Answer (No Tool Use)
# Inside the main() function in main.py
prompt = "Who is the owner of the 'microsoft/vscode' repository?"
await assistant.execute_single_turn(prompt)
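The notebook-friendly compatibility check mentioned in the feature list can be sketched like this. Here `main()` is a stand-in for the real entry point (which would build the `MCP_Assistant` and call `execute_single_turn`); `nest_asyncio` is only needed when an event loop is already running, as in Jupyter:

```python
import asyncio

async def main() -> str:
    # Stand-in for the real main(): build MCP_Assistant, run one turn.
    return "single turn complete"

def run() -> str:
    # In notebooks an event loop is already running, which makes a bare
    # asyncio.run() fail; nest_asyncio patches asyncio so it works there
    # too. In a plain script the patch is unnecessary (and the import
    # may be absent), so an ImportError is safely ignored.
    try:
        import nest_asyncio
        nest_asyncio.apply()
    except ImportError:
        pass
    return asyncio.run(main())

print(run())
```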