Supercharge your AI assistant with local LLM access
An MCP (Model Context Protocol) server that exposes the complete Ollama SDK as MCP tools, enabling seamless integration between your local LLM models and MCP-compatible applications like Claude Desktop and Cline.
Features • Installation • Available Tools • Configuration • Retry Behavior • Development
## Features

- **Ollama Cloud Support** - Full integration with Ollama's cloud platform
- **14 Comprehensive Tools** - Full access to Ollama's SDK functionality
- **Hot-Swap Architecture** - Automatic tool discovery with zero config
- **Type-Safe** - Built with TypeScript and Zod validation
- **High Test Coverage** - 96%+ coverage with a comprehensive test suite
- **Zero Dependencies** - Minimal footprint, maximum performance
- **Drop-in Integration** - Works with Claude Desktop, Cline, and other MCP clients
- **Web Search & Fetch** - Real-time web search and content extraction via Ollama Cloud
- **Hybrid Mode** - Use local and cloud models seamlessly in one server
This MCP server gives Claude the tools to interact with Ollama - but you'll get even more value by also installing the Ollama Skill from the Skillsforge Marketplace:
- **This MCP = The Car** - All the tools and capabilities
- **Ollama Skill = Driving Lessons** - Expert knowledge on how to use them effectively
The Ollama Skill teaches Claude:
- Best practices for model selection and configuration
- Optimal prompting strategies for different Ollama models
- When to use chat vs generate, embeddings, and other tools
- Performance optimization and troubleshooting
- Advanced features like tool calling and function support
Install both for the complete experience:
- This MCP server (tools)
- Ollama Skill (expertise)

Result: Claude doesn't just have the car - it knows how to drive!
## Installation

### Claude Desktop

Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp"]
    }
  }
}
```

Or install globally:

```bash
npm install -g ollama-mcp
```

### Cline

Add to your Cline MCP settings (`cline_mcp_settings.json`):
```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp"]
    }
  }
}
```

## Available Tools

### Model Management

| Tool | Description |
|---|---|
| `ollama_list` | List all available local models |
| `ollama_show` | Get detailed information about a specific model |
| `ollama_pull` | Download models from the Ollama library |
| `ollama_push` | Push models to the Ollama library |
| `ollama_copy` | Create a copy of an existing model |
| `ollama_delete` | Remove models from local storage |
| `ollama_create` | Create custom models from a Modelfile |
### Model Execution

| Tool | Description |
|---|---|
| `ollama_ps` | List currently running models |
| `ollama_generate` | Generate text completions |
| `ollama_chat` | Interactive chat with models (supports tools/functions) |
| `ollama_embed` | Generate embeddings for text |
### Web Tools

| Tool | Description |
|---|---|
| `ollama_web_search` | Search the web with customizable result limits (requires `OLLAMA_API_KEY`) |
| `ollama_web_fetch` | Fetch and parse web page content (requires `OLLAMA_API_KEY`) |
**Note:** Web tools require an Ollama Cloud API key. They connect to `https://ollama.com/api` for web search and fetch operations.
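For reference, the underlying HTTP call can be sketched as below; the exact endpoint path and request shape are assumptions based on the note above, not a documented contract, and the MCP tools handle all of this for you:

```typescript
// Hypothetical sketch of the request ollama_web_search makes to Ollama Cloud.
// Endpoint path and body fields are assumptions for illustration only.
const response = await fetch("https://ollama.com/api/web_search", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OLLAMA_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ query: "latest AI developments", max_results: 5 }),
});
const results = await response.json();
```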
## Configuration

| Variable | Default | Description |
|---|---|---|
| `OLLAMA_HOST` | `http://127.0.0.1:11434` | Ollama server endpoint (use `https://ollama.com` for cloud) |
| `OLLAMA_API_KEY` | - | API key for Ollama Cloud (required for web tools and cloud models) |
To point the server at a specific local Ollama instance:

```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp"],
      "env": {
        "OLLAMA_HOST": "http://localhost:11434"
      }
    }
  }
}
```

To use Ollama's cloud platform with web search and fetch capabilities:
```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp"],
      "env": {
        "OLLAMA_HOST": "https://ollama.com",
        "OLLAMA_API_KEY": "your-ollama-cloud-api-key"
      }
    }
  }
}
```

**Cloud Features:**
- Access cloud-hosted models
- Web search with `ollama_web_search` (requires API key)
- Web fetch with `ollama_web_fetch` (requires API key)
- Faster inference on cloud infrastructure
**Get your API key:** sign up at [ollama.com](https://ollama.com) to obtain one.
You can use both local and cloud models by pointing to your local Ollama instance while providing an API key:
```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp"],
      "env": {
        "OLLAMA_HOST": "http://127.0.0.1:11434",
        "OLLAMA_API_KEY": "your-ollama-cloud-api-key"
      }
    }
  }
}
```

This configuration:
- Runs local models from your Ollama instance
- Enables the cloud-only web search and fetch tools
- Best of both worlds: privacy + web connectivity
## Retry Behavior

The MCP server includes retry logic for handling transient failures when communicating with Ollama APIs.

**Web Tools** (`ollama_web_search` and `ollama_web_fetch`):
- Automatically retry on rate limit errors (HTTP 429)
- Maximum of 3 retry attempts (4 total requests including the initial one)
- Request timeout: 30 seconds per request (prevents hung connections)
- Respects the `Retry-After` header when provided by the API
- Falls back to exponential backoff with jitter when `Retry-After` is not present
The server handles the standard HTTP `Retry-After` header in two formats:

1. **Delay-seconds format** (`Retry-After: 60`) - waits exactly 60 seconds before retrying.
2. **HTTP-date format** (`Retry-After: Wed, 21 Oct 2025 07:28:00 GMT`) - calculates the delay until the specified timestamp.
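A minimal sketch of parsing both formats (the function name is illustrative, not the server's internal API):

```typescript
// Parse a Retry-After header value into a delay in milliseconds.
// Returns undefined when the value is missing or unparseable.
function parseRetryAfter(value: string | null): number | undefined {
  if (!value) return undefined;

  // Delay-seconds format, e.g. "Retry-After: 60"
  const seconds = Number(value);
  if (Number.isFinite(seconds) && seconds >= 0) return seconds * 1000;

  // HTTP-date format, e.g. "Retry-After: Wed, 21 Oct 2025 07:28:00 GMT"
  const dateMs = Date.parse(value);
  if (!Number.isNaN(dateMs)) return Math.max(0, dateMs - Date.now());

  return undefined;
}
```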
When `Retry-After` is not provided or is invalid:

- Initial delay: 1 second (default)
- Maximum delay: 10 seconds (default, configurable)
- Strategy: exponential backoff with full jitter
- Formula: `random(0, min(initialDelay × 2^attempt, maxDelay))`
Example retry delays:
- 1st retry: 0-1 seconds
- 2nd retry: 0-2 seconds
- 3rd retry: 0-4 seconds (capped at 0-10s max)
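The formula and defaults above translate directly into code; a sketch with illustrative names:

```typescript
// Full jitter: pick a uniformly random delay in
// [0, min(initialDelay * 2^attempt, maxDelay)].
function backoffDelay(attempt: number, initialDelayMs = 1000, maxDelayMs = 10_000): number {
  const ceiling = Math.min(initialDelayMs * 2 ** attempt, maxDelayMs);
  return Math.random() * ceiling;
}

// attempt 0 (1st retry): 0-1 s; attempt 1: 0-2 s; attempt 2: 0-4 s; later attempts cap at 0-10 s
```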
**Retried errors** (transient failures):
- HTTP 429 (Too Many Requests) - rate limiting
- HTTP 500 (Internal Server Error) - transient server issues
- HTTP 502 (Bad Gateway) - gateway/proxy received invalid response
- HTTP 503 (Service Unavailable) - server temporarily unable to handle request
- HTTP 504 (Gateway Timeout) - gateway/proxy did not receive timely response
**Non-retried errors** (permanent failures):
- Request timeouts (30 second limit exceeded)
- Network timeouts (no status code)
- Abort/cancel errors
- HTTP 4xx errors (except 429) - client errors requiring changes
- Other HTTP 5xx errors (501, 505, 506, 508, etc.) - configuration/implementation issues
The retry mechanism ensures robust handling of temporary API issues while respecting server-provided retry guidance and preventing excessive request rates. Transient 5xx errors (500, 502, 503, 504) are safe to retry because the POST operations used by `ollama_web_search` and `ollama_web_fetch` are idempotent. Individual requests time out after 30 seconds to prevent indefinitely hung connections.
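Putting the pieces together, the overall retry decision might look like this simplified sketch (reusing the `parseRetryAfter` and `backoffDelay` helpers sketched above; not the server's exact code):

```typescript
// Declared here for self-containment; see the sketches earlier in this section.
declare function parseRetryAfter(value: string | null): number | undefined;
declare function backoffDelay(attempt: number): number;

// Only 429 and the transient 5xx statuses listed above are retried.
function isRetryable(status: number): boolean {
  return status === 429 || [500, 502, 503, 504].includes(status);
}

async function fetchWithRetry(url: string, init: RequestInit, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    // 30-second per-request timeout (Node 17.3+); timeout/abort errors throw and are not retried.
    const response = await fetch(url, { ...init, signal: AbortSignal.timeout(30_000) });
    if (response.ok || attempt >= maxRetries || !isRetryable(response.status)) {
      return response;
    }
    // Respect Retry-After when present, otherwise fall back to full-jitter backoff.
    const delay = parseRetryAfter(response.headers.get("retry-after")) ?? backoffDelay(attempt);
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```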
MCP clients can invoke the tools directly. For example, chat:

```json
{
  "tool": "ollama_chat",
  "arguments": {
    "model": "llama3.2:latest",
    "messages": [
      { "role": "user", "content": "Explain quantum computing" }
    ]
  }
}
```

Embeddings:

```json
{
  "tool": "ollama_embed",
  "arguments": {
    "model": "nomic-embed-text",
    "input": ["Hello world", "Embeddings are great"]
  }
}
```

Web search:

```json
{
  "tool": "ollama_web_search",
  "arguments": {
    "query": "latest AI developments",
    "max_results": 5
  }
}
```

## Development

### Architecture

This server uses a hot-swap autoloader pattern (sketched below):
```
src/
├── index.ts        # Entry point (27 lines)
├── server.ts       # MCP server creation
├── autoloader.ts   # Dynamic tool discovery
└── tools/          # Tool implementations
    ├── chat.ts     # Each exports toolDefinition
    ├── generate.ts
    └── ...
```
**Key Benefits:**

- Add new tools by dropping files in `src/tools/`
- Zero server code changes required
- Each tool is independently testable
- 100% function coverage on all tools
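A minimal sketch of how such a discovery loop can work (assuming ESM dynamic imports over the compiled output; the actual `autoloader.ts` may differ):

```typescript
import { readdir } from "node:fs/promises";
import { join } from "node:path";
import { pathToFileURL } from "node:url";

// Loose stand-in for the real ToolDefinition type.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object;
  handler: (...args: unknown[]) => Promise<string>;
}

// Import every module in the tools directory and collect its
// exported toolDefinition; files without one are skipped.
async function loadTools(toolsDir: string): Promise<ToolDefinition[]> {
  const tools: ToolDefinition[] = [];
  for (const file of await readdir(toolsDir)) {
    if (!file.endsWith(".js")) continue; // compiled output only
    const mod = await import(pathToFileURL(join(toolsDir, file)).href);
    if (mod.toolDefinition) tools.push(mod.toolDefinition);
  }
  return tools;
}
```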
### Prerequisites

- Node.js v16+
- npm or pnpm
- Ollama running locally
### Building from Source

```bash
# Clone repository
git clone https://github.com/rawveg/ollama-mcp.git
cd ollama-mcp

# Install dependencies
npm install

# Build project
npm run build

# Run tests
npm test

# Run tests with coverage
npm run test:coverage
```

Current coverage:

```
Statements : 96.37%
Branches   : 84.82%
Functions  : 100%
Lines      : 96.37%
```
### Adding a New Tool

1. Create `src/tools/your-tool.ts`:

```typescript
import { ToolDefinition } from '../autoloader.js';
import { Ollama } from 'ollama';
import { ResponseFormat } from '../types.js';

export const toolDefinition: ToolDefinition = {
  name: 'ollama_your_tool',
  description: 'Your tool description',
  inputSchema: {
    type: 'object',
    properties: {
      param: { type: 'string' }
    },
    required: ['param']
  },
  handler: async (ollama, args, format) => {
    // Implementation
    return 'result';
  }
};
```

2. Create tests in `tests/tools/your-tool.test.ts`
3. Done! The autoloader discovers it automatically.
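For step 2, a minimal test might look like this (assuming a Jest-style runner; adapt to the project's actual test setup):

```typescript
import { toolDefinition } from '../../src/tools/your-tool.js';

describe('ollama_your_tool', () => {
  it('exports a valid tool definition', () => {
    expect(toolDefinition.name).toBe('ollama_your_tool');
    expect(toolDefinition.inputSchema).toHaveProperty('required');
  });

  it('returns a result from its handler', async () => {
    // Hypothetical stub; real tests would mock the Ollama SDK client.
    const fakeOllama = {} as any;
    // 'text' stands in for a ResponseFormat value.
    const result = await toolDefinition.handler(fakeOllama, { param: 'value' }, 'text');
    expect(result).toBe('result');
  });
});
```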
## Contributing

Contributions are welcome! Please follow these guidelines:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Write tests - we maintain 96%+ coverage
4. Commit with clear messages (`git commit -m 'Add amazing feature'`)
5. Push to your branch (`git push origin feature/amazing-feature`)
6. Open a Pull Request
**Requirements:**

- All new tools must export `toolDefinition`
- Maintain ≥80% test coverage
- Follow existing TypeScript patterns
- Use Zod schemas for input validation
## License

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0). See LICENSE for details.
## Related Projects

- **Skillsforge Marketplace** - Claude Code skills, including the Ollama Skill
- **Ollama** - Get up and running with large language models locally
- **Model Context Protocol** - Open standard for AI assistant integration
- **Claude Desktop** - Anthropic's desktop application
- **Cline** - VS Code AI assistant
Built with:
- Ollama SDK - Official Ollama JavaScript library
- MCP SDK - Model Context Protocol SDK
- Zod - TypeScript-first schema validation
Made with ❤️ by Tim Green