██╗ ███╗ ███╗███████╗████████╗██╗ ██╗██████╗ ██╗ ██████╗ ███╗ ███╗ ██████╗██████╗
██║ ████╗ ████║██╔════╝╚══██╔══╝██║ ██║██╔══██╗██║██╔═══██╗ ████╗ ████║██╔════╝██╔══██╗
██║ ██╔████╔██║███████╗ ██║ ██║ ██║██║ ██║██║██║ ██║ ██╔████╔██║██║ ██████╔╝
██║ ██║╚██╔╝██║╚════██║ ██║ ██║ ██║██║ ██║██║██║ ██║ ██║╚██╔╝██║██║ ██╔═══╝
███████╗██║ ╚═╝ ██║███████║ ██║ ╚██████╔╝██████╔╝██║╚██████╔╝ ██║ ╚═╝ ██║╚██████╗██║
╚══════╝╚═╝ ╚═╝╚══════╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═════╝ ╚═╝ ╚═╝ ╚═════╝╚═╝
░░░█ LM Studio MCP Bridge for Claude Code █░░░

A robust MCP (Model Context Protocol) bridge for LM Studio that lets Claude Code and Claude Desktop offload code-assistance tasks to local LLMs. Built specifically to run the gpt-oss-20b model locally and offset Claude Code token usage.

## Features
- 6 specialized tools for code assistance
- Request validation with Zod schemas and security checks
- Intelligent caching to reduce redundant LLM calls
- Rate limiting to prevent overload
- Graceful shutdown handling
- Structured logging with Winston
- TypeScript for type safety
## Available Tools

- `generate_docs` - Generate comprehensive documentation in multiple styles (technical/user/API)
- `summarize` - Create concise summaries with configurable length and format
- `extract_tags` - Extract semantic tags and keywords from code
- `analyze_code` - Perform security, performance, style, and bug analysis
- `refactor_suggestions` - Suggest code improvements for readability, performance, and maintainability
- `generate_tests` - Generate comprehensive test cases with coverage targets
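Each tool is invoked through the standard MCP `tools/call` request. A sketch of such a request is shown below; the `code` and `style` argument names are illustrative only, since the exact parameters are defined by each tool's schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "generate_docs",
    "arguments": {
      "code": "export function add(a: number, b: number) { return a + b }",
      "style": "technical"
    }
  }
}
```

In normal use, Claude Code issues these requests itself over stdio; you only interact with the tools by name.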
## Prerequisites

- Node.js 20+
- LM Studio running with a model loaded
- LM Studio API server enabled (localhost:1234)
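To confirm the last prerequisite, you can probe LM Studio's OpenAI-compatible `/v1/models` endpoint. A minimal sketch, assuming the default port; override `LM_STUDIO_URL` if yours differs:

```shell
# Probe the LM Studio API server before installing anything.
# Assumes the default endpoint; override LM_STUDIO_URL if needed.
LM_STUDIO_URL="${LM_STUDIO_URL:-http://localhost:1234}"
if curl -sf "$LM_STUDIO_URL/v1/models" > /dev/null 2>&1; then
  echo "LM Studio API reachable at $LM_STUDIO_URL"
else
  echo "LM Studio API not reachable -- enable the server in LM Studio's Developer tab" >&2
fi
```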
## Quick Start

```bash
# Run the automated setup
chmod +x setup.sh
./setup.sh
```

This will:
- Check prerequisites
- Install dependencies
- Build TypeScript
- Create configuration files
- Run health checks
- Display Claude Desktop integration instructions
## Manual Setup

```bash
# Install dependencies
npm install

# Copy environment configuration
cp .env.example .env

# Build TypeScript
npm run build

# Test connection to LM Studio
npm run health

# Start the server
npm start
```

## Configuration

```bash
# LM Studio Settings
LM_STUDIO_URL=http://localhost:1234    # LM Studio API endpoint
LM_STUDIO_TIMEOUT=30000                # Request timeout (ms)
LM_STUDIO_MAX_RETRIES=3                # Retry attempts on failure
LM_STUDIO_MODEL=openai/gpt-oss-20b     # Model to use
LMSERVER_NAME=LMStudio                 # MCP server name

# Server Settings
LOG_LEVEL=info                         # error|warn|info|debug
GRACEFUL_SHUTDOWN_TIMEOUT=5000         # Shutdown grace period (ms)

# Performance Tuning
RATE_LIMIT_MAX_REQUESTS=60             # Max requests per minute
RATE_LIMIT_MAX_CONCURRENT=10           # Max concurrent requests
CACHE_TTL=300                          # Cache duration (seconds)
```

## Claude Integration

You can enable the MCP server either per-project or user-wide. After editing the config, restart Claude Code/Claude Desktop.
### Project-scoped (recommended)

Option A: `.claude/settings.local.json` in your project

```json
{
  "mcpServers": {
    "LMStudio": {
      "type": "stdio",
      "command": "bash",
      "args": ["-lc", "./LMStudio-MCP/start.sh"],
      "env": {
        "NODE_ENV": "production",
        "LM_STUDIO_URL": "http://localhost:1234",
        "LM_STUDIO_MODEL": "openai/gpt-oss-20b"
      }
    }
  }
}
```

Option B: `.mcp.json` at the project root
```json
{
  "mcpServers": {
    "LMStudio": {
      "type": "stdio",
      "command": "bash",
      "args": ["-lc", "./LMStudio-MCP/start.sh"]
    }
  }
}
```

### User-scoped (global)
Add to the top-level `mcpServers` object in `~/.claude.json`:

```json
{
  "mcpServers": {
    "LMStudio": {
      "type": "stdio",
      "command": "bash",
      "args": ["-lc", "/Users/you/path/to/ClaudeCodeSetup/LMStudio-MCP/start.sh"],
      "env": {
        "NODE_ENV": "production",
        "LM_STUDIO_URL": "http://localhost:1234",
        "LM_STUDIO_MODEL": "openai/gpt-oss-20b"
      }
    }
  }
}
```

Note: Restart Claude Code/Claude Desktop after changes so the MCP list refreshes.
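If you use the Claude Code CLI, you can confirm the server was picked up after restarting. This assumes the `claude mcp list` subcommand of the Claude Code CLI is available; skip this step if you only use Claude Desktop:

```shell
# List MCP servers known to Claude Code; the LMStudio entry should appear.
claude mcp list 2>/dev/null || echo "claude CLI not found in PATH"
```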
## Monitoring

```bash
# Check server health
curl http://localhost:9090/health

# View logs
tail -f logs/combined.log

# Monitor errors only
tail -f logs/error.log
```

## Architecture

```
┌─────────────────────────────────────────┐
│               Claude Code               │
│              (MCP Client)               │
└────────────┬────────────────────────────┘
             │ MCP Protocol (stdio)
             ▼
┌─────────────────────────────────────────┐
│          LM Studio MCP Bridge           │
│  ┌──────────────────────────────┐       │
│  │ Request Validation Layer     │       │
│  ├──────────────────────────────┤       │
│  │ Rate Limiter & Cache         │       │
│  ├──────────────────────────────┤       │
│  │ Tool Execution Engine        │       │
│  ├──────────────────────────────┤       │
│  │ Error Handler & Retry        │       │
│  └──────────────────────────────┘       │
└────────────┬────────────────────────────┘
             │ OpenAI-compatible API
             ▼
┌─────────────────────────────────────────┐
│                LM Studio                │
│       (Local LLM - GPT/Llama/etc)       │
└─────────────────────────────────────────┘
```
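The bridge talks to LM Studio over the OpenAI-compatible `/v1/chat/completions` endpoint. A sketch of an equivalent manual request, useful for isolating LM Studio problems from bridge problems (assumes the default port and the gpt-oss-20b model loaded):

```shell
# Manually exercise the same OpenAI-compatible endpoint the bridge uses.
# Assumes LM Studio's default port 1234 and the gpt-oss-20b model loaded.
curl -sf http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "openai/gpt-oss-20b",
        "messages": [{"role": "user", "content": "Say hello in one word."}],
        "max_tokens": 16
      }' || echo "request failed: is the LM Studio server running?"
```

If this request succeeds but the bridge's tools fail, the problem is on the bridge side; check `logs/error.log`.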
## Production Deployment (PM2)

```bash
# Install PM2
npm install -g pm2

# Start with PM2
pm2 start dist/index.js --name lmstudio-mcp \
  --max-memory-restart 500M \
  --error logs/pm2-error.log \
  --out logs/pm2-out.log

# Monitor
pm2 monit
```

## License

MIT