
Conversation

@khuynh22

This pull request introduces major enhancements to the LLM Council project, most notably by adding support for running as a Model Context Protocol (MCP) server in addition to the existing web application. It also provides comprehensive documentation for both usage modes, improves developer onboarding, and updates configuration and dependency files to support these new capabilities.

The most important changes are:

MCP Server Support and Usage

  • Added MCP server support, allowing LLM Council to be used as a tool in Claude Desktop, VS Code, or any MCP-compatible client. This includes a new script entry point (llm-council-mcp) and detailed setup/configuration instructions in both the README.md and a new MCP_QUICKSTART.md guide.
  • Provided an example MCP configuration file (mcp-config.example.json) for easy integration with MCP clients.
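For reference, an MCP client configuration along the lines of mcp-config.example.json typically looks like the sketch below. The exact keys in the shipped example may differ, and the API key value is a placeholder:

```json
{
  "mcpServers": {
    "llm-council": {
      "command": "llm-council-mcp",
      "env": {
        "OPENROUTER_API_KEY": "sk-or-..."
      }
    }
  }
}
```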

Documentation and Developer Guidance

  • Added an extensive set of AI coding agent instructions in .github/copilot-instructions.md, outlining architecture, development workflow, code conventions, and project philosophy for contributors and AI agents.
  • Updated the main README.md to clearly explain both web app and MCP server modes, setup steps, and usage examples for each.

Configuration and Environment

  • Introduced .env.example with instructions for obtaining and setting the OpenRouter API key, improving onboarding for new users.
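As an illustration, a .env.example for this setup usually contains little more than the key itself; the variable name follows OpenRouter's common convention, and the shipped file may differ:

```
# Get a key at https://openrouter.ai/keys
OPENROUTER_API_KEY=your-key-here
```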

Package and Dependency Updates

  • Updated pyproject.toml to include the mcp dependency, new script entry points for both web and MCP server modes, and build configuration for packaging (sketched after this list).
  • Minor updates to frontend/package-lock.json to mark several dependencies as peer dependencies, improving package management.
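A rough sketch of the pyproject.toml changes described above; the module paths here are illustrative assumptions, not the actual ones from the diff:

```toml
[project]
dependencies = [
    "mcp",  # new: MCP server support
    # ...existing dependencies...
]

[project.scripts]
llm-council = "llm_council.web:main"             # assumed module path
llm-council-mcp = "llm_council.mcp_server:main"  # assumed module path
```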

These changes collectively make the project more flexible, easier to use in different environments, and more accessible for both end users and contributors.

@Beaulewis1977

Wow, looks great! I custom-built this for my LLM Council and it works great. A short AI-written summary:

MCP Server & Direct Provider Support
I've just rolled out two major additions to the LLM Council to make it even more powerful for agent-based workflows:

Model Context Protocol (MCP) Server: You can now use the entire Council as a "tool" inside other agents (like Claude Desktop or Cursor). Use the consult_council tool to get a high-quality, synthesized answer derived from the 3-stage debate and peer-review process (a minimal sketch of the tool's shape follows below).
Direct Provider Integration: Added native support for the OpenAI, Google (Gemini), and Anthropic APIs. You can now connect directly to these providers for lower latency and improved reliability (toggleable via config.py).
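For the curious, here is a minimal sketch of how a consult_council tool can be exposed with the official mcp Python SDK's FastMCP helper. run_council is a hypothetical stand-in for the project's actual pipeline, not code from this fork:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("llm-council")


async def run_council(question: str) -> str:
    # Hypothetical stand-in: in the real project this would fan the question
    # out to multiple models, run peer review, and synthesize a final answer.
    raise NotImplementedError


@mcp.tool()
async def consult_council(question: str) -> str:
    """Ask the LLM Council; returns the synthesized, peer-reviewed answer."""
    return await run_council(question)


if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```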

Key Highlights:

Agent-Ready: Your other agents can now tap into the council's "collective wisdom" with a single tool call.
Fast & Reliable: Direct API connections bypass middle-men for faster response times.
WSL2 Compatible: Includes a specific bridge config for Windows/WSL users.
