Support for Local LLMs (e.g., Ollama) & Best Practices for Agents with Local Knowledge #44
Hi there,

First of all, thanks for the great work on mcp-agent! I'm exploring the possibility of using local LLMs (e.g., Ollama or other on-device models) within the mcp-agent framework. Is it currently possible to integrate a local LLM like Ollama with mcp-agent? If so, what would be the best approach to set it up?

Thanks in advance!
Replies: 1 comment
Hi there @EddyDezuraud, thank you for raising these questions and for trying out mcp-agent! Please keep the feedback coming. The good news is that I believe everything you asked about here is already possible with the library.
Yes, it is possible to use mcp-agent with Ollama. Please check out the Ollama example. Basically, you can configure the openai settings with the Ollama `base_url`, and it will work.
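As a rough illustration (the exact keys and model name below are assumptions on my part, so double-check them against the Ollama example), the relevant part of `mcp_agent.config.yaml` would look something like this:

```yaml
# mcp_agent.config.yaml -- sketch only; verify against the Ollama example in the repo
openai:
  # Point the OpenAI-compatible client at the local Ollama server
  base_url: "http://localhost:11434/v1"   # Ollama's default OpenAI-compatible endpoint
  default_model: "llama3.2"               # assumed key; use any model pulled via `ollama pull`
```

Ollama ignores API keys, but if the client insists on one, a dummy value (typically placed in `mcp_agent.secrets.yaml`) should be enough.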
Please take a look at the Streamlit RAG example, which should help get you started. Basically, you can use a vector DB through an MCP server (such as Qdrant, though there are servers for other DBs -- see the servers repo). Initialize that collection, and define that server in your MCP app config (a rough sketch is shown below). This requires a slight mindset shift, since the framework doesn't need to be monolithic anymore -- the servers provide a separation of concerns, where each is responsible for a different thing (memory, retrieval, etc.).
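For example, assuming the Qdrant MCP server (`mcp-server-qdrant`), a server entry might look roughly like the sketch below; the command, arguments, and environment variables are assumptions, so check the servers repo for the exact invocation:

```yaml
# mcp_agent.config.yaml -- sketch only; the qdrant entry is an assumption, see the servers repo
mcp:
  servers:
    qdrant:
      command: "uvx"
      args: ["mcp-server-qdrant"]
      env:
        QDRANT_URL: "http://localhost:6333"   # your Qdrant instance
        COLLECTION_NAME: "my-docs"            # the collection you initialized
```

An agent can then list `"qdrant"` in its `server_names` and call that server's tools for retrieval, while Ollama (via the openai settings above) handles generation.

Hope this helps!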