Looks like magentic already supports running local LLMs via MAGENTIC_LITELLM_MODEL, which can point at a local server run by Ollama. Is there any plan to support users running LLMs locally on their own device, rather than only OpenAI's models?
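For reference, a minimal sketch of what that could look like today with the LiteLLM backend (assuming an Ollama server is running locally; the `MAGENTIC_BACKEND` variable and the `ollama/llama2` model string are illustrative, not confirmed by the docs here):

```python
import os

# Assumed configuration: select the LiteLLM backend and point it at a model
# served by a local Ollama instance. Only MAGENTIC_LITELLM_MODEL is taken
# from the question above; the other values are examples.
os.environ["MAGENTIC_BACKEND"] = "litellm"
os.environ["MAGENTIC_LITELLM_MODEL"] = "ollama/llama2"

from magentic import prompt


@prompt("Summarize the following text in one sentence: {text}")
def summarize(text: str) -> str: ...


print(summarize("Magentic lets you build LLM-powered functions with decorators."))
```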
This is definitely something we are interested in exploring, but it will likely not happen for a little while. It will be interesting to see how the function calling compares.
This is also definitely open for the community to explore.