Add support for locally running Ollama as an LLM backend #5

Open
appenz opened this issue Jun 4, 2024 · 0 comments
appenz commented Jun 4, 2024

Right now this tool requires OpenAI. Instead, it should be able to use a locally running Ollama server that it accesses via a local socket connection (a rough sketch of the client side follows the list below). Specifically:

  • Add Ollama as an LLM backend
  • Add a command line option to select OpenAI or Ollama
  • Add a menu toggle to switch between the two modes
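
A minimal sketch of what the Ollama path and the backend switch could look like, assuming Ollama's default local HTTP API on port 11434 and its /api/generate endpoint; the function name, flag name, and default model here are hypothetical and only illustrate the idea:

```python
# Hypothetical sketch: choosing between OpenAI and a locally running Ollama server.
# Assumes Ollama is serving its default HTTP API at http://localhost:11434;
# the --backend flag and ollama_generate() are illustrative, not existing code.
import argparse
import json
import urllib.request


def ollama_generate(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to a local Ollama server and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


def main() -> None:
    parser = argparse.ArgumentParser()
    # Hypothetical command line option to pick the LLM backend.
    parser.add_argument("--backend", choices=["openai", "ollama"], default="openai")
    parser.add_argument("prompt")
    args = parser.parse_args()

    if args.backend == "ollama":
        print(ollama_generate(args.prompt))
    else:
        print("OpenAI path omitted from this sketch")


if __name__ == "__main__":
    main()
```

The same backend choice could later be surfaced as the menu toggle mentioned above, with both options calling a common interface.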
@appenz appenz self-assigned this Jun 4, 2024
@appenz appenz added the enhancement New feature or request label Jun 4, 2024