I believe this project has huge potential if it offered an option to run fully locally and privately. For example, adding LM Studio integration would allow using any downloaded LLM or VLM, such as Phi-3 Vision, and integrating xVASynth or XTTS would cover local TTS.
Thanks for the consideration. Most of these UIs are actually using llama.cpp under the hood, just like Ollama. LM Studio's UI, though, is very user friendly and can pull any GGUF model and quant variant directly from a Hugging Face URL. LM Studio also has a local server feature that your project could connect to; by default it listens at http://localhost:1234/v1
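For reference, here is a minimal sketch of what connecting to that endpoint could look like, assuming the project can talk to an OpenAI-compatible API (which is what LM Studio's local server exposes). The model name below is a placeholder for whichever GGUF model is loaded in LM Studio:

```python
# Minimal sketch: query LM Studio's local OpenAI-compatible server.
# Assumes the `openai` Python package is installed and LM Studio's
# server is running on its default address (http://localhost:1234/v1).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local server
    api_key="lm-studio",                  # any non-empty string works locally
)

response = client.chat.completions.create(
    # Placeholder: use the model identifier shown in LM Studio for the loaded GGUF
    model="phi-3-vision",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Describe what you can do when running fully locally."},
    ],
)

print(response.choices[0].message.content)
```

Because the server speaks the OpenAI API format, the project wouldn't need any LM Studio-specific code; swapping the base URL would be enough to route requests to the local model instead of a cloud provider.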