Hi there, I would like to thank @jhc13 again for this amazing tool, really nice job.
As you know, some of the best models like CogVLM2 require a lot of VRAM (there is no 4-bit version).
For people who don't have the necessary resources to run these models locally, is there a way to use TagGUI on a cloud service that offers high-VRAM GPUs? I heard that creating a RunPod template for TagGUI was unfortunately really hard.
Thanks

Replies: 3 comments
- For VLMs that run on llama.cpp, you might have luck with Ollama. CogVLM2 is unfortunately not supported: ollama/ollama#1930 (a Python sketch follows the replies below)
- TagGUI requires a graphical interface to run. You can rent a cloud server and install a desktop environment on it (example).
- thanks
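For anyone exploring the Ollama route mentioned above, here is a minimal sketch of captioning an image with a llama.cpp-backed vision model through the `ollama` Python client. The model name `llava` and the image path are placeholders, and it assumes the Ollama server is running with the model already pulled:

```python
# Minimal sketch: caption one image via Ollama's Python client.
# Assumptions: `pip install ollama`, the Ollama server is running,
# and a vision model has been pulled, e.g. `ollama pull llava`.
import ollama

response = ollama.chat(
    model="llava",  # placeholder: any Ollama vision model
    messages=[
        {
            "role": "user",
            "content": "Describe this image in one sentence.",
            "images": ["photo.jpg"],  # placeholder path to a local image
        }
    ],
)
print(response["message"]["content"])
```

Note that this only covers captioning outside TagGUI, and it does not make unsupported models like CogVLM2 usable, since Ollama can only serve models that llama.cpp supports.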