[FEATURE] Ollama private models: users would also like support for connecting image understanding models. #1843
Comments
Just connect it via Ollama or Xinference.
Then please file the request as general support for image models, rather than for one specific model.
Updated.
baixin513 changed the title from "[FEATURE] Private models: users would also like support for connecting image models." to "[FEATURE] Ollama private models: users would also like support for connecting image understanding models." on Dec 17, 2024
Version v1.9 has been released.
MaxKB Version
v1.9.0
Please describe your requirement or improvement suggestion
A potential customer mentioned today that they would like to integrate LLaVA (Large Language and Vision Assistant), a multimodal large model released jointly by researchers from the University of Wisconsin-Madison, Microsoft Research, and Columbia University.
The customer has deployed the model on Ollama and wants to configure it in MaxKB as an image model, but the private-model providers currently offer no "image model" option.
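For context, a minimal sketch of what querying an Ollama-hosted LLaVA model with an image looks like, assuming a local Ollama instance on its default port with a model pulled as `llava`; the image path and prompt are placeholders:

```python
import base64

import requests

# Hypothetical sketch: send an image to a LLaVA model served by a local
# Ollama instance via its /api/chat endpoint. This is the kind of call an
# "image understanding model" integration in MaxKB would need to make.
OLLAMA_URL = "http://localhost:11434/api/chat"

# Read the image and base64-encode it, as Ollama expects in "images".
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "llava",          # placeholder model name
    "stream": False,           # return a single JSON response
    "messages": [
        {
            "role": "user",
            "content": "Describe this image.",
            "images": [image_b64],
        }
    ],
}

response = requests.post(OLLAMA_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["message"]["content"])
```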
Please describe your proposed implementation
The customer wants to connect this model as the image model.
Additional information
No response