Replies: 1 comment 2 replies
Based on the available documentation and issues, RAGFlow requires you to add models in the "Model providers" section and then set them as defaults in "System model settings" for both LLM (chat) and embedding purposes. However, there is no explicit documentation or issue confirming that the same model (such as Moonshot) can be selected for both chat and embedding. The fact that the embedding model dropdown shows "No data" for Moonshot, even though the provider supports both features, suggests this is a limitation or oversight in the current version of RAGFlow rather than a confirmed bug or an intended restriction.

If you want to set default models when running RAGFlow on Docker, you should update the service configuration before starting the containers.

If you need to use the same model for both chat and embedding and it does not appear as an option, this is likely a current limitation of the app. There is no documentation or issue confirming support for this scenario or describing it as a bug. If this is critical for your workflow, you may want to monitor future releases for expanded model selection support.
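As a rough sketch of what that Docker-side configuration might look like, here is a hedged YAML fragment in the style of RAGFlow's `service_conf.yaml.template`. The section and field names (`user_default_llm`, `factory`, `api_key`, `base_url`) should be verified against the template shipped with your RAGFlow version before use:

```yaml
# Illustrative only -- check docker/service_conf.yaml.template in your
# RAGFlow version for the exact section and field names.
user_default_llm:
  factory: 'Moonshot'        # provider name as listed under "Model providers"
  api_key: 'your-api-key'    # placeholder; supply your real key
  base_url: ''               # leave empty to use the provider's default endpoint
```

Note that even with a default provider configured here, a model will only be usable for a given function (chat, embedding, etc.) if RAGFlow lists it as supported for that function.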
You're absolutely right: using the same model for both chat and embedding is a common requirement, especially when users want to streamline costs and management with a single provider.
Currently, RAGFlow does allow the same model to be set for multiple functions, but only if that model is explicitly supported for each function. For example, models like Azure-OpenAI, BaiChuan, BaiduYiyan, Bedrock, Cohere, and Gemini are supported for both chat and embedding, so you can select them for both roles in the UI or configuration files. However, the Moonshot model is only listed under chat models and not under embedding models in the codebase and documentation, which is why it does not appear as a…
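To make the behavior concrete, here is a small illustrative sketch (not RAGFlow's actual code) of the per-function registration pattern described above: each function has its own list of supported providers, and a model only shows up in a dropdown if its provider appears in that function's list. The provider sets below are assumptions for illustration only:

```python
# Hypothetical per-function provider registry, mirroring the pattern
# described above. The membership of these sets is illustrative, not
# taken from RAGFlow's codebase.
SUPPORTED = {
    "chat": {"Moonshot", "Azure-OpenAI", "BaiChuan", "Cohere", "Gemini"},
    "embedding": {"Azure-OpenAI", "BaiChuan", "Cohere", "Gemini"},
}

def available_models(function: str) -> set[str]:
    """Return the providers a UI dropdown would offer for a function."""
    return SUPPORTED.get(function, set())

# Moonshot is registered for chat but not for embedding, so the
# embedding dropdown has nothing to show for it.
print("Moonshot" in available_models("chat"))       # True
print("Moonshot" in available_models("embedding"))  # False
```

Under this pattern, adding a provider to a second function is a registration change rather than a UI change, which is why the fix would need to land in a RAGFlow release rather than in user configuration.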