Model Blending (Mixture-of-Agents + LLM Routers) #505
TomLucidor started this conversation in Ideas
Ties in with semantic caching and having a request proxy sit in front of all requests.
There are three approaches to mixing multiple models (with different sources and architectures): blending/merging the models themselves, a Mixture-of-Agents pipeline where several models propose answers and one aggregates them, and an LLM router that dispatches each request to the best-suited model.
What would the key considerations be?
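The router approach could be sketched as a simple rule-based dispatcher that maps each prompt to a model name. The model identifiers and matching rules below are placeholders for illustration, not real endpoints; a production router would likely use a learned classifier instead of keyword rules.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Route:
    name: str                         # placeholder model identifier, not a real endpoint
    matches: Callable[[str], bool]    # predicate deciding whether this route applies


def build_router(routes: list[Route], default: str) -> Callable[[str], str]:
    """Return a function mapping a prompt to the first matching model name."""
    def route(prompt: str) -> str:
        for r in routes:
            if r.matches(prompt):
                return r.name
        return default
    return route


# Illustrative rules: code-like prompts go to a code model, very long prompts
# to a long-context model, everything else to a general model.
routes = [
    Route("code-model", lambda p: any(k in p.lower() for k in ("def ", "class ", "bug", "compile"))),
    Route("long-context-model", lambda p: len(p) > 2000),
]
router = build_router(routes, default="general-model")
```

Swapping the predicate list for a trained classifier (or a cheap LLM judging the request) keeps the same dispatch interface while improving routing quality.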