Labels
models ([Component] Issues related to model support)
Description
Describe the bug
`google.adk.models.LiteLLM.generate_content_async` ignores the model specified by the `llm_request: LLMRequest` argument and always uses `self.model`. This prevents scenarios where, for example, an agent callback decides to use a specific model and updates `llm_request.model`.
To Reproduce
Steps to reproduce the behavior:
- Declare an `LLMAgent`, specifying a `LiteLLM` as the model, passing in a specific model name, e.g. `LLMAgent(..., model=LiteLLM(model="openai/gpt5"), ...)`
- In an agent callback, for example `before_model_callback`, update `llm_request.model` to a different value, e.g. `llm_request.model = "openai/gpt-nano"`
- Invoke the agent
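The override in the second step can be sketched as follows. `before_model_callback` and `llm_request.model` come from the repro above; the stand-in `LlmRequest` dataclass is only there to keep the snippet self-contained, and is not the actual ADK class definition:

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for google.adk's request object, just enough to show the callback.
@dataclass
class LlmRequest:
    model: Optional[str] = None

def before_model_callback(callback_context, llm_request: LlmRequest) -> None:
    # Route this particular call to a different model. The reported bug is
    # that LiteLLM ignores this override and keeps the model it was built with.
    llm_request.model = "openai/gpt-nano"

req = LlmRequest(model="openai/gpt5")
before_model_callback(None, req)
print(req.model)  # the request now carries the overridden model name
```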
Expected behavior
`LiteLLM` should use the model provided by the `llm_request`. Instead, it is fixed to the model it was initialized with.
Model Information:
- Are you using LiteLLM: Yes
Related line of code: `adk-python/src/google/adk/models/lite_llm.py`, line 821 in 29968d4: `"model": self.model,`
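A minimal sketch of the expected behavior follows. This is an assumption about the fix, not the actual `lite_llm.py` code: prefer the request's model when set, falling back to the instance's model otherwise. `LiteLlmSketch` and `resolve_model` are hypothetical names used only for illustration:

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for google.adk's request object, for illustration only.
@dataclass
class LlmRequest:
    model: Optional[str] = None

class LiteLlmSketch:
    """Toy stand-in for LiteLLM's model-selection logic."""

    def __init__(self, model: str):
        self.model = model

    def resolve_model(self, llm_request: LlmRequest) -> str:
        # Buggy behavior (line 821): always returns self.model.
        # Expected behavior, sketched here: honor llm_request.model when set.
        return llm_request.model or self.model

llm = LiteLlmSketch("openai/gpt5")
print(llm.resolve_model(LlmRequest()))                   # falls back to the init model
print(llm.resolve_model(LlmRequest("openai/gpt-nano")))  # honors the callback's override
```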