fix(LiteLlm): add fallback user message to prevent empty content error with include_contents='none' #3255
Conversation
…messages empty due to include_contents='none'
Summary of Changes
Hello @ishanrajsingh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request resolves an empty-content error that occurs when LiteLlm is used with include_contents='none', by adding a fallback user message.
Response from ADK Triaging Agent

Hello @ishanrajsingh, thank you for creating this PR! To help reviewers review your PR more efficiently, could you please provide the following:

Thanks!
Code Review
This pull request addresses an issue where the LiteLlm integration would error when include_contents='none' due to an empty content payload. The fix adds a fallback user message to ensure a non-empty content array is always sent to the LLM provider. I have identified a potential issue with the placement of the fallback logic that could lead to unexpected behavior.
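For context, a minimal sketch of the fallback idea (the `messages` variable and the fallback text are assumptions for illustration, not the PR's exact diff):

```python
# Sketch only: guard the provider call so an empty message list never reaches
# litellm. `messages` is the list built from llm_request.contents.
if not messages:
  messages.append({
      "role": "user",
      # Illustrative text; the PR reuses the BaseLlm fallback string so this
      # path stays consistent with the non-LiteLlm code path.
      "content": "Handle the requests as specified in the System Instruction.",
  })
```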
handle empty file content
GitHub Issue:

Testing Plan:
I verified the fix by running a scenario where an LlmAgent uses LiteLlm with include_contents='none'. The test confirms that the LLM request no longer sends empty content, preventing the BadRequestError from the LLM provider. Additionally, I ran the existing LiteLlm unit tests to ensure no regressions occur. For thoroughness, a new test can be added simulating include_contents='none' to confirm fallback content insertion.
/gemini review
Code Review
This pull request addresses a critical issue where the LiteLlm integration fails when include_contents='none' due to an empty content payload. The fix introduces a fallback user message to ensure a non-empty content array is always sent to the LLM provider. I have added a review comment to suggest a minor improvement to the fallback message.
…messages empty due to include_contents='none'
/gemini review
Code Review
This pull request addresses a critical issue where LiteLlm integration with include_contents='none' results in an empty content payload, causing a BadRequestError. The fix introduces a fallback user message when no messages are included, ensuring a non-empty content array is always sent. This review focuses on the correctness and maintainability of the implemented solution, ensuring it effectively prevents the error and adheres to best practices.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Hi, thank you for the contribution! Two comments:
…content and add tests

- Add fallback user message when messages list is empty due to include_contents='none'
- Use same fallback text as BaseLlm._maybe_append_user_content for consistency
- Add comprehensive tests covering empty contents scenarios with and without tools
- Fixes google#3242
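One possible shape of the shared-fallback approach described above (a sketch under assumed method names, not the actual diff): delegate to the base-class helper before converting contents, so the LiteLlm path and the Gemini path insert the same fallback user turn.

```python
# Sketch: reuse the BaseLlm helper so both model paths share one fallback
# string when llm_request.contents is empty (rest of the method trimmed).
async def generate_content_async(self, llm_request, stream=False):
  self._maybe_append_user_content(llm_request)  # no-op when contents exist
  # ... existing conversion of llm_request.contents into litellm messages,
  # followed by the provider call, unchanged ...
```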
Thank you @GWeale for the feedback! I've updated the PR to address both points:

All tests pass successfully (6/6). The fix ensures that when include_contents='none' leaves the messages list empty, a fallback user message is inserted so the provider never receives empty content.
align fallback content with BaseLlm._maybe_append_user_content and add tests
- Reverting the try/except around tool execution means we never call trace_tool_call when a tool fails or returns None. That drops the span data we use for debugging. Please restore a failure path that traces the call (matching the existing ADK pattern).
- trace_tool_call now rejects None events and dereferences unconditionally. The tracing helper needs to keep the optional input and guard the attributes so we can still record failed or long-running invocations.
- Deleted telemetry regression test.
- Integration tests should import via the public google.adk surface, not src.google.
- The LiteLlm tests are currently exercising mocks rather than real payloads because MagicMock.__getitem__ isn't wired correctly. Please use actual dicts (or set the mock's return values properly) so the tests hit the serialization logic and stay within our testing conventions.
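To illustrate the last point (the payload shape and names here are assumptions, not the PR's test code): a bare MagicMock answers any key lookup with another mock, so the parsing logic is never exercised, whereas a plain dict forces the test through the real serialization path.

```python
from unittest.mock import AsyncMock, MagicMock

# Too permissive: response["choices"][0]["message"] just yields more mocks,
# so assertions can pass without ever touching the parsing code.
loose_response = MagicMock()

# Closer to a real payload: a plain dict in the shape the provider returns,
# so the code under test must actually index into it.
real_shape_response = {
    "choices": [{
        "message": {"role": "assistant", "content": "ok"},
        "finish_reason": "stop",
    }]
}

mock_acompletion = AsyncMock(return_value=real_shape_response)
```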
Hi @GWeale, thank you for your detailed feedback and patience. I apologize for the issues with the previous version of this PR. I've completely restructured the implementation and addressed all of your concerns. The PR now contains only the necessary changes, with proper test coverage using real dict responses for accurate serialization testing: #3312. Thank you again for your guidance!
This fix addresses an issue in the LiteLlm integration where setting include_contents='none' causes the content payload sent to the LLM provider to be empty, resulting in a BadRequestError. The patch adds a minimal fallback user message in the generate_content_async method when no messages are included, ensuring a non-empty content array is always sent. This prevents errors from providers that require non-empty input and improves compatibility when using LiteLlm with the Agent Development Kit.
GitHub Issue:
This PR addresses the bug reported in issue #3242 titled "LiteLlm + include_contents='none' results in empty content sent to the provider -> Error". The issue is already created and linked here: #3242
Testing Plan:
I verified the fix by running a scenario where an LlmAgent uses LiteLlm with include_contents='none'.
The test confirms that the LLM request no longer sends empty content, preventing the BadRequestError from the LLM provider.
Additionally, I ran existing LiteLlm unit tests to ensure no regressions occur.
For thoroughness, a new test can be added simulating include_contents='none' to confirm fallback content insertion.
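A hedged sketch of such a test (the llm_client attribute, the response shape, and the keyword names are assumptions about the ADK/litellm internals, not verbatim from this PR):

```python
from unittest.mock import AsyncMock

import pytest

from google.adk.models.lite_llm import LiteLlm
from google.adk.models.llm_request import LlmRequest


@pytest.mark.asyncio
async def test_include_contents_none_sends_fallback_user_message():
  llm = LiteLlm(model="openai/gpt-4o-mini")
  # include_contents='none' on the agent reaches the model layer as an empty
  # contents list.
  request = LlmRequest(contents=[])

  # Patch the underlying completion call with a real dict payload (per the
  # review feedback) instead of a bare MagicMock.
  llm.llm_client.acompletion = AsyncMock(return_value={
      "choices": [{
          "message": {"role": "assistant", "content": "ok"},
          "finish_reason": "stop",
      }]
  })

  async for _ in llm.generate_content_async(request, stream=False):
    break

  _, kwargs = llm.llm_client.acompletion.call_args
  assert kwargs["messages"], "provider must never receive an empty messages array"
  assert any(m.get("role") == "user" for m in kwargs["messages"])
```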