Conversation

@Sameerlite (Collaborator) commented Oct 15, 2025

Title

Add Vertex AI Batch Passthrough Support with Cost Tracking

Relevant issues

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have Added testing in the tests/litellm/ directory, Adding at least 1 test is a hard requirement - see details
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🆕 New Feature

Changes

Configuration Example:

```yaml
model_list:
  - model_name: gemini-1.5-flash
    litellm_params:
      model: vertex_ai/gemini-1.5-flash
      vertex_project: your-project-id
      vertex_location: us-central1
      vertex_credentials: path/to/service-account.json
```
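
With this configuration in place, a batch prediction job could be submitted through the proxy's Vertex AI passthrough. The sketch below is illustrative only: the proxy URL, passthrough route, API key, and bucket URIs are assumptions, while the request body follows Vertex AI's `batchPredictionJobs` REST shape.

```python
# Illustrative sketch: submit a Vertex AI batch prediction job via the LiteLLM proxy.
# The base URL, route path, API key, and GCS URIs below are assumptions for this example.
import requests

resp = requests.post(
    "http://localhost:4000/vertex_ai/v1/projects/your-project-id/locations/us-central1/batchPredictionJobs",
    headers={"Authorization": "Bearer sk-1234"},  # hypothetical proxy virtual key
    json={
        "displayName": "my-batch-job",
        "model": "publishers/google/models/gemini-1.5-flash",
        "inputConfig": {
            "instancesFormat": "jsonl",
            "gcsSource": {"uris": ["gs://my-bucket/batch-input.jsonl"]},
        },
        "outputConfig": {
            "predictionsFormat": "jsonl",
            "gcsDestination": {"outputUriPrefix": "gs://my-bucket/batch-output/"},
        },
    },
)
print(resp.status_code, resp.json())
```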

lcfyi and others added 30 commits October 5, 2025 03:36
This fixes Claude's models via the Converse API, which should also fix
Claude Code.
* fix(router): update model_name_to_deployment_indices on deployment removal

When a deployment is deleted, the model_name_to_deployment_indices map
was not being updated, causing stale index references. This could lead
to incorrect routing behavior when deployments with the same model_name
were dynamically removed.

Changes:
- Update _update_deployment_indices_after_removal to maintain
  model_name_to_deployment_indices mapping
- Remove deleted indices and decrement indices greater than removed index
- Clean up empty entries when no deployments remain for a model name
- Update test to verify proper index shifting and cleanup behavior

* fix(router): remove redundant index building during initialization

Remove duplicate index building operations that were causing unnecessary
work during router initialization:

1. Removed redundant `_build_model_id_to_deployment_index_map` call in
   __init__ - `set_model_list` already builds all indices from scratch

2. Removed redundant `_build_model_name_index` call at end of
   `set_model_list` - the index is already built incrementally via
   `_create_deployment` -> `_add_model_to_list_and_index_map`

Both indices (model_id_to_deployment_index_map and
model_name_to_deployment_indices) are properly maintained as lookup
indexes through existing helper methods. This change eliminates O(N)
duplicate work during initialization without any behavioral changes.

The indices continue to be correctly synchronized with model_list on
all operations (add/remove/upsert).
Add tiered pricing and cost calculation for xai
…_block_repair

Add support for thinking blocks and redacted thinking blocks in Anthropic v1/messages API
(feat) Add voyage model integration in sagemaker
Add support for extended thinking in Anthropic's models via Bedrock's Converse API
* docs: fix doc

* docs(index.md): bump rc

* [Fix] GEMINI - CLI -  add google_routes to llm_api_routes (#15500)

* fix: add google_routes to llm_api_routes

* test: test_virtual_key_llm_api_routes_allows_google_routes

* build: bump version

* bump: version 1.78.0 → 1.78.1

* add application level encryption in SQS

* add application level encryption in SQS

---------

Co-authored-by: Krrish Dholakia <[email protected]>
Co-authored-by: Ishaan Jaff <[email protected]>
Co-authored-by: deepanshu <[email protected]>
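
The router fixes in the commit list above describe how `model_name_to_deployment_indices` is kept in sync when a deployment is removed: the removed index is dropped, higher indices are decremented, and empty entries are cleaned up. A minimal sketch of that index-shift behavior (illustrative only, not the actual `Router` implementation) is below.

```python
# Illustrative sketch of the index-shift behavior described in the router commits above.
# Not the actual LiteLLM Router code; names and structure are simplified.
from typing import Dict, List


def update_indices_after_removal(
    model_name_to_indices: Dict[str, List[int]], removed_index: int
) -> None:
    for model_name in list(model_name_to_indices.keys()):
        new_indices = [
            idx - 1 if idx > removed_index else idx  # shift indices past the removed slot
            for idx in model_name_to_indices[model_name]
            if idx != removed_index  # drop the removed deployment's index
        ]
        if new_indices:
            model_name_to_indices[model_name] = new_indices
        else:
            # no deployments remain for this model name; remove the empty entry
            del model_name_to_indices[model_name]
```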
@vercel vercel bot commented Oct 15, 2025

The latest updates on your projects.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| litellm | Ready | Preview | Comment | Oct 17, 2025 3:23am |

@ishaan-jaff (Contributor) left a comment

missing tests man

Diff context:

```python
- if not, return False
- if so, return True
"""
from litellm_enterprise.proxy.hooks.managed_files import (
```
Contributor comment:

don't change this please

Collaborator Author (@Sameerlite):

reverted the change

@ishaan-jaff (Contributor) left a comment

missing tests man -

Diff context:

```python
model_response = ModelResponse()

# Create a minimal logging object with required attributes
class MockLoggingObj:
```
Contributor comment:

please create a real LiteLLM_Logging_object()

Collaborator Author (@Sameerlite):

Updated the code
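
For reference, replacing the mock with a real logging object in the test might look roughly like the sketch below. The import path and constructor arguments are assumptions intended to illustrate the reviewer's suggestion; verify them against the litellm version in use.

```python
# Rough sketch: construct litellm's real logging object instead of MockLoggingObj.
# The import path and constructor arguments are assumed, not a verified signature.
import uuid
from datetime import datetime

from litellm.litellm_core_utils.litellm_logging import Logging as LiteLLMLogging

litellm_logging_obj = LiteLLMLogging(
    model="vertex_ai/gemini-1.5-flash",
    messages=[{"role": "user", "content": "hi"}],
    stream=False,
    call_type="completion",
    start_time=datetime.now(),
    litellm_call_id=str(uuid.uuid4()),
    function_id="test-function-id",
)
```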

@Sameerlite (Collaborator Author) replied:

> missing tests man -

added


Diff context:

```md
s/o @[Darien Kindlund](https://www.linkedin.com/in/kindlund/) for this tutorial

## Batch Passthrough
```
Contributor comment:

refactor into separate page and add a table indicating cost tracking is supported - https://docs.litellm.ai/docs/generateContent

Collaborator Author (@Sameerlite):

Fixed

@krrishdholakia (Contributor) commented:

@Sameerlite your PR fails mapped tests. Can you look into this?

(screenshot: 2025-10-17 at 1.22.23 PM)

@CLAassistant commented Oct 18, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
5 out of 6 committers have signed the CLA.

✅ lcfyi
✅ ishaan-jaff
✅ AlexsanderHamir
✅ deepanshululla
✅ Sameerlite
❌ LoadingZhang
You have signed the CLA already but the status is still pending? Let us recheck it.

@Sameerlite changed the base branch from litellm_staging_oct to litellm_sameer_oct_staging on October 20, 2025 18:18
@Sameerlite (Collaborator Author) commented:

Closing this due to many conflicts - #15744

@Sameerlite closed this Oct 20, 2025