feat: display Google Gemini cached token stats #6860
Conversation
Extract cached token counts from Google's response metadata so they're visible in OpenCode's usage display. Gemini 2.5+ models use implicit caching (automatic, server-side), but OpenCode wasn't reading the cached token counts from Google's metadata location (usageMetadata.cachedContentTokenCount), which differs from the standard location. With this change, users can see their Gemini cache hits in the session context usage display, and cost calculations will correctly account for cached tokens.

Verified working: tested with gemini-3-flash-preview, observing cache.read values of 16K, 49K, and 107K tokens in a multi-turn conversation.

Future opportunity: for guaranteed cache hits, explicit caching could be implemented using GoogleAICacheManager + providerOptions.google.cachedContent. See: https://ai.google.dev/gemini-api/docs/caching
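For context, this is roughly the shape of the usage metadata Google returns (field names per Google's Gemini API docs; the numbers are illustrative, and the exact wrapper object depends on the AI SDK version):

```ts
// Illustrative shape only, not captured from a real response.
const usageMetadata = {
  promptTokenCount: 120_000, // total input tokens, cached ones included
  cachedContentTokenCount: 107_000, // tokens served from the implicit cache
  candidatesTokenCount: 850, // output tokens
  totalTokenCount: 120_850,
};
```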
So the Google AI SDK provider doesn't track it?
It doesn't seem to, at least locally. The before/after of this change is that I can now go back to sessions and actually see a cached token amount. Could be something upstream in Vercel's AI SDK? Transparently, I mostly vibe coded here, though I did try to be intentional. I ultimately rolled back from my initial issue: I thought it wasn't caching at all, but really it's just not reporting/recording caching.
What provider are you using? Google directly? Any plugins?
Google via API key. The only plugin is a notifier I vibe coded. None of the Antigravity or Gemini CLI auth provider plugins.
Hmm, I tested this several times and couldn't get any difference in token counting, but it could just be happenstance.
So I don't think it will change your total token count in the TUI. But this change will correctly identify cached tokens, so it impacts the price displayed in the TUI, and those cached token counts would be there if someone went back and analyzed their sessions. Previously, Gemini was showing zero cached tokens (at least it was for me locally) when you went back and looked at old sessions. It was just never recording the implicit/automatic caching.
Do you think this change will apply the fix to the Vertex AI connections, too?
IIRC, caching is generally by model, not by provider, but there's a mix of both. If you use Anthropic Claude models via Vertex, I think the current system will already cache correctly, though I didn't test this myself. If you use Gemini via Vertex, my assumption is that server-side implicit caching is already working automatically, but it likely suffers the same fate this fixes, where OpenCode doesn't record it. Not 100% sure.
feat: display Google Gemini cached token stats
Closes #6851
What
One-line fix to read cached token counts from Google's metadata so they show up in session stats.
Why
Google returns cached token counts in a different spot than Anthropic:
- Anthropic (standard location): usage.cachedInputTokens
- Google: providerMetadata.google.usageMetadata.cachedContentTokenCount

Implicit caching was already working server-side (and saving money); we just weren't displaying it.
The fix
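A minimal sketch of the extraction (the helper name and surrounding types here are illustrative, not the actual diff):

```ts
// Sketch only: fall through to Google's metadata location when the
// standard cached-token field is absent.
function getCachedTokens(
  usage: { cachedInputTokens?: number },
  providerMetadata?: Record<string, any>,
): number {
  // Google (Gemini) reports cache hits here instead of usage.cachedInputTokens.
  const googleCached =
    providerMetadata?.google?.usageMetadata?.cachedContentTokenCount;
  if (typeof googleCached === "number") return googleCached;
  return usage.cachedInputTokens ?? 0;
}
```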
Tested
Verified locally with a couple of Gemini conversations.
Future: Explicit Caching
Google has two caching modes:
- Implicit: automatic and server-side on Gemini 2.5+ models (what this PR surfaces)
- Explicit: manually managed caches with TTLs, for guaranteed cache hits
For explicit caching, we'd need:
- the @google/generative-ai dependency
- GoogleAICacheManager to create/update/delete caches with TTLs
- passing providerOptions.google.cachedContent on requests

This is a bigger lift but would give guaranteed savings. See: https://ai.google.dev/gemini-api/docs/caching
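A rough sketch of what that could look like with the AI SDK's Google provider (package and class names per the PR description and Google's caching docs; the model ID, TTL, and prompt are placeholders, and the exact providerOptions shape varies by AI SDK version):

```ts
import { GoogleAICacheManager } from "@google/generative-ai/server";
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { generateText } from "ai";

const apiKey = process.env.GOOGLE_GENERATIVE_AI_API_KEY!;
const cacheManager = new GoogleAICacheManager(apiKey);

// Placeholder for the large, reusable context worth caching.
const LARGE_CONTEXT = "...";

// Create a cache with a 5-minute TTL; `name` identifies it on later calls.
const { name: cachedContent } = await cacheManager.create({
  model: "models/gemini-1.5-flash-001",
  contents: [{ role: "user", parts: [{ text: LARGE_CONTEXT }] }],
  ttlSeconds: 60 * 5,
});

const google = createGoogleGenerativeAI({ apiKey });

// Reference the cache on subsequent requests for guaranteed hits.
const { text } = await generateText({
  model: google("gemini-1.5-flash-001"),
  providerOptions: { google: { cachedContent } },
  prompt: "Answer using the cached context.",
});
```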