feat(GoogleGemini): Added CountTokens #331
Conversation
fix: fixed usages
Walkthrough

The recent updates focus on refining functionality and improving accuracy in token counting, usage tracking, and model selection for the Google and OpenAI providers. Key changes include updating the Google_GenerativeAI package version, enhancing token counting in GoogleChatModel, adjusting usage tracking in both GoogleChatModel and OpenAiChatModel, and updating the model used in Gemini15ProModel.
Sequence Diagram(s) (Beta)

```mermaid
sequenceDiagram
    participant User
    participant GoogleChatModel
    participant GoogleAPI
    participant OpenAiChatModel
    User ->> GoogleChatModel: CountTokens(messages)
    GoogleChatModel ->> GoogleAPI: Send messages for token count
    GoogleAPI -->> GoogleChatModel: Return token count
    GoogleChatModel -->> User: Return token count
    User ->> GoogleChatModel: SendMessage
    GoogleChatModel ->> GoogleAPI: Process message
    GoogleAPI -->> GoogleChatModel: Return processed message
    GoogleChatModel ->> GoogleChatModel: Update usage tracking
    GoogleChatModel -->> User: Return processed message
    User ->> OpenAiChatModel: SendMessage
    OpenAiChatModel ->> OpenAIAPI: Process message
    OpenAIAPI -->> OpenAiChatModel: Return processed message
    OpenAiChatModel ->> OpenAiChatModel: Update usage2 tracking
    OpenAiChatModel ->> OpenAiChatModel: Add usage2 to usage
    OpenAiChatModel -->> User: Return processed message
```
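The flow above can be sketched from the caller's side. This is a hypothetical sketch assembled from the names in the diagram; the actual method signatures in the LangChain .NET providers may differ:

```csharp
// Hypothetical caller-side sketch of the sequence above.
// `provider`, `messages`, and the method names are assumptions
// drawn from the diagram, not a verified API surface.
var model = new GoogleChatModel(provider, id: "gemini-1.5-pro");

// CountTokens: the model forwards the messages to the Google API
// and returns the token count to the caller.
var tokenCount = await model.CountTokensAsync(messages);

// SendMessage: the model processes the message via the Google API
// and updates its usage tracking before returning the response.
var response = await model.GenerateAsync(messages);
Console.WriteLine($"{tokenCount} prompt tokens, took {response.Usage.Time}");
```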
Actionable comments posted: 1
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (5)
- src/Directory.Packages.props (1 hunks)
- src/Providers/Google/src/GoogleChatModel.Tokens.cs (1 hunks)
- src/Providers/Google/src/GoogleChatModel.cs (3 hunks)
- src/Providers/Google/src/Predefined/GeminiModels.cs (1 hunks)
- src/Providers/OpenAI/src/Chat/OpenAiChatModel.cs (1 hunks)
Files skipped from review due to trivial changes (1)
- src/Directory.Packages.props
Additional comments not posted (4)
src/Providers/Google/src/GoogleChatModel.Tokens.cs (2)
12-14: LGTM! The method now acts as a wrapper to maintain backward compatibility.
17-22: LGTM! Proper use of ConfigureAwait(false) to avoid deadlocks in library code.
src/Providers/Google/src/Predefined/GeminiModels.cs (1)
27-27: Model upgrade in Gemini15ProModel looks good. Please verify the impact on resource usage due to increased parameters.
src/Providers/OpenAI/src/Chat/OpenAiChatModel.cs (1)
269-275: Renaming of the usage variable to usage2 in the GenerateAsync method is correctly implemented. Please verify integration with the billing system to ensure accurate tracking.
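The two GoogleChatModel.Tokens.cs comments (a backward-compatible wrapper, plus ConfigureAwait(false) in library code) can be illustrated together. This is a minimal sketch, not the file's actual contents; `Api`, `ToRequest`, and `Message` are hypothetical placeholders:

```csharp
// Hypothetical sketch of the pattern praised in the review;
// the real GoogleChatModel.Tokens.cs may use different names.
public partial class GoogleChatModel
{
    // Synchronous wrapper kept so existing callers keep compiling
    // (backward compatibility).
    public int CountTokens(IReadOnlyCollection<Message> messages)
    {
        return CountTokensAsync(messages).GetAwaiter().GetResult();
    }

    public async Task<int> CountTokensAsync(
        IReadOnlyCollection<Message> messages,
        CancellationToken cancellationToken = default)
    {
        // ConfigureAwait(false) tells the awaiter not to resume on the
        // captured synchronization context. In library code this avoids
        // deadlocks when a caller blocks on the task from a UI or
        // legacy ASP.NET context.
        var response = await Api.CountTokensAsync(ToRequest(messages), cancellationToken)
            .ConfigureAwait(false);
        return response.TotalTokens;
    }
}
```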
```csharp
var usage = Usage.Empty;

if (usedSettings.UseStreaming == true)
{
    var message = await StreamCompletionAsync(messages, cancellationToken).ConfigureAwait(false);
    messages.Add(message);
    usage += Usage.Empty with
    {
        Time = watch.Elapsed
    };
```
Changes to usage tracking in the GenerateAsync method are correctly implemented. Consider adding comments to explain the logic behind the usage calculations for future maintainability.
+ // Initialize usage tracking
+ // Update usage with elapsed time
+ // Add usage to provider
Also applies to: 153-160, 193-199
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
Original:

```csharp
var usage = Usage.Empty;
if (usedSettings.UseStreaming == true)
{
    var message = await StreamCompletionAsync(messages, cancellationToken).ConfigureAwait(false);
    messages.Add(message);
    usage += Usage.Empty with
    {
        Time = watch.Elapsed
    };
```

Suggested:

```csharp
var usage = Usage.Empty;
// Initialize usage tracking
if (usedSettings.UseStreaming == true)
{
    var message = await StreamCompletionAsync(messages, cancellationToken).ConfigureAwait(false);
    messages.Add(message);
    usage += Usage.Empty with
    {
        // Update usage with elapsed time
        Time = watch.Elapsed
    };
```
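The `usage += Usage.Empty with { Time = watch.Elapsed }` pattern relies on Usage being a C# record with an addition operator. A minimal self-contained sketch, assuming an illustrative Usage type (the real type in the codebase has more fields):

```csharp
using System;

// Illustrative stand-in for the library's Usage type.
public record Usage(int InputTokens = 0, int OutputTokens = 0, TimeSpan Time = default)
{
    public static Usage Empty { get; } = new();

    // Component-wise addition lets call sites accumulate per-request usage.
    public static Usage operator +(Usage a, Usage b) => new(
        a.InputTokens + b.InputTokens,
        a.OutputTokens + b.OutputTokens,
        a.Time + b.Time);
}

public static class Demo
{
    public static void Main()
    {
        var usage = Usage.Empty;
        // `with` copies the record, overriding only the named members,
        // so only Time contributes here; token counts stay zero.
        usage += Usage.Empty with { Time = TimeSpan.FromMilliseconds(120) };
        usage += new Usage(InputTokens: 10, OutputTokens: 25);
        Console.WriteLine(usage);
    }
}
```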
feat(GoogleGemini): Added CountTokens
fix: fixed usages to correctly return the request usages.
Summary by CodeRabbit

New Features
- Added a CountTokens method to the Google Gemini provider.

Enhancements
- Updated Gemini15ProModel to use Gemini15Pro for improved performance.
- Improved usage tracking in GoogleChatModel and OpenAiChatModel for more accurate time and resource management.

Dependencies
- Updated the Google_GenerativeAI package from version 1.0.0 to 1.0.1 for better stability and new features.