add Google VertexAI provider chat #377
Conversation
Walkthrough

The recent updates introduce Google Vertex AI support to the LangChain solution. This includes adding new projects for the Google Vertex AI provider, implementing VertexAIChatModel, and configuring project dependencies. The changes enable interaction with Google Cloud AI services and include tests for the integration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant LangChain
    participant VertexAIProvider
    participant GoogleCloudAI
    User->>LangChain: Send Chat Request
    LangChain->>VertexAIProvider: Initialize Provider
    VertexAIProvider->>GoogleCloudAI: Request Chat Response
    GoogleCloudAI->>VertexAIProvider: Return Chat Response
    VertexAIProvider-->>LangChain: Forward Chat Response
    LangChain-->>User: Deliver Chat Response
```
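Concretely, driving this flow from user code would look roughly like the sketch below. This is illustrative only: the configuration values are placeholders, and the `VertexAIConfiguration` members are inferred from the review comments later in this thread, so the exact construction API should be checked against the merged code.

```csharp
// Sketch only: placeholder values; VertexAIConfiguration members are
// inferred from this PR's review comments, not confirmed against the code.
// Requires the GOOGLE_APPLICATION_CREDENTIALS environment variable to
// point at a Google service-account JSON key file.
var config = new VertexAIConfiguration
{
    ProjectId = "my-gcp-project",
    Location = "us-central1",
    Publisher = "google",
};

var provider = new VertexAIProvider(config);
var model = new Gemini15ProModel(provider);

// LangChain -> VertexAIProvider -> Google Cloud AI, as in the diagram above.
string answer = await model.GenerateAsync("Say hello:");
```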
Actionable comments posted: 4
Outside diff range, codebase verification and nitpick comments (1)
src/Providers/Google.VertexAI/test/VertexAITest.cs (1)

6-7: Consider removing the `[Explicit]` attribute. The `[Explicit]` attribute is used to mark tests that should not be run as part of the normal test suite. Ensure this is intentional.
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (9)
- LangChain.sln (3 hunks)
- src/Directory.Packages.props (1 hunks)
- src/Providers/Google.VertexAI/src/LangChain.Providers.Google.VertexAI.csproj (1 hunks)
- src/Providers/Google.VertexAI/src/Predefined/GeminiModels.cs (1 hunks)
- src/Providers/Google.VertexAI/src/VertexAIChatModel.cs (1 hunks)
- src/Providers/Google.VertexAI/src/VertexAIConfiguration.cs (1 hunks)
- src/Providers/Google.VertexAI/src/VertexAIProvider.cs (1 hunks)
- src/Providers/Google.VertexAI/test/LangChain.Providers.Google.VertexAI.Tests.csproj (1 hunks)
- src/Providers/Google.VertexAI/test/VertexAITest.cs (1 hunks)
Files skipped from review due to trivial changes (3)
- src/Directory.Packages.props
- src/Providers/Google.VertexAI/src/LangChain.Providers.Google.VertexAI.csproj
- src/Providers/Google.VertexAI/test/LangChain.Providers.Google.VertexAI.Tests.csproj
Additional comments not posted (12)

src/Providers/Google.VertexAI/src/Predefined/GeminiModels.cs (3)

3-4: LGTM! The class `Gemini15ProModel` extends `VertexAIChatModel` and is initialized correctly with the model identifier "gemini-1.5-pro".

6-7: LGTM! The class `Gemini15FlashModel` extends `VertexAIChatModel` and is initialized correctly with the model identifier "gemini-1.5-flash".

9-10: LGTM! The class `Gemini1ProModel` extends `VertexAIChatModel` and is initialized correctly with the model identifier "gemini-1.0-pro".

src/Providers/Google.VertexAI/src/VertexAIConfiguration.cs (1)

5-12: LGTM! The `VertexAIConfiguration` class is well defined, with appropriate properties for `Location`, `Publisher`, `ProjectId`, and `GenerationConfig`.

src/Providers/Google.VertexAI/src/VertexAIProvider.cs (1)

6-17: LGTM! The `VertexAIProvider` class is well defined, with properties for `Api` and `Configuration`. The constructor correctly initializes these properties.

src/Providers/Google.VertexAI/test/VertexAITest.cs (1)

14-14: Ensure the environment variable is set. The `GOOGLE_APPLICATION_CREDENTIALS` environment variable must be set to the path of the Google credentials JSON file for the test to run successfully.

src/Providers/Google.VertexAI/src/VertexAIChatModel.cs (3)

7-12: Constructor and property initialization look good. The constructor properly initializes the `Provider` property and includes a null check.

59-66: Method `ConvertMessage` looks good. The method is straightforward and follows best practices.

68-77: Method `ConvertRole` looks good. The method is straightforward and follows best practices.

LangChain.sln (3)

363-364: Project `VertexAI` added successfully, with the appropriate GUID.

365-366: Project `LangChain.Providers.Google.VertexAI` added successfully, with the appropriate GUID.

367-368: Project `LangChain.Providers.Google.VertexAI.Tests` added successfully, with the appropriate GUID.
```csharp
private GenerateContentRequest ToPrompt(IEnumerable<Message> messages)
{
    var contents = new RepeatedField<Content>();
    foreach (var message in messages)
    {
        contents.Add(ConvertMessage(message));
    }

    return new GenerateContentRequest
    {
        Model = $"projects/{provider.Configuration.ProjectId}/locations/{provider.Configuration.Location}/publishers/{provider.Configuration.Publisher}/models/{Id}",
        Contents = { contents },
        GenerationConfig = provider.Configuration.GenerationConfig
    };
}
```
Simplify `contents` initialization using LINQ.

The `contents` declaration and population can be collapsed into a single statement using LINQ, via `RepeatedField<T>`'s collection-initializer support:

```diff
- var contents = new RepeatedField<Content>();
- foreach (var message in messages)
- {
-     contents.Add(ConvertMessage(message));
- }
+ var contents = new RepeatedField<Content> { messages.Select(ConvertMessage) };
```

Committable suggestion was skipped due to low confidence.
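As background on why a single-statement initialization can work here: to the best of my knowledge, Google.Protobuf's `RepeatedField<T>` exposes an `Add(IEnumerable<T>)` overload specifically so repeated fields can be populated from queries inside collection initializers, which is the same mechanism that makes `Contents = { contents }` compile in `ToPrompt`. A minimal sketch (reusing `messages` and `ConvertMessage` from the method under review):

```csharp
using System.Linq;
using Google.Protobuf.Collections;

// The collection initializer binds to Add(IEnumerable<Content>),
// appending every converted message in one statement.
var contents = new RepeatedField<Content> { messages.Select(ConvertMessage) };
```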
```csharp
public override async Task<ChatResponse> GenerateAsync(ChatRequest request,
    ChatSettings? settings = null,
    CancellationToken cancellationToken = default)
{
    request = request ?? throw new ArgumentNullException(nameof(request));
    var prompt = ToPrompt(request.Messages);

    var watch = Stopwatch.StartNew();
    var response = await Provider.Api.GenerateContentAsync(prompt).ConfigureAwait(false);

    var result = request.Messages.ToList();
    result.Add(response.Candidates[0].Content.Parts[0].Text.AsAiMessage());

    var usage = Usage.Empty with
    {
        Time = watch.Elapsed,
        InputTokens = response.UsageMetadata.PromptTokenCount,
        OutputTokens = response.UsageMetadata.CandidatesTokenCount
    };

    return new ChatResponse
    {
        Messages = result,
        Usage = usage,
        UsedSettings = ChatSettings.Default,
    };
}
```
Use the provided `settings` parameter instead of hardcoding `ChatSettings.Default`.

The `UsedSettings` property of the `ChatResponse` object is hardcoded to `ChatSettings.Default`; it should use the `settings` parameter when one is provided:

```diff
- UsedSettings = ChatSettings.Default,
+ UsedSettings = settings ?? ChatSettings.Default,
```
Remove the unnecessary re-assignment of `request`.

The null check assigns the `request` parameter back to itself, which is unnecessary:

```diff
- request = request ?? throw new ArgumentNullException(nameof(request));
```
```csharp
var provider = new VertexAIProvider(config);
var model = new Gemini15ProModel(provider);

string answer = await model.GenerateAsync("Generate some random name:");
```
Consider adding error handling.

Add error handling to manage potential exceptions from the `GenerateAsync` method:

```diff
- string answer = await model.GenerateAsync("Generate some random name:");
+ string answer;
+ try {
+     answer = await model.GenerateAsync("Generate some random name:");
+ } catch (Exception ex) {
+     Console.WriteLine($"Error generating response: {ex.Message}");
+     throw;
+ }
```
Summary by CodeRabbit

New Features

- Added a Google Vertex AI chat provider with the predefined models `Gemini15ProModel`, `Gemini15FlashModel`, and `Gemini1ProModel`.

Tests

- Added tests for the Google Vertex AI provider integration.