Add support for Anthropic Claude models #388
Conversation
Added Anthropic Claude support

Signed-off-by: Ivan Vanderbyl <[email protected]>
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Summary of Changes

Hello @ivanvanderbyl, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly expands the capabilities of the ADK by integrating Anthropic Claude models. It provides developers with a powerful new option for building agents, supporting a wide array of advanced features such as multimodal input, sophisticated tool interactions, and flexible deployment through either Anthropic's native API or Google Cloud's Vertex AI. This integration aims to enhance the versatility and performance of agents developed using the ADK framework.
Code Review
This pull request introduces comprehensive support for Anthropic Claude models, including both direct API and Vertex AI backends. The changes include a new model/anthropic package, a suite of internal converters for translating between ADK and Anthropic types, and several new example applications demonstrating various use cases. The implementation is well-structured and covers key features like streaming, tool calling, and multimodal inputs.
My review focuses on enhancing test coverage, improving code readability, and increasing robustness. I've identified a couple of unimplemented test cases that should be filled out to ensure correctness. Additionally, I've suggested a minor readability improvement in one of the examples and pointed out two high-severity issues where JSON unmarshalling errors are ignored, which could lead to silent failures in tool-use scenarios. Overall, this is a great addition, and addressing these points will make it even more solid.
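For the ignored-unmarshal issue, the fix is to propagate the decode error rather than discard it. A minimal sketch of the pattern (function and field names here are illustrative, not the PR's actual converter code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseToolArgs decodes a tool_use input payload, surfacing
// json.Unmarshal failures instead of silently dropping them.
func parseToolArgs(raw []byte) (map[string]any, error) {
	var args map[string]any
	if err := json.Unmarshal(raw, &args); err != nil {
		return nil, fmt.Errorf("decoding tool_use input: %w", err)
	}
	return args, nil
}

func main() {
	// Malformed JSON (missing closing brace) is reported, not ignored.
	if _, err := parseToolArgs([]byte(`{"city": "Paris"`)); err != nil {
		fmt.Println("error:", err)
	}
	args, _ := parseToolArgs([]byte(`{"city": "Paris"}`))
	fmt.Println(args["city"]) // prints Paris
}
```

Silently ignoring the error would hand the tool an empty argument map, which fails much later and far from the cause.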
```go
t.Run(tt.name, func(t *testing.T) {
	// Import anthropic to use actual StopReason values
	// For now, test is more of a documentation of expected behaviour
})
```
This test case is currently unimplemented. To ensure the StopReasonToFinishReason function behaves as expected, this test should be fully implemented to call the function and assert its output against the expected values.
```go
t.Run(tt.name, func(t *testing.T) {
	got := converters.StopReasonToFinishReason(anthropic.StopReason(tt.stop))
	if got != tt.want {
		t.Errorf("StopReasonToFinishReason(%q) = %v, want %v", tt.stop, got, tt.want)
	}
})
```
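For reference, the table driving these subtests might look like the sketch below. The stop-reason strings are Anthropic's documented `stop_reason` values; the mapping targets and the local `stopToFinish` helper are assumptions for illustration, not the converter's actual return values:

```go
package main

import "fmt"

// stopToFinish sketches the kind of mapping the converter is
// expected to implement (targets assumed; the real function
// returns the ADK finish-reason type, not a string).
func stopToFinish(stop string) string {
	switch stop {
	case "end_turn", "stop_sequence":
		return "STOP"
	case "max_tokens":
		return "MAX_TOKENS"
	case "tool_use":
		return "STOP" // a tool call ends the turn normally
	default:
		return "OTHER"
	}
}

func main() {
	tests := []struct{ name, stop, want string }{
		{"end turn", "end_turn", "STOP"},
		{"token limit", "max_tokens", "MAX_TOKENS"},
		{"tool call", "tool_use", "STOP"},
	}
	for _, tt := range tests {
		if got := stopToFinish(tt.stop); got != tt.want {
			fmt.Printf("%s: got %s, want %s\n", tt.name, got, tt.want)
		}
	}
	fmt.Println("ok")
}
```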
Previously, if FunctionResponse.ID was empty, the code would silently fall back to using FunctionResponse.Name as the tool_use_id. This breaks correlation when the same function is called multiple times in one turn, as all results would share the same ID. Now returns an error if ID is missing, ensuring proper tool call/result matching as required by the Anthropic API.
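The fail-fast behavior described above can be sketched as follows (the `FunctionResponse` stand-in and `toolUseID` helper are illustrative names, not the PR's actual code):

```go
package main

import "fmt"

// FunctionResponse mirrors the two relevant fields of the ADK type
// (simplified for illustration).
type FunctionResponse struct {
	ID   string
	Name string
}

// toolUseID returns the ID used to correlate a tool result with its
// tool_use block. It errors on a missing ID instead of falling back
// to Name, which would collide when the same function is called
// multiple times in one turn.
func toolUseID(fr FunctionResponse) (string, error) {
	if fr.ID == "" {
		return "", fmt.Errorf("function response %q has no ID; cannot correlate with tool_use block", fr.Name)
	}
	return fr.ID, nil
}

func main() {
	if _, err := toolUseID(FunctionResponse{Name: "get_weather"}); err != nil {
		fmt.Println("error:", err)
	}
	id, _ := toolUseID(FunctionResponse{ID: "toolu_123", Name: "get_weather"})
	fmt.Println("id:", id) // prints id: toolu_123
}
```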
Replace manual env save/restore with t.Setenv() which automatically restores the original value when the test completes. Replace context.Background() with t.Context() which provides a context that's cancelled when the test ends.
This looks duplicated with PR #233.
@git-hulk Thanks for flagging this. I hadn't seen your PR when I started working on this, so apologies for the overlap. I've been digging into the tool calling flow in your PR and noticed a few edge cases that might trip things up:

- Role constraints: Anthropic requires …
- Message alternation: Related to the above, Anthropic needs strictly alternating user/assistant turns. Consecutive contents with the same role (pretty common after tool calls) need to be merged into a single message.
- Tool result content: The …
- ToolChoice: …
- Minor one: setting …
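The alternation point above can be sketched as a small merge pass; the `message` type here is a simplified stand-in, not either PR's actual converter types:

```go
package main

import "fmt"

// message is a minimal stand-in for an Anthropic API message.
type message struct {
	Role    string
	Content []string
}

// mergeConsecutive collapses adjacent messages with the same role
// into one, since the Anthropic API requires strictly alternating
// user/assistant turns.
func mergeConsecutive(msgs []message) []message {
	var out []message
	for _, m := range msgs {
		if n := len(out); n > 0 && out[n-1].Role == m.Role {
			out[n-1].Content = append(out[n-1].Content, m.Content...)
			continue
		}
		out = append(out, m)
	}
	return out
}

func main() {
	// Two tool results in a row, as happens after parallel tool calls.
	msgs := []message{
		{Role: "assistant", Content: []string{"tool_use"}},
		{Role: "user", Content: []string{"tool_result A"}},
		{Role: "user", Content: []string{"tool_result B"}},
	}
	for _, m := range mergeConsecutive(msgs) {
		fmt.Println(m.Role, len(m.Content))
	}
}
```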
- Remove TestNewModel_DirectAPI (trivial getter test)
- Remove TestGetVariant (just testing stdlib ParseBool)
- Consolidate config tests into table-driven TestNewModel_ConfigBehavior
- Consolidate Vertex missing config tests into TestNewModel_VertexAI_MissingConfig
@ivanvanderbyl Great, thanks for your tests and confirmation.

I think this would be nice to have, since we cannot assume the upstream never returns a wrong role.

Good suggestion, will improve it later.

For those two points, I just kept the same behavior as adk-python. Might add a check for when this happens.
This PR adds support for Anthropic Claude models as an alternative LLM provider in the ADK. The implementation supports both the direct Anthropic API and Anthropic models via Google Cloud Vertex AI.
Closes #225
Features
- `model/anthropic` package implementing the `model.LLM` interface
- Thought content (`genai.Part` with `Thought=true`)

Vertex AI Integration
Models can be used via Vertex AI by setting `Variant: anthropic.VariantVertexAI` or the `ANTHROPIC_USE_VERTEX=1` environment variable.

Testing Plan

- Example applications (`examples/anthropic/`)
- Workflow agent examples (`examples/workflowagents/*-anthropic/`)

Manual E2E Testing