
Conversation

@ivanvanderbyl

This PR adds support for Anthropic Claude models as an alternative LLM provider in the ADK. The implementation supports both the direct Anthropic API and Anthropic models via Google Cloud Vertex AI.

Closes #225

Features

  • New model/anthropic package implementing the model.LLM interface
  • Support for all current Claude models (Opus 4.x, Sonnet 4.x, Haiku 4.5, etc.)
  • Streaming and non-streaming response handling
  • Tool/function calling with proper schema conversion
  • Extended thinking support (mapped to genai.Part with Thought=true)
  • Multimodal inputs (text, images, PDF documents)
  • System instructions support
  • Citations and web search tool handling

Vertex AI Integration

Models can be used via Vertex AI by setting Variant: anthropic.VariantVertexAI or the ANTHROPIC_USE_VERTEX=1 environment variable.

Testing Plan

  • Added comprehensive unit tests
  • Added example applications demonstrating various workflow patterns:
    • Basic usage (examples/anthropic/)
    • Sequential, parallel, and loop workflow agents (examples/workflowagents/*-anthropic/)

Manual E2E Testing

godotenv -f .env go run ./examples/anthropic/
godotenv -f .env go run ./examples/workflowagents/sequential-anthropic/

@google-cla

google-cla bot commented Dec 3, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@gemini-code-assist

Summary of Changes

Hello @ivanvanderbyl, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the capabilities of the ADK by integrating Anthropic Claude models. It provides developers with a powerful new option for building agents, supporting a wide array of advanced features such as multimodal input, sophisticated tool interactions, and flexible deployment through either Anthropic's native API or Google Cloud's Vertex AI. This integration aims to enhance the versatility and performance of agents developed using the ADK framework.

Highlights

  • Anthropic Claude Integration: Adds comprehensive support for Anthropic Claude models as an alternative LLM provider within the ADK, enabling developers to leverage Claude's capabilities.
  • Flexible Deployment Options: Supports both direct Anthropic API access and integration via Google Cloud Vertex AI, allowing configuration through code or environment variables for versatile deployment.
  • Rich Feature Set: Includes streaming and non-streaming responses, robust tool/function calling with proper schema conversion, extended thinking support (mapped to genai.Part with Thought=true), multimodal inputs (text, images, PDFs), system instructions, and handling of citations and web search tools.
  • New Examples and Tests: Introduces a new model/anthropic package with extensive unit tests and example applications demonstrating basic usage, as well as sequential, parallel, and loop workflow agents powered by Anthropic models.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces comprehensive support for Anthropic Claude models, including both direct API and Vertex AI backends. The changes include a new model/anthropic package, a suite of internal converters for translating between ADK and Anthropic types, and several new example applications demonstrating various use cases. The implementation is well-structured and covers key features like streaming, tool calling, and multimodal inputs.

My review focuses on enhancing test coverage, improving code readability, and increasing robustness. I've identified a couple of unimplemented test cases that should be filled out to ensure correctness. Additionally, I've suggested a minor readability improvement in one of the examples and pointed out two high-severity issues where JSON unmarshalling errors are ignored, which could lead to silent failures in tool-use scenarios. Overall, this is a great addition, and addressing these points will make it even more solid.

Comment on lines 156 to 159
t.Run(tt.name, func(t *testing.T) {
// Import anthropic to use actual StopReason values
// For now, test is more of a documentation of expected behaviour
})

Severity: medium

This test case is currently unimplemented. To ensure the StopReasonToFinishReason function behaves as expected, this test should be fully implemented to call the function and assert its output against the expected values.

			t.Run(tt.name, func(t *testing.T) {
				got := converters.StopReasonToFinishReason(anthropic.StopReason(tt.stop))
				if got != tt.want {
					t.Errorf("StopReasonToFinishReason(%q) = %v, want %v", tt.stop, got, tt.want)
				}
			})

Signed-off-by: Ivan Vanderbyl <[email protected]>
Previously, if FunctionResponse.ID was empty, the code would silently
fall back to using FunctionResponse.Name as the tool_use_id. This breaks
correlation when the same function is called multiple times in one turn,
as all results would share the same ID.

Now returns an error if ID is missing, ensuring proper tool call/result
matching as required by the Anthropic API.

Replace manual env save/restore with t.Setenv() which automatically
restores the original value when the test completes.

Replace context.Background() with t.Context() which provides a context
that's cancelled when the test ends.
@git-hulk

git-hulk commented Dec 4, 2025

This looks like a duplicate of PR #233.

@ivanvanderbyl

ivanvanderbyl commented Dec 4, 2025

@git-hulk Thanks for flagging this. I hadn't seen your PR when I started working on this, so apologies for the overlap.

I've been digging into the tool calling flow in your PR and noticed a few edge cases that might trip things up:

Role constraints: Anthropic requires tool_use blocks in assistant messages and tool_result blocks in user messages. Right now the role comes straight from content.Role, so if upstream has the wrong role you'll hit API validation errors. Might be worth checking for FunctionCall/FunctionResponse parts and overriding the role accordingly.
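A minimal sketch of that role override, using simplified stand-in types rather than the real genai structs:

```go
package main

import "fmt"

// Part is a stand-in for genai.Part, reduced to the fields relevant here;
// the real types carry structured FunctionCall/FunctionResponse values.
type Part struct {
	FunctionCall     string // non-empty if this part is a tool call
	FunctionResponse string // non-empty if this part is a tool result
	Text             string
}

type Content struct {
	Role  string
	Parts []Part
}

// effectiveRole forces tool_use parts into assistant messages and
// tool_result parts into user messages, regardless of the upstream role,
// so the request passes Anthropic's role validation.
func effectiveRole(c Content) string {
	for _, p := range c.Parts {
		if p.FunctionCall != "" {
			return "assistant"
		}
		if p.FunctionResponse != "" {
			return "user"
		}
	}
	return c.Role
}

func main() {
	// Upstream mislabels a tool call as a user turn; the override corrects it.
	c := Content{Role: "user", Parts: []Part{{FunctionCall: "get_weather"}}}
	fmt.Println(effectiveRole(c)) // prints assistant
}
```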

Message alternation: Related to the above, Anthropic needs strictly alternating user/assistant turns. Consecutive contents with the same role (pretty common after tool calls) need to be merged into a single message.
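The merge step can be sketched like this; `Message` is a simplified stand-in for an Anthropic API message, not the SDK type:

```go
package main

import "fmt"

// Message is a simplified stand-in for an Anthropic message: a role plus
// an ordered list of content blocks.
type Message struct {
	Role   string
	Blocks []string
}

// mergeAlternating collapses consecutive same-role messages into one, so
// the resulting sequence strictly alternates user/assistant turns as the
// Anthropic API requires.
func mergeAlternating(msgs []Message) []Message {
	var out []Message
	for _, m := range msgs {
		if n := len(out); n > 0 && out[n-1].Role == m.Role {
			out[n-1].Blocks = append(out[n-1].Blocks, m.Blocks...)
			continue
		}
		out = append(out, Message{Role: m.Role, Blocks: append([]string(nil), m.Blocks...)})
	}
	return out
}

func main() {
	msgs := []Message{
		{Role: "user", Blocks: []string{"question"}},
		{Role: "assistant", Blocks: []string{"tool_use"}},
		{Role: "user", Blocks: []string{"tool_result A"}},
		{Role: "user", Blocks: []string{"tool_result B"}}, // common after parallel tool calls
	}
	fmt.Println(len(mergeAlternating(msgs))) // prints 3
}
```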

Tool result content: The stringifyFunctionResponse heuristic that picks result or output fields can drop data unexpectedly. JSON-marshalling the whole Response map would be safer. Also worth validating that FunctionResponse.ID is present since Anthropic needs it to correlate results back to the originating tool_use.
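A sketch of the safer approach: marshal the whole map and reject a missing ID up front. The function name is illustrative, not the PR's actual helper:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// toolResult serializes the entire response map, rather than cherry-picking
// a "result" or "output" field, so no data is silently dropped. It also
// rejects an empty tool_use ID, which Anthropic needs to correlate the
// result back to the originating call.
func toolResult(id string, resp map[string]any) (string, error) {
	if id == "" {
		return "", errors.New("FunctionResponse.ID is required to correlate tool results")
	}
	b, err := json.Marshal(resp)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	out, _ := toolResult("toolu_01", map[string]any{"result": "ok", "extra": 42})
	fmt.Println(out) // both fields survive in the serialized result
}
```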

ToolChoice: Minor one: setting ToolChoice to "auto" when tools are present might override user intent if they wanted different behaviour.

ivanvanderbyl and others added 2 commits December 5, 2025 09:54
- Remove TestNewModel_DirectAPI (trivial getter test)
- Remove TestGetVariant (just testing stdlib ParseBool)
- Consolidate config tests into table-driven TestNewModel_ConfigBehavior
- Consolidate Vertex missing config tests into TestNewModel_VertexAI_MissingConfig
@git-hulk

git-hulk commented Dec 5, 2025

@ivanvanderbyl Many thanks for your tests and confirmation.

> Role constraints: Anthropic requires tool_use blocks in assistant messages and tool_result blocks in user messages. Right now the role comes straight from content.Role, so if upstream has the wrong role you'll hit API validation errors. Might be worth checking for FunctionCall/FunctionResponse parts and overriding the role accordingly.

I think this would be nice to have, since we cannot assume the upstream always returns the correct role.

> Tool result content: The stringifyFunctionResponse heuristic that picks result or output fields can drop data unexpectedly. JSON-marshalling the whole Response map would be safer. Also worth validating that FunctionResponse.ID is present since Anthropic needs it to correlate results back to the originating tool_use.

Good suggestion, will improve it later.

> Message alternation: Related to the above, Anthropic needs strictly alternating user/assistant turns. Consecutive contents with the same role (pretty common after tool calls) need to be merged into a single message.

> ToolChoice: Minor one: setting ToolChoice to "auto" when tools are present might override user intent if they wanted different behaviour.

For those two points, I just kept the same behavior as adk-python. We might add a check for when this can happen.


Development

Successfully merging this pull request may close these issues.

When is the plan to support the Claude model?
