LLM not Function Call #3417
Replies: 5 comments
-
If it helps, here is the documentation for Gemini describing how function calls work:
https://ai.google.dev/gemini-api/docs/function-calling?example=meeting
-
Thank you very much for your reply. I would like to know how ADK invokes a tool when the model being used does not support function calling.
I used a model without function calling in ADK, yet the agent could still call the tool, which really surprised me, so I want to understand the principle.
chaoke
-
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: Your intuition is correct! For models that do not have a native function calling feature, ADK relies on a form of "intent recognition." It prompts the model to generate a structured text response (such as JSON) that signals its intent to call a tool, which ADK then parses and executes.

Hi @Farewell-CK,

That's an excellent question! You are right that this is achieved through a process that can be described as LLM intent recognition.

The Agent Development Kit (ADK) is designed to be model-agnostic, allowing it to work with a wide variety of Large Language Models (LLMs). While some models, like Gemini, have built-in support for function calling, many others do not.

For models that lack this native capability, ADK uses a workaround. Instead of relying on a dedicated function-calling feature, ADK instructs the LLM to format its response in a specific, structured way (usually JSON) when it determines that a tool should be used. This structured response contains the name of the tool to be called and the arguments to pass to it. [1] The ADK framework then intercepts this specially formatted text, parses it to recover the tool-use request, and executes the corresponding tool with the provided arguments. [2] This is how tool use still works when ADK is paired with open or local models through wrapper integrations.

I hope this clarifies how ADK achieves its flexibility with different models!

[1] https://google.github.io/adk-docs/agents/models/
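The parse-and-dispatch side of this mechanism can be sketched in a few lines of plain Python. Note that the `TOOLS` registry, the JSON shape (`"tool"`/`"args"` keys), and the `try_dispatch` helper below are illustrative assumptions for this sketch, not ADK's actual internals:

```python
import json

# Hypothetical tool registry: tool name -> callable.
# ADK maintains its own registry; this is only a stand-in.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def try_dispatch(model_output: str):
    """Parse a structured tool-call response and execute the tool.

    Returns the tool's result, or None if the output is ordinary text
    rather than a tool-call request.
    """
    try:
        payload = json.loads(model_output)
    except json.JSONDecodeError:
        return None  # plain text, not a tool call
    name = payload.get("tool")
    if name not in TOOLS:
        return None
    return TOOLS[name](**payload.get("args", {}))

# A model without native function calling can still emit this JSON,
# because the system prompt asked it to respond this way:
print(try_dispatch('{"tool": "get_weather", "args": {"city": "Paris"}}'))
# -> Sunny in Paris
```

The key design point is that the "function call" lives entirely in the text channel: the framework, not the model API, decides whether a reply is a tool invocation or a normal answer.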
-
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: You are correct! For models that do not have native function calling capabilities, ADK uses a form of intent recognition driven by a specific prompting strategy to enable tool usage.

Hello! That's an excellent question. While ADK is optimized to work with models that have built-in function calling or tool-use features, it also includes a mechanism for models that lack this capability.

For these models, ADK employs a prompt-based strategy. The prompt instructs the model to respond with a structured format, such as JSON, that describes the tool it wants to call and the arguments to use. The ADK framework can then parse this structured text, identify the intended tool, and execute it with the provided arguments. This effectively simulates a native tool call.

You can see an example of this prompting technique in the documentation for using open and local models with ADK. [1] This approach allows developers to leverage the power of tools across a wider range of models, even those without native function calling support.
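As a rough illustration of the prompting side of this strategy (the prompt wording and the `extract_tool_call` helper below are invented for this sketch and are not ADK's actual prompt or code), the framework can prepend a system instruction describing the tools and then scan the model's free-text reply for an embedded JSON object:

```python
import json

# Hypothetical system prompt; ADK's real prompt for models without
# native function calling differs, but follows the same idea.
SYSTEM_PROMPT = """You have access to these tools:
- get_weather(city: str): returns the current weather for a city.

When you want to use a tool, reply with ONLY a JSON object of the form:
{"tool": "<tool name>", "args": {...}}
Otherwise, reply in plain text."""

def extract_tool_call(reply: str):
    """Pull a JSON tool-call object out of a model reply, if present.

    Models often wrap the JSON in extra prose, so we look for the
    outermost braces instead of parsing the whole reply.
    """
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end <= start:
        return None  # no JSON object in the reply
    try:
        call = json.loads(reply[start:end + 1])
    except json.JSONDecodeError:
        return None
    return call if "tool" in call else None

print(extract_tool_call('Sure! {"tool": "get_weather", "args": {"city": "Tokyo"}}'))
```

Because the contract is purely textual, the same agent code works whether the structured reply comes from a hosted model, an open model, or a local one.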
-
I would like to ask: when using ADK, the model I am using does not support function calling, yet it still calls tools. Is this achieved through LLM intent recognition?