[Bug]: FunctionCallingProgram ignores tool_choice parameter #14415
Comments
Hey @jordan-BAIC! I'm here to help you with any bugs, questions, or contributions you have for the repository. Let's tackle this together!

To resolve the issue where the `tool_choice` parameter is ignored, first make sure the parameter is validated. Here is the relevant code snippet for the `resolve_tool_choice` function:

```python
from typing import List, Optional, Union


def resolve_tool_choice(
    tools: Optional[List[dict]] = None, tool_choice: str = "none"
) -> Union[str, dict]:
    """Resolve tool choice.

    Check if tool_name exists in tools.
    Note that unlike in the OpenAI specification, 'auto' will ALWAYS choose the tool for you.
    Set to 'none' explicitly if you do not wish to use a tool.
    """
    valid_tool_choices = ["none", "auto"] + [t["function"]["name"] for t in tools or []]

    if tool_choice not in valid_tool_choices:
        raise ValueError(
            f"{tool_choice} is not a valid tool_choice. Must be one of {valid_tool_choices}"
        )

    return tool_choice
```

Make sure that the `FunctionCallingProgram` class actually stores and uses the resolved value:

```python
class FunctionCallingProgram:
    def __init__(self, tools: Optional[List[dict]] = None, tool_choice: str = "none"):
        self.tool_choice = resolve_tool_choice(tools, tool_choice)
        # other initialization code

    def execute(self):
        if self.tool_choice == "none":
            raise ValueError("No tool calls made.")
        # other execution code
```

Additionally, ensure that the `call_function` helper validates the tool call before dispatching it:

```python
import json
from typing import List, Tuple

from llama_index.legacy.tools import BaseTool, ToolOutput
from llama_index.legacy.llms.openai_utils import OpenAIToolCall
from llama_index.legacy.core.llms.types import ChatMessage, MessageRole


def call_function(
    tools: List[BaseTool],
    tool_call: OpenAIToolCall,
    verbose: bool = False,
) -> Tuple[ChatMessage, ToolOutput]:
    """Call a function and return the output as a string."""
    # validations to get passed mypy
    assert tool_call.id is not None
    assert tool_call.function is not None
    assert tool_call.function.name is not None
    assert tool_call.function.arguments is not None

    id_ = tool_call.id
    name = tool_call.function.name
    arguments_str = tool_call.function.arguments

    if verbose:
        print("=== Calling Function ===")
        print(f"Calling function: {name} with args: {arguments_str}")

    # get_function_by_name and call_tool_with_error_handling are helpers
    # defined alongside this function in the same module.
    tool = get_function_by_name(tools, name)
    argument_dict = json.loads(arguments_str)

    # Call tool, using the default error message.
    output = call_tool_with_error_handling(tool, argument_dict, error_message=None)

    if verbose:
        print(f"Got output: {output!s}")
        print("========================\n")

    return (
        ChatMessage(
            content=str(output),
            role=MessageRole.TOOL,
            additional_kwargs={
                "name": name,
                "tool_call_id": id_,
            },
        ),
        output,
    )
```

For asynchronous calls, you can use the `acall_function` variant:

```python
async def acall_function(
    tools: List[BaseTool], tool_call: OpenAIToolCall, verbose: bool = False
) -> Tuple[ChatMessage, ToolOutput]:
    """Call a function asynchronously and return the output as a string."""
    # validations to get passed mypy
    assert tool_call.id is not None
    assert tool_call.function is not None
    assert tool_call.function.name is not None
    assert tool_call.function.arguments is not None

    id_ = tool_call.id
    name = tool_call.function.name
    arguments_str = tool_call.function.arguments

    if verbose:
        print("=== Calling Function ===")
        print(f"Calling function: {name} with args: {arguments_str}")

    # adapt_to_async_tool wraps a synchronous tool in an async interface.
    tool = get_function_by_name(tools, name)
    async_tool = adapt_to_async_tool(tool)
    argument_dict = json.loads(arguments_str)
    output = await async_tool.acall(**argument_dict)

    if verbose:
        print(f"Got output: {output!s}")
        print("========================\n")

    return (
        ChatMessage(
            content=str(output),
            role=MessageRole.TOOL,
            additional_kwargs={
                "name": name,
                "tool_call_id": id_,
            },
        ),
        output,
    )
```

By using these functions as shown, the `tool_choice` parameter will be validated and applied rather than ignored.
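As a quick sanity check on the validation step, the snippet below re-states a minimal version of `resolve_tool_choice` (so it runs standalone) and exercises it against a hypothetical tool schema; the tool name `get_weather` is for illustration only:

```python
from typing import List, Optional, Union


def resolve_tool_choice(
    tools: Optional[List[dict]] = None, tool_choice: str = "none"
) -> Union[str, dict]:
    """Validate tool_choice against 'none', 'auto', and the registered tool names."""
    valid_tool_choices = ["none", "auto"] + [t["function"]["name"] for t in tools or []]
    if tool_choice not in valid_tool_choices:
        raise ValueError(
            f"{tool_choice} is not a valid tool_choice. Must be one of {valid_tool_choices}"
        )
    return tool_choice


# Hypothetical tool schema for illustration only.
tools = [{"function": {"name": "get_weather"}}]

print(resolve_tool_choice(tools, "auto"))         # built-in choice passes through
print(resolve_tool_choice(tools, "get_weather"))  # registered tool name passes through

try:
    resolve_tool_choice(tools, "get_stock_price")  # unknown name is rejected
except ValueError as exc:
    print("rejected:", exc)
```

This makes the failure mode concrete: if the caller passes a `tool_choice` that the provider payload never receives, the validation still succeeds here, which is why the bug goes unnoticed until the model returns zero tool calls.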
Hi, I also ran into this issue: getting "expected at least one tool call, but got 0 tool calls." while using FunctionCallingProgram to parse a response into a pydantic object.
I'm not familiar with the Claude API.

```python
program = FunctionCallingProgram.from_defaults(
    output_cls=TestResponse,
    prompt_template_str=prompt,
    verbose=True,
    llm=model,
    tool_choice={"type": "auto"},
)
```

I have tested it and it works. Is this correct?
Bug Description
The FunctionCallingProgram class takes in a tool_choice parameter but does not use it.
The use varies slightly for OpenAI and Anthropic models:
https://platform.openai.com/docs/guides/function-calling
https://docs.anthropic.com/en/docs/build-with-claude/tool-use
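Per the two linked docs, the payload that forces a specific tool has a different shape for each provider. The helpers below are a hypothetical sketch (not llama-index API) of what a provider-aware `tool_choice` would need to emit; the tool name `get_weather` is an assumption for illustration:

```python
def openai_tool_choice(name: str) -> dict:
    # OpenAI forces a specific tool with a nested "function" object.
    return {"type": "function", "function": {"name": name}}


def anthropic_tool_choice(name: str) -> dict:
    # Anthropic forces a specific tool with a flat "tool" object.
    return {"type": "tool", "name": name}


print(openai_tool_choice("get_weather"))
print(anthropic_tool_choice("get_weather"))
```

This difference is why simply forwarding the user-supplied `tool_choice` string is not enough: the value has to be translated per backend before it reaches the provider API.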
Version
0.10.37
Steps to Reproduce