
[Bug]: FunctionCallingProgram ignores tool_choice parameter #14415

Open
jordan-BAIC opened this issue Jun 26, 2024 · 3 comments
Labels
bug Something isn't working P2

Comments

@jordan-BAIC

Bug Description

The FunctionCallingProgram class takes in a tool_choice parameter but does not use it.

The usage varies slightly between OpenAI and Anthropic models:
https://platform.openai.com/docs/guides/function-calling
https://docs.anthropic.com/en/docs/build-with-claude/tool-use
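
For reference, the two providers expect differently shaped tool_choice values. The snippet below is only a sketch of the shapes documented at the links above, using a hypothetical tool name; exact accepted values may vary by API version.

# Sketch of the tool_choice shapes each provider documents (illustrative only;
# "TestResponse" is a hypothetical tool name).

# OpenAI: a string mode, or a dict naming a specific function.
openai_auto = "auto"    # model decides whether to call a tool
openai_none = "none"    # never call a tool
openai_forced = {"type": "function", "function": {"name": "TestResponse"}}  # must call this function

# Anthropic: always a dict with a "type" field.
anthropic_auto = {"type": "auto"}                            # model decides
anthropic_any = {"type": "any"}                              # must call some tool
anthropic_forced = {"type": "tool", "name": "TestResponse"}  # must call this tool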

Version

0.10.37

Steps to Reproduce


from pydantic import BaseModel, Field
from llama_index.core.program import FunctionCallingProgram
from llama_index.llms.anthropic import Anthropic

model = Anthropic("claude-3-5-sonnet-20240620")

class TestResponse(BaseModel):
    message: str = Field(description="The response message")

prompt = "Generate a song about {topic}. DO NOT USE ANY TOOLS"
program = FunctionCallingProgram.from_defaults(
    output_cls=TestResponse,
    prompt_template_str=prompt,
    verbose=True,
    llm=model,
    tool_choice={"type": "any"}
)
output = program(topic="dogs")

Relevant Logs/Tracebacks

File ~/qqqqq/.venv/lib/python3.11/site-packages/llama_index/llms/anthropic/base.py:411, in Anthropic.get_tool_calls_from_response(self, response, error_on_no_tool_call, **kwargs)
    409 if len(tool_calls) < 1:
    410     if error_on_no_tool_call:
--> 411         raise ValueError(
    412             f"Expected at least one tool call, but got {len(tool_calls)} tool calls."
    413         )
    414     else:
    415         return []

ValueError: Expected at least one tool call, but got 0 tool calls.
jordan-BAIC added the bug and triage labels on Jun 26, 2024

dosubot bot commented Jun 26, 2024

Hey @jordan-BAIC! I'm here to help you with any bugs, questions, or contributions you have for the repository. Let's tackle this together!

To resolve the issue where the tool_choice parameter is ignored in the FunctionCallingProgram class, leading to a ValueError when no tool calls are made, you should ensure that the tool_choice parameter is properly processed using the resolve_tool_choice function. This function checks if the tool name exists in the provided tools and validates the tool_choice.

Here is the relevant code snippet for the resolve_tool_choice function:

from typing import Union, List, Optional

def resolve_tool_choice(
    tools: Optional[List[dict]] = None, tool_choice: str = "none"
) -> Union[str, dict]:
    """Resolve tool choice.

    Check if tool_name exists in tools.
    Note that unlike in OpenAI specification, 'auto' will ALWAYS choose the tool for you.
    Set to 'none' explicitly if do not wish to use tool.
    """
    valid_tool_choices = ["none", "auto"] + [t["function"]["name"] for t in tools or []]

    if tool_choice not in valid_tool_choices:
        raise ValueError(
            f"{tool_choice} is not a valid tool_choice. Must be one of {valid_tool_choices}"
        )

    return tool_choice
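
For example, with the function as written above, only "none", "auto", or a name that appears in the tools list is accepted (the tool schema below is illustrative):

tools = [{"function": {"name": "TestResponse"}}]

resolve_tool_choice(tools, "auto")             # returns "auto"
resolve_tool_choice(tools, "TestResponse")     # returns "TestResponse"
resolve_tool_choice(tools, "some_other_tool")  # raises ValueError: not a valid tool_choice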

Make sure that the FunctionCallingProgram class uses this function to handle the tool_choice parameter correctly. For example:

class FunctionCallingProgram:
    def __init__(self, tools: Optional[List[dict]] = None, tool_choice: str = "none"):
        self.tool_choice = resolve_tool_choice(tools, tool_choice)
        # other initialization code

    def execute(self):
        if self.tool_choice == "none":
            raise ValueError("No tool calls made.")
        # other execution code

Additionally, ensure that the FunctionCallingProgram class properly integrates with the call_function and acall_function methods to handle tool calls. Here is an example of how these methods can be used:

import json
from typing import List, Tuple

from llama_index.legacy.tools import BaseTool, ToolOutput
from llama_index.legacy.llms.openai_utils import OpenAIToolCall
from llama_index.legacy.core.llms.types import ChatMessage, MessageRole

# Note: get_function_by_name, call_tool_with_error_handling, and adapt_to_async_tool
# are llama_index helper utilities assumed to be in scope; they are not shown here.

def call_function(
    tools: List[BaseTool],
    tool_call: OpenAIToolCall,
    verbose: bool = False,
) -> Tuple[ChatMessage, ToolOutput]:
    """Call a function and return the output as a string."""
    # validations to get passed mypy
    assert tool_call.id is not None
    assert tool_call.function is not None
    assert tool_call.function.name is not None
    assert tool_call.function.arguments is not None

    id_ = tool_call.id
    function_call = tool_call.function
    name = tool_call.function.name
    arguments_str = tool_call.function.arguments
    if verbose:
        print("=== Calling Function ===")
        print(f"Calling function: {name} with args: {arguments_str}")
    tool = get_function_by_name(tools, name)
    argument_dict = json.loads(arguments_str)

    # Call tool
    # Use default error message
    output = call_tool_with_error_handling(tool, argument_dict, error_message=None)
    if verbose:
        print(f"Got output: {output!s}")
        print("========================\n")
    return (
        ChatMessage(
            content=str(output),
            role=MessageRole.TOOL,
            additional_kwargs={
                "name": name,
                "tool_call_id": id_,
            },
        ),
        output,
    )

For asynchronous calls, you can use the acall_function method:

async def acall_function(
    tools: List[BaseTool], tool_call: OpenAIToolCall, verbose: bool = False
) -> Tuple[ChatMessage, ToolOutput]:
    """Call a function and return the output as a string."""
    # validations to get passed mypy
    assert tool_call.id is not None
    assert tool_call.function is not None
    assert tool_call.function.name is not None
    assert tool_call.function.arguments is not None

    id_ = tool_call.id
    function_call = tool_call.function
    name = tool_call.function.name
    arguments_str = tool_call.function.arguments
    if verbose:
        print("=== Calling Function ===")
        print(f"Calling function: {name} with args: {arguments_str}")
    tool = get_function_by_name(tools, name)
    async_tool = adapt_to_async_tool(tool)
    argument_dict = json.loads(arguments_str)
    output = await async_tool.acall(**argument_dict)
    if verbose:
        print(f"Got output: {output!s}")
        print("========================\n")
    return (
        ChatMessage(
            content=str(output),
            role=MessageRole.TOOL,
            additional_kwargs={
                "name": name,
                "tool_call_id": id_,
            },
        ),
        output,
    )

By using the resolve_tool_choice function and ensuring proper integration with the call_function and acall_function methods, you can ensure that the tool_choice parameter is properly interpreted, which should prevent the ValueError when no tool calls are made [1][2][3][4].
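
As a rough illustration of that integration, here is a minimal sketch of a program that actually forwards tool_choice into the underlying chat call. The SketchFunctionCallingProgram class, the chat_with_tools keyword arguments, and the plumbing around get_tool_calls_from_response are assumptions for illustration, not the real llama_index implementation:

class SketchFunctionCallingProgram:
    """Hypothetical program that forwards tool_choice instead of dropping it."""

    def __init__(self, llm, tool, prompt_template_str, tool_choice=None):
        self._llm = llm
        self._tool = tool
        self._prompt = prompt_template_str
        self._tool_choice = tool_choice  # per this issue, the real class currently ignores this

    def __call__(self, **prompt_kwargs):
        # The missing step: pass tool_choice through to the provider call.
        response = self._llm.chat_with_tools(
            tools=[self._tool],
            user_msg=self._prompt.format(**prompt_kwargs),
            tool_choice=self._tool_choice,  # assumed keyword; exact plumbing differs per LLM
        )
        # This is where the traceback above originates when the model makes no tool call.
        return self._llm.get_tool_calls_from_response(
            response, error_on_no_tool_call=True
        )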

To continue talking to Dosu, mention @dosu.

@sabir012

Hi, I also ran into this issue. I'm getting "Expected at least one tool call, but got 0 tool calls." while using FunctionCallingProgram to parse a response into a Pydantic object.

logan-markewich added the P2 label and removed the triage label on Jun 27, 2024
@houtaroy
Contributor

houtaroy commented Jul 5, 2024

I'm not familiar with the Claude API.
After reading the documentation, I think that when you want "DO NOT USE ANY TOOLS", you should remove the tool_choice param or set it to {"type": "auto"}, like this:

program = FunctionCallingProgram.from_defaults(
    output_cls=TestResponse,
    prompt_template_str=prompt,
    verbose=True,
    llm=model,
    tool_choice={"type": "auto"}
)

I have tested it and it works. Is this correct?
