
Custom model provider agent does not use tool calling when an output type is specified #332

Open · anakin-05 opened this issue on Mar 25, 2025 · 1 comment · Labels: bug (Something isn't working)

@anakin-05

Please read this first

  • Have you read the custom model provider docs, including the 'Common issues' section? (Model provider docs)
  • Have you searched for related issues? Others may have faced similar issues.

Yes, I have reviewed all the relevant documentation.

Describe the question

When using a custom model provider, the agent generates a response without calling a tool if an output type is specified (1st run). However, when I remove the output type, the model correctly calls the intended tool (2nd run). I also attempted to enforce tool usage by setting tool_use_behavior to "required" and by specifying a named tool (3rd run), but neither resolved the issue.

[Screenshot: output of the three runs with the custom model provider]

With the OpenAI model, the agent behaves as expected.

[Screenshot: output with the OpenAI model]
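In case the exact configuration matters: my understanding from the docs is that forcing a tool call goes through ModelSettings(tool_choice=...), while tool_use_behavior only controls what happens after a tool returns. A minimal sketch of the forced variant (assuming the current SDK API, reusing the get_weather tool and Weather type from the repro below):

from agents import Agent, ModelSettings

# Sketch: tool_choice="required" is forwarded to the chat completions
# request and asks the model to emit a tool call; tool_use_behavior only
# decides what happens after a tool result comes back.
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    tools=[get_weather],
    output_type=Weather,
    model_settings=ModelSettings(tool_choice="required"),
)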

Debug information

  • Agents SDK version: v0.0.6
  • Python version: 3.12.9
  • Model provider: Ollama (see the endpoint sketch after this list)
  • Model name: qwq:32b (supports tool calling)
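
For anyone reproducing against Ollama, the env vars point at its OpenAI-compatible endpoint. A sketch with assumed local-default values (the key is a placeholder; Ollama does not validate it):

import os

# Assumed local defaults; Ollama exposes an OpenAI-compatible API under /v1.
os.environ["EXAMPLE_BASE_URL"] = "http://localhost:11434/v1"
os.environ["EXAMPLE_API_KEY"] = "ollama"  # any non-empty string works
os.environ["EXAMPLE_MODEL_NAME"] = "qwq:32b"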

Repro steps

from __future__ import annotations

import asyncio
import os
from dataclasses import dataclass

from openai import AsyncOpenAI

from agents import (
    Agent,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
    function_tool,
    set_tracing_disabled,
)

BASE_URL = os.getenv("EXAMPLE_BASE_URL")
API_KEY = os.getenv("EXAMPLE_API_KEY")
MODEL_NAME = os.getenv("EXAMPLE_MODEL_NAME")

if not BASE_URL or not API_KEY or not MODEL_NAME:
    raise ValueError(
        "Please set EXAMPLE_BASE_URL, EXAMPLE_API_KEY, EXAMPLE_MODEL_NAME via env var or code."
    )

client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)
set_tracing_disabled(disabled=True)  # no OpenAI key here, so disable trace export


class CustomModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(
            model=model_name or MODEL_NAME, openai_client=client
        )


CUSTOM_MODEL_PROVIDER = CustomModelProvider()


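# Structured output type the agent is asked to produce via output_type.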
@dataclass
class Weather:
    city: str
    date: str
    weather: str
    temperature: int | None = None


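# Tool the model is expected to call before producing the structured output.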
@function_tool
def get_weather(city: str, date: str) -> str:
    print(f"[debug] getting weather for {city} on {date}")
    return f"The weather in {city} on {date} is sunny with a high of 75°F."


async def main():
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        tools=[get_weather],
        output_type=Weather,
        # tool_use_behavior="required",
        # tool_use_behavior="get_weather",
    )

    # This will use the custom model provider
    result = await Runner.run(
        agent,
        "What's the weather in Tokyo on September 3rd 2021?",
        run_config=RunConfig(model_provider=CUSTOM_MODEL_PROVIDER),
    )
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())

Expected behavior

The agent should call the intended tool and generate a response in the specified output format, just as it does with the OpenAI model.

@anakin-05 added the bug label on Mar 25, 2025
@rm-openai (Collaborator)

@anakin-05 unfortunately this looks like an issue with the custom model you're using? The SDK correctly passes through the params for tool choice, but the model is ignoring it.
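
One way to verify this is the SDK's debug logging, which prints the request payload so you can confirm that tools and tool_choice are present in what gets sent. A sketch, assuming the logging helper available in recent SDK versions:

from agents import enable_verbose_stdout_logging

# Prints the payloads the SDK exchanges with the model provider, so the
# outgoing request can be inspected for the tools and tool_choice params.
enable_verbose_stdout_logging()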
