Have you read the custom model provider docs, including the 'Common issues' section? (Model provider docs)
Have you searched for related issues? Others may have faced similar issues.
Yes, I have reviewed all the relevant documentation.
Describe the question
When using a custom model provider, the agent generates a response without calling a tool if an output type is specified (1st run). However, when I remove the output type, the model correctly calls the intended tool (2nd run). I also attempted to enforce tool usage by setting tool_use_behavior to 'required' and specifying a named tool (3rd run), but this did not resolve the issue.
With the OpenAI model, the agent behaves as expected.
Debug information
Agents SDK version: v0.0.6
Python version: 3.12.9
Model provider: Ollama
Model name: qwq:32b (supports tool calling)
Repro steps
from __future__ import annotations

import asyncio
import os
from dataclasses import dataclass

import mlflow
from openai import AsyncOpenAI

from agents import (
    Agent,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
    function_tool,
    set_tracing_disabled,
)

BASE_URL = os.getenv("EXAMPLE_BASE_URL")
API_KEY = os.getenv("EXAMPLE_API_KEY")
MODEL_NAME = os.getenv("EXAMPLE_MODEL_NAME")

if not BASE_URL or not API_KEY or not MODEL_NAME:
    raise ValueError(
        "Please set EXAMPLE_BASE_URL, EXAMPLE_API_KEY, EXAMPLE_MODEL_NAME via env var or code."
    )

client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)


class CustomModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(
            model=model_name or MODEL_NAME, openai_client=client
        )


CUSTOM_MODEL_PROVIDER = CustomModelProvider()


@dataclass
class Weather:
    city: str
    date: str
    weather: str
    temperature: int | None = None


@function_tool
def get_weather(city: str, date: str):
    print(f"[debug] getting weather for {city} on {date}")
    return f"The weather in {city} on {date} is sunny with a high of 75°F."


async def main():
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        tools=[get_weather],
        output_type=Weather,
        # tool_use_behavior="required",
        # tool_use_behavior="get_weather",
    )

    # This will use the custom model provider
    result = await Runner.run(
        agent,
        "What's the weather in Tokyo on September 3rd 2021?",
        run_config=RunConfig(model_provider=CUSTOM_MODEL_PROVIDER),
    )
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
Expected behavior
The agent should call the intended tool and generate a response in the specified output format, just as it does with the OpenAI model.
@anakin-05 unfortunately this looks like an issue with the custom model you're using. The SDK correctly passes through the tool-choice parameters, but the model is ignoring them.
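For reference, here is a minimal sketch (plain dicts, hypothetical values) of the shape of the Chat Completions request the SDK forwards when a named tool is forced. A provider that honors the OpenAI tool-calling spec should respond with a `get_weather` tool call rather than free-form text:

```python
# Sketch of the Chat Completions payload with a forced named tool.
# Values are illustrative, taken from the repro above.
payload = {
    "model": "qwq:32b",
    "messages": [
        {"role": "user", "content": "What's the weather in Tokyo on September 3rd 2021?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"},
                        "date": {"type": "string"},
                    },
                    "required": ["city", "date"],
                },
            },
        }
    ],
    # Forcing this specific tool; a compliant backend must call it.
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
print(payload["tool_choice"]["function"]["name"])
```

If the model still answers in prose with this payload sent directly to the Ollama endpoint, the problem is on the provider side rather than in the SDK.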