
OpenAIChatCompletionClient's create_stream is not compatible with structured output #5568

Open
chengyu-liu-cs opened this issue Feb 16, 2025 · 3 comments · May be fixed by #5671

Comments


chengyu-liu-cs commented Feb 16, 2025

What happened?

I was testing the example from the link @ekzhu provided in the issue. Instead of running the example as is, I ran it with model_client_stream=True, and got the error below.

"""
autogen/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1786, in validate_response_format raise TypeError( TypeError: You tried to pass a BaseModelclass tochat.completions.create(); You must use beta.chat.completions.parse() instead
"""

What did you expect to happen?

Streaming with model_client_stream=True should work together with structured output (response_format set to a Pydantic model).

How can we reproduce it (as minimally and precisely as possible)?

import asyncio
from typing import Literal

from pydantic import BaseModel
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient

# The response format for the agent as a Pydantic base model.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]

model_client = OpenAIChatCompletionClient(
    model="gemini-2.0-flash-exp",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    api_key="<your-api-key>",
    model_info={
        "vision": True,
        "function_calling": True,
        "json_output": True,
        "family": "unknown",
    },
    response_format=AgentResponse,
)
agent = AssistantAgent(
    name="Agent",
    model_client=model_client,
    tools=[],  # no tools are needed to reproduce this error
    system_message="Categorize the input as happy, sad, or neutral following the JSON format.",
    model_client_stream=True,
)

async def main() -> None:
    async for message in agent.on_messages_stream(
        [TextMessage(content="I am happy.", source="user")], CancellationToken()
    ):
        print(message, end=" ")

asyncio.run(main())
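For context, the TypeError above is raised client-side by the openai SDK before any request is sent: its validate_response_format check rejects a BaseModel class passed to chat.completions.create(). A rough pure-Python paraphrase of that check (not the SDK's actual code, which lives in openai/resources/chat/completions.py and tests against pydantic.BaseModel):

```python
# Rough paraphrase of the openai SDK's validate_response_format check
# that produces the error above. Illustrative only: the real check
# inspects pydantic.BaseModel specifically.

def validate_response_format(response_format) -> None:
    # Passing a class (e.g. a Pydantic model) is only supported by
    # beta.chat.completions.parse(), so create() rejects it up front.
    if isinstance(response_format, type):
        raise TypeError(
            "You tried to pass a `BaseModel` class to `chat.completions.create()`; "
            "You must use `beta.chat.completions.parse()` instead"
        )

# A plain dict response_format (e.g. {"type": "json_object"}) passes:
validate_response_format({"type": "json_object"})
```

This is why the error appears regardless of model or network: the request never leaves the client.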

AutoGen version

0.4.7

Which package was this bug in

AgentChat

Model used

No response

Python version

No response

Operating system

No response

Any additional info you think would be helpful for fixing this bug

No response

@ekzhu ekzhu added this to the 0.4.8-python milestone Feb 18, 2025
@ekzhu ekzhu changed the title "TypeError: You tried to pass a `BaseModel` class to `chat.completions.create()`; You must use `beta.chat.completions.parse()` instead" to "OpenAIChatCompletionClient's create_stream is not compatible with structured output" Feb 19, 2025
ekzhu (Collaborator) commented Feb 19, 2025

The create_stream method:

https://github.com/microsoft/autogen/blob/main/python/packages/autogen-ext/src/autogen_ext/models/openai/_openai_client.py#L647-L648

should be updated to use parse() instead of create() when there is a BaseModel class in the response_format field. See the implementation of create for reference:

https://github.com/microsoft/autogen/blob/main/python/packages/autogen-ext/src/autogen_ext/models/openai/_openai_client.py#L476-L531
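The dispatch ekzhu describes can be sketched as follows. This is a minimal illustration, not the actual client internals: the names pick_endpoint, create_fn, and parse_fn are hypothetical, and a stand-in class replaces pydantic.BaseModel so the sketch is self-contained.

```python
# Illustrative sketch only: route structured-output requests to the beta
# parse() endpoint, everything else to the regular create() endpoint.

class BaseModel:
    """Stand-in for pydantic.BaseModel."""

def uses_structured_output(response_format) -> bool:
    # True when response_format is a BaseModel subclass (a model *class*,
    # not an instance and not a plain dict like {"type": "json_object"}).
    return isinstance(response_format, type) and issubclass(response_format, BaseModel)

def pick_endpoint(response_format, create_fn, parse_fn):
    # A fixed create_stream would perform a check like this before
    # issuing the request, calling parse_fn for structured output.
    return parse_fn if uses_structured_output(response_format) else create_fn
```

The same check already guards the non-streaming create path referenced above; the fix is to apply it in create_stream as well.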

ekzhu (Collaborator) commented Feb 19, 2025

@chengyu-liu-cs let us know if you are interested in contributing.

ekzhu (Collaborator) commented Feb 23, 2025

@chengyu-liu-cs A patch is ready in #5671; please test.
