Support visualization of model streaming chunks in AutoGen Studio #5627

Open
Tracked by #4006
victordibia opened this issue Feb 20, 2025 · 3 comments · May be fixed by #5659
Labels: proj-studio Related to AutoGen Studio.


victordibia commented Feb 20, 2025

AgentChat now supports streaming of tokens via ModelClientStreamingChunkEvent. This issue tracks progress on supporting that in the AutoGen Studio UI.

What

  • Verify declarative support for streaming chunks in AGS
  • Update the backend to handle ModelClientStreamingChunkEvent (do not save it)
  • Update the frontend UI to appropriately display ModelClientStreamingChunkEvent
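For the backend item above, a minimal sketch of filtering transient chunk events out of a message list before persisting it. This is an illustration, not the actual AGS implementation; the `ModelClientStreamingChunkEvent` and `TextMessage` classes below are simplified stand-ins for the real AgentChat message types (which live in `autogen_agentchat.messages`):

```python
from dataclasses import dataclass


# Simplified stand-ins for the AgentChat message types (assumption:
# the real types carry more fields, but only the class identity
# matters for this filtering sketch).
@dataclass
class ModelClientStreamingChunkEvent:
    content: str


@dataclass
class TextMessage:
    content: str
    source: str


def persistable(messages: list) -> list:
    """Drop transient streaming chunk events; keep messages worth saving.

    Chunk events are forwarded to the UI for live rendering but are not
    written to the database, since the final TextMessage already contains
    the full assembled content.
    """
    return [m for m in messages if not isinstance(m, ModelClientStreamingChunkEvent)]
```

The frontend would do the inverse: render chunk events incrementally as they arrive, then replace the accumulated text with the final message.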
@husseinmozannar
Contributor

Curious how to do this for tool calls or structured output?

@victordibia
Collaborator Author

Good question.

  • Currently, tool calls do not seem to yield any chunk events. This makes sense.
  • I just tried structured output and I appear to be hitting a bug.

Likely that @ekzhu is already aware of this.

from pydantic import BaseModel

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient


# Define a tool that searches the web for information.
async def web_search(query: str) -> str:
    """Find information on the web"""
    return "AutoGen is a programming framework for building multi-agent applications."


class AgentResponse(BaseModel):
    content: str


# Create an agent that uses the OpenAI GPT-4o model with structured output.
model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    response_format=AgentResponse,
)
streaming_assistant = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[web_search],
    system_message="Use tools to solve tasks.",
    model_client_stream=True,
)

async for message in streaming_assistant.on_messages_stream(  # type: ignore
    [TextMessage(content="Find information on AutoGen", source="user")],
    cancellation_token=CancellationToken(),
):
    print(message)

TypeError: You tried to pass a BaseModel class to chat.completions.create(); You must use beta.chat.completions.parse() instead

@jackgerrits
Member

I believe that is this bug: #5568

@victordibia victordibia linked a pull request Feb 22, 2025 that will close this issue