AgentChat now supports streaming of tokens via ModelClientStreamingChunkEvent. This PR is to track progress on supporting that in the AutoGen Studio UI.
What
Verify declarative support for streaming chunks in AGS
Update the backend to handle ModelClientStreamingChunkEvent (do not save it)
Update the frontend UI to appropriately display ModelClientStreamingChunkEvent
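The backend task above (forward streaming chunks to the UI without persisting them) can be sketched as follows. This is a minimal, self-contained illustration, not actual AGS code: `ChunkEvent` and `TextEvent` are hypothetical stand-ins for `ModelClientStreamingChunkEvent` and a complete message such as `TextMessage`, and `handle_event` shows one way to buffer chunks for display while saving only final messages.

```python
from dataclasses import dataclass, field
from typing import List, Union


# Hypothetical stand-in for ModelClientStreamingChunkEvent.
@dataclass
class ChunkEvent:
    content: str


# Hypothetical stand-in for a complete message (e.g. TextMessage).
@dataclass
class TextEvent:
    content: str


@dataclass
class SessionState:
    saved: List[TextEvent] = field(default_factory=list)  # persisted messages
    ui_buffer: str = ""  # partial text streamed to the frontend


def handle_event(state: SessionState, event: Union[ChunkEvent, TextEvent]) -> None:
    """Forward streaming chunks to the UI buffer but persist only complete messages."""
    if isinstance(event, ChunkEvent):
        # Display incrementally; do not save chunk events.
        state.ui_buffer += event.content
    else:
        # Save the final message and clear the streamed partial text,
        # since the complete message supersedes it.
        state.saved.append(event)
        state.ui_buffer = ""
```

The key design choice this illustrates is that chunk events are transient UI state: they never reach storage, and the final message replaces whatever partial text was on screen.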
from pydantic import BaseModel

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient


# Define a tool that searches the web for information.
async def web_search(query: str) -> str:
    """Find information on the web"""
    return "AutoGen is a programming framework for building multi-agent applications."


class AgentResponse(BaseModel):
    content: str


# Create an agent that uses the OpenAI GPT-4o model.
model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    response_format=AgentResponse,
)

streaming_assistant = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[web_search],
    system_message="Use tools to solve tasks.",
    model_client_stream=True,  # Enable token streaming.
)

async for message in streaming_assistant.on_messages_stream(  # type: ignore
    [TextMessage(content="Find information on AutoGen", source="user")],
    cancellation_token=CancellationToken(),
):
    print(message)
Running the example above currently raises:

TypeError: You tried to pass a BaseModel class to chat.completions.create(); You must use beta.chat.completions.parse() instead