[DRAFT] Add OpenAI Client Error Handler #5615
base: main
Conversation
@ekzhu, can you have a look and tag the relevant people for feedback?
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
Comments suppressed due to low confidence (2)
python/packages/autogen-ext/src/autogen_ext/models/openai/_openai_client.py:420
- [nitpick] Wrapping the 'create' and 'create_stream' methods inside the init may lead to unexpected behavior if these methods are later overridden or extended. Consider applying the decorator at the method definition level so the error handler is consistently applied across all class usages.
# Wrap the methods in the error handler
python/packages/autogen-ext/tests/models/test_openai_model_client.py:1227
- [nitpick] The rate limiting test only verifies a single retry scenario with a fixed backoff delay. Consider adding a test case that triggers multiple consecutive retries to validate the exponential backoff logic more thoroughly.
@patch("asyncio.sleep", return_value=None)
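A multi-retry test along the lines suggested could look like the following sketch. The `create_with_retry` helper and its parameter names are hypothetical stand-ins for the PR's retry logic; the point is that patching `asyncio.sleep` lets the test assert the full sequence of backoff delays, not just a single retry.

```python
import asyncio
from unittest.mock import AsyncMock, patch

async def create_with_retry(call, max_retries: int = 3, base_delay: float = 1.0):
    """Hypothetical retry helper: exponential backoff on transient errors."""
    for attempt in range(max_retries + 1):
        try:
            return await call()
        except ConnectionError:
            if attempt == max_retries:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)

def test_multiple_retries_use_exponential_backoff() -> None:
    attempts = {"n": 0}

    async def flaky() -> str:
        attempts["n"] += 1
        if attempts["n"] < 4:  # fail three times, succeed on the fourth call
            raise ConnectionError("transient")
        return "ok"

    with patch("asyncio.sleep", new=AsyncMock()) as mock_sleep:
        result = asyncio.run(create_with_retry(flaky))

    assert result == "ok"
    delays = [c.args[0] for c in mock_sleep.await_args_list]
    assert delays == [1.0, 2.0, 4.0]  # delay doubles on each consecutive retry
```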
Thanks for the draft PR! It presents a good starting point.
What I think may happen is that if the exception occurs in the middle of the stream, the client will have started streaming the response, then retry, and the stream may reset without any clear warning. I'm not sure there is much we can do; maybe add a warning in the callback to make it clear that there was an exception? I think the final processing of the CreateResult will flow as expected.
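The warning idea could be sketched as below: if any chunks were already emitted before the failure, the retry replays the stream from the start and surfaces a warning so the caller knows output may be duplicated. The `StreamResetWarning` class and `create_stream_with_retry` helper are hypothetical, not part of the PR.

```python
import warnings

class StreamResetWarning(UserWarning):
    """Hypothetical warning: stream restarted after a mid-stream error."""

async def create_stream_with_retry(open_stream, max_retries: int = 1):
    """Yield chunks from open_stream(); on mid-stream failure, warn and restart."""
    for attempt in range(max_retries + 1):
        emitted = False
        try:
            async for chunk in open_stream():  # fresh stream each attempt
                emitted = True
                yield chunk
            return
        except ConnectionError:
            if attempt == max_retries:
                raise
            if emitted:
                # Caller already saw partial output; make the reset explicit.
                warnings.warn(
                    "stream reset after partial output; replaying from start",
                    StreamResetWarning,
                )
```

Downstream consumers can filter or log `StreamResetWarning` instead of silently receiving a restarted stream.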
It's all too complicated for one PR. Perhaps we can address it in follow-ups.
Why are these changes needed?
This PR introduces an error handler decorator on the OpenAI client. The implementation allows users to specify error callbacks in case the client fails. The goal is to always provide a CreateResult that the client can propagate to the agent, avoiding application termination due to the exception.
There are default implementations for the content safety error and a retry mechanism for transient errors. We still need to check how the change will propagate to AgentChat.
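The "always provide a CreateResult" goal could be sketched as follows. `CreateResult` here is a simplified stand-in for autogen's type, and `handle_content_safety_error` and `safe_create` are hypothetical names illustrating a default callback for content-safety failures.

```python
import asyncio
from dataclasses import dataclass
from typing import Awaitable, Callable

@dataclass
class CreateResult:
    """Simplified stand-in for autogen's CreateResult."""
    content: str
    finish_reason: str

def handle_content_safety_error(exc: Exception) -> CreateResult:
    """Hypothetical default handler: turn a content-safety failure into a result."""
    return CreateResult(content=f"Request blocked: {exc}", finish_reason="content_filter")

async def safe_create(
    call: Callable[[], Awaitable[CreateResult]],
    on_error: Callable[[Exception], CreateResult],
) -> CreateResult:
    try:
        return await call()
    except Exception as exc:
        # The agent always receives a CreateResult instead of an exception.
        return on_error(exc)
```

Because the callback returns a `CreateResult`, the agent loop can keep running and decide itself how to react to the blocked or failed request.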
Related issue number
Checks