
For autogenerated Router kwargs, specifying timeout of 60-sec #501

Merged 1 commit from timeouts into main on Sep 27, 2024

Conversation

jamesbraza
Collaborator

I hit a very long request to OpenAI (seemingly ~15 minutes), and eventually hit a timeout:

Traceback (most recent call last):
  File "/path/to/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 944, in acompletion
    headers, response = await self.make_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 639, in make_openai_chat_completion_request
    raise e
  File "/path/to/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 627, in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
  File "/path/to/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 370, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/.venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 1412, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/path/to/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1821, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1515, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/path/to/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1616, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'error': {'message': 'Timed out generating response. Please try again with a shorter prompt or with `max_tokens` set to a lower value.', 'type': 'internal_error', 'param': None, 'code': 'request_timeout'}}

It seems LiteLLM has a request timeout configurable via litellm.request_timeout, but the full scope of that parameter is unclear: https://github.com/BerriAI/litellm/blob/v1.48.2/litellm/__init__.py#L271

This PR specifies a Router timeout of 60 seconds in the default Router kwargs; hopefully this resolves the issue.
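A minimal sketch of the change (the helper name and kwargs shape are hypothetical; the actual code lives in this repo's Router-construction logic). LiteLLM's Router accepts a `timeout` kwarg that bounds each model call, so adding it to the default kwargs caps otherwise multi-minute hangs:

```python
# Hypothetical sketch: build default kwargs for litellm.Router with a
# bounded per-request timeout instead of relying on litellm's global default.
DEFAULT_ROUTER_TIMEOUT = 60.0  # seconds

def default_router_kwargs(model_list: list[dict]) -> dict:
    """Return kwargs for litellm.Router with a 60-second request timeout."""
    return {
        "model_list": model_list,
        "timeout": DEFAULT_ROUTER_TIMEOUT,
    }

# Usage (assuming litellm is installed):
#   from litellm import Router
#   router = Router(**default_router_kwargs([{"model_name": "gpt-4o", ...}]))
```

With this in place, a request that stalls server-side should fail fast with a timeout error after 60 seconds rather than blocking for 15 minutes.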

@jamesbraza jamesbraza added the bug Something isn't working label Sep 27, 2024
@jamesbraza jamesbraza self-assigned this Sep 27, 2024
@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Sep 27, 2024
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Sep 27, 2024
@jamesbraza jamesbraza merged commit e39056b into main Sep 27, 2024
5 checks passed
@jamesbraza jamesbraza deleted the timeouts branch September 27, 2024 23:09