This issue was already addressed in an older PR, but that PR contained a commit mistake. A new PR will be opened to keep a cleaner history.
When loading ontologies into CurateGPT, the insertion of the data into ChromaDB is frequently interrupted by a server overload on the API side:
openai.error.ServiceUnavailableError: The server is overloaded or not ready yet.
Implementing an exponential_backoff_request helper let me work around this by retrying, with a small additional sleep each time a request failed. It's not a fancy solution, but it gets the job done.
Another frequently occurring problem is an HTTP 500 error, which can also be caught this way.
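A minimal sketch of such a retry wrapper (the function name matches the helper mentioned above, but the signature, retry counts, and delays here are illustrative assumptions, not the exact code from the PR):

```python
import time


def exponential_backoff_request(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn(), retrying on transient API errors.

    Sleeps base_delay * 2**attempt seconds between attempts, so a request
    that hits a ServiceUnavailableError or an HTTP 500 is retried with
    progressively longer pauses instead of aborting the whole insertion.
    """
    for attempt in range(max_retries):
        try:
            return request_fn()
        except Exception:  # e.g. openai.error.ServiceUnavailableError, HTTP 500
            if attempt == max_retries - 1:
                raise  # give up after the final retry
            time.sleep(base_delay * 2 ** attempt)
```

The batch-insertion code would then wrap each embedding request in this helper, e.g. `exponential_backoff_request(lambda: client.embed(batch))`, so a transient overload only costs a few seconds of sleep rather than a failed batch.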
poetry run curategpt ontology index --index-fields label,definition,relationships -p stagedb -c ont_mp -m openai: sqlite:obo:mp
Configuration file exists at /Users/carlo/Library/Preferences/pypoetry, reusing this directory.
Consider moving TOML configuration files to /Users/carlo/Library/Application Support/pypoetry, as support for the legacy directory will be removed in an upcoming release.
WARNING:curate_gpt.store.chromadb_adapter:Cumulative length = 3040651, pausing ...
WARNING:curate_gpt.store.chromadb_adapter:Cumulative length = 3010451, pausing ...
ERROR:curate_gpt.store.chromadb_adapter:Failed to process batch after retries: The server is overloaded or not ready yet.
poetry run curategpt ontology index --index-fields label,definition,relationships -p stagedb -c ont_mondo -m openai: sqlite:obo:mondo
Configuration file exists at /Users/carlo/Library/Preferences/pypoetry, reusing this directory.
Consider moving TOML configuration files to /Users/carlo/Library/Application Support/pypoetry, as support for the legacy directory will be removed in an upcoming release.
ERROR:curate_gpt.store.chromadb_adapter:Failed to process batch after retries: The server had an error while processing your request. Sorry about that! {
  "error": {
    "message": "The server had an error while processing your request. Sorry about that!",
    "type": "server_error",
    "param": null,
    "code": null
  }
}
500 {'error': {'message': 'The server had an error while processing your request. Sorry about that!', 'type': 'server_error', 'param': None, 'code': None}} {'Date': 'Wed, 17 Jan 2024 11:49:08 GMT', 'Content-Type': 'application/json', 'Content-Length': '176', 'Connection': 'keep-alive', 'access-control-allow-origin': '*', 'openai-organization': 'lawrence-berkeley-national-laboratory-8', 'openai-processing-ms': '867', 'openai-version': '2020-10-01', 'strict-transport-security': 'max-age=15724800; includeSubDomains', 'x-ratelimit-limit-requests': '10000', 'x-ratelimit-limit-tokens': '10000000', 'x-ratelimit-remaining-requests': '9999', 'x-ratelimit-remaining-tokens': '9998545', 'x-ratelimit-reset-requests': '6ms', 'x-ratelimit-reset-tokens': '8ms', 'x-request-id': '1751b1c8c5e4386f901047e4380709fb', 'CF-Cache-Status': 'DYNAMIC', 'Server': 'cloudflare', 'CF-RAY': '846e5f804ecd79c3-LHR', 'alt-svc': 'h3=":443"; ma=86400'}
iQuxLE added a commit to iQuxLE/curate-gpt that referenced this issue on Jan 24, 2024.