Is your feature request related to a problem? Please describe.
Currently, when a question is asked via the API, Open LLM retrieves the text most relevant to answering it. But when too many passages match closely, the combined prompt can exceed GPT's context window, and the API returns an error saying the token limit was exceeded.
Describe the solution you'd like
Limit the number of tokens sent to GPT so that the API call succeeds and an answer is generated.
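A minimal sketch of the requested behavior: keep only as many retrieved chunks as fit within a token budget before building the prompt. The `estimate_tokens` heuristic (roughly 4 characters per token) and the function names are assumptions for illustration; a real implementation would count tokens with the model's actual tokenizer (e.g. tiktoken) and use the target model's real context limit.

```python
# Sketch: trim retrieved context to a token budget before calling GPT.
# Assumption: chunks are already sorted by relevance, most relevant first.

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_context(chunks: list[str], max_tokens: int) -> list[str]:
    """Keep the highest-ranked chunks until the token budget is exhausted."""
    kept, used = [], 0
    for chunk in chunks:
        cost = estimate_tokens(chunk)
        if used + cost > max_tokens:
            break  # adding this chunk would exceed the limit; stop here
        kept.append(chunk)
        used += cost
    return kept

if __name__ == "__main__":
    chunks = ["alpha " * 100, "beta " * 100, "gamma " * 100]
    trimmed = trim_context(chunks, max_tokens=300)
    print(len(trimmed))  # only the chunks that fit the budget survive
```

Dropping the lowest-ranked chunks is the simplest policy; alternatives include summarizing overflow chunks or splitting the question across multiple calls, but truncation alone is enough to stop the token-limit error.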