By default, every request has an upper bound of 1000 tokens. As a result, the AI model returns extra-long responses even for easy prompts: extra content or multiple answers get appended to prompts where the user wouldn't expect a long response.

Solution:
Add three flags indicating the expected response length: short, medium, and long. Users can pick whichever flag matches the length of response they expect.
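As a rough sketch of how the proposed flags could work (the flag names, token budgets, and `build_request` helper below are all illustrative assumptions, not part of the project):

```python
# Hypothetical sketch: map a user-facing response-length flag to a
# max-token budget attached to the request. Budget values are assumed.
LENGTH_BUDGETS = {
    "short": 150,
    "medium": 500,
    "long": 1000,  # the current default upper bound
}

def build_request(prompt: str, length: str = "medium") -> dict:
    """Build a request dict with a token budget matching the chosen flag."""
    if length not in LENGTH_BUDGETS:
        raise ValueError(f"length must be one of {sorted(LENGTH_BUDGETS)}")
    return {"prompt": prompt, "max_tokens": LENGTH_BUDGETS[length]}
```

With `medium` as the default, existing callers keep a moderate budget, while `short` avoids the padded multi-answer responses described above.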