It would be good to know the tokens required to send a query and/or the tokens used for a query.
Some ideas:
If a flag (confirmtokens) is set to true, the token count is displayed after an input is entered, and a further confirmation - such as pressing Enter - is required before the query is sent.
If a flag (showtokens) is set to true, the token usage and cost of an input are displayed alongside the output, such as [Tokens used: XX Est. Cost $0.0002]
The values should adjust based on which model is selected
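The showtokens idea above could be sketched roughly like this. The price table is illustrative only (the gpt-4-32k figure comes from this thread; current rates should be checked against https://openai.com/pricing), and the function name is hypothetical:

```python
# Illustrative per-model price table (USD per 1K tokens).
# These values are assumptions for the sketch, not authoritative pricing.
PRICE_PER_1K_TOKENS = {
    "gpt-3.5-turbo": 0.002,  # assumed rate for illustration
    "gpt-4-32k": 0.12,       # rate mentioned in this thread
}

def format_token_usage(model: str, tokens_used: int) -> str:
    """Build the suggested '[Tokens used: XX Est. Cost $Y]' suffix."""
    rate = PRICE_PER_1K_TOKENS.get(model)
    if rate is None:
        return f"[Tokens used: {tokens_used}  Est. Cost: unknown model]"
    cost = tokens_used / 1000 * rate
    return f"[Tokens used: {tokens_used}  Est. Cost ${cost:.4f}]"

print(format_token_usage("gpt-3.5-turbo", 100))
```

Because the lookup is keyed on the model name, the displayed cost adjusts automatically when a different model is selected.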
Good idea! Token usage is a significant concern for online API calls, especially for the expensive models (e.g. $0.12 / 1K tokens for gpt-4-32k). I'd like to work on the showtokens feature first; for confirmtokens, setting a money threshold to trigger the confirmation might be better.
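The money-threshold variant of confirmtokens could look something like the sketch below. The function names and the default threshold are hypothetical, not part of any existing CLI:

```python
def should_confirm(estimated_cost_usd: float, confirm_over_usd: float = 0.01) -> bool:
    """Return True when the estimated cost exceeds the configured threshold."""
    return estimated_cost_usd > confirm_over_usd

def maybe_confirm(estimated_cost_usd: float, ask=input) -> bool:
    """Prompt before sending a costly query; cheap queries pass through.

    `ask` is injectable so the prompt can be tested without a real terminal.
    """
    if not should_confirm(estimated_cost_usd):
        return True
    answer = ask(
        f"Estimated cost ${estimated_cost_usd:.4f} - press Enter to send, 'n' to cancel: "
    )
    return answer.strip().lower() != "n"
```

This keeps cheap queries frictionless while still gating the expensive ones behind an explicit Enter.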
A low-priority note: there is a slight delay between when tokens are used and when they show up on the usage page: https://platform.openai.com/account/usage
OpenAI does provide some guidance on how token usage should be estimated, and it can also be retrieved from the API:
https://platform.openai.com/docs/guides/chat/introduction#:~:text=it%27s%20more%20difficult%20to%20count%20how%20many%20tokens
I can't immediately see pricing information via the API; however, prices are published here: https://openai.com/pricing