Context limit does not work #3

Open
Konard opened this issue Feb 19, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@Konard
Member

Konard commented Feb 19, 2024

2024-02-19 10:07:18,498 - __main__ - ERROR - OpenAI completion error: This model's maximum context length is 128000 tokens. However, your messages resulted in 130444 tokens. Please reduce the length of the messages.
2024-02-19 10:07:18,499 - __main__ - ERROR - This model's maximum context length is 128000 tokens. However, your messages resulted in 130444 tokens. Please reduce the length of the messages.
Konard added the bug label Feb 19, 2024
@FreePhoenix888
Member

What is the reason?

russian-laws-bot/main.py

Lines 25 to 26 in 149a8f3

MAX_TOKENS = 128 * 1024
MAX_PROMPT_TOKENS = MAX_TOKENS * 0.8

russian-laws-bot/main.py

Lines 214 to 217 in 149a8f3

tokens_count = len(encoding.encode(prompt))
if tokens_count > MAX_PROMPT_TOKENS:
    raise ValueError(f'{tokens_count} tokens in prompt exceeds MAX_PROMPT_TOKENS ({MAX_PROMPT_TOKENS})')

@Konard
Member Author

Konard commented Feb 27, 2024

@FreePhoenix888 the calculation is wrong because only the token count of a single message's content is computed. To fix the bug, we have to count the tokens of the entire context (all messages that will be sent to GPT-4). That will require logic similar to what we have in our ChatGPT package in Deep. Alternatively, we can ignore this issue if we use the actual ChatGPT Deep package.

As a workaround for this and other issues, I now send only a single message to the GPT-4 API.
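
A minimal sketch of that whole-context counting, assuming the tiktoken library and the approximate per-message overheads OpenAI documents for gpt-4-style chat models (the helper name and example messages below are illustrative, not from this repo):

import tiktoken

# Hypothetical sketch: count tokens across the whole messages list,
# not just one prompt string. The overhead constants are approximations
# based on OpenAI's published guidance for chat-format requests.
MODEL_CONTEXT_LIMIT = 128000  # the limit reported in the API error above
MAX_PROMPT_TOKENS = int(MODEL_CONTEXT_LIMIT * 0.8)

def count_context_tokens(messages, model='gpt-4-turbo-preview'):
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        encoding = tiktoken.get_encoding('cl100k_base')
    tokens = 0
    for message in messages:
        tokens += 4  # approximate per-message overhead (role, separators)
        for value in message.values():
            tokens += len(encoding.encode(value))
    tokens += 3  # approximate priming tokens for the assistant's reply
    return tokens

messages = [
    {'role': 'system', 'content': 'You answer questions about Russian laws.'},
    {'role': 'user', 'content': 'Summarize article 105.'},
]
tokens_count = count_context_tokens(messages)
if tokens_count > MAX_PROMPT_TOKENS:
    raise ValueError(f'{tokens_count} tokens in context exceeds MAX_PROMPT_TOKENS ({MAX_PROMPT_TOKENS})')

Counted this way, the 130444-token request from the log above would have been rejected by the guard instead of reaching the API.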

@FreePhoenix888
Member


Should we prefer the ChatGPT Deep package?
