AI Tutor will forget its prompt after 8k tokens #26
Hey, I opened this issue to ask this question!
Do plugins get access to API responses like this: https://github.com/ysymyth/tree-of-thought-llm/blob/faa28c395e5b86bfcbf983355810d52f54fb7b51/models.py#L35, so that we can accurately count the number of tokens spent so far?
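For reference, a rough sketch (not Mr. Ranedeer code, just an illustration assuming the 2023-era `openai` Python client) of how token spend can be tracked from API responses, similar in spirit to the usage accounting in the linked models.py:

```python
# Illustrative only: accumulate exact token usage from the API's `usage`
# field, as the linked models.py does. Assumes openai==0.27.x and an API key.
import openai

total_tokens = 0

def chat(messages, model="gpt-4"):
    global total_tokens
    response = openai.ChatCompletion.create(model=model, messages=messages)
    # The API reports exact prompt/completion token counts for each call.
    total_tokens += response["usage"]["total_tokens"]
    return response["choices"][0]["message"]["content"]
```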
Based on how this Tutor works, we rely entirely on the prompt for communication rather than the API, so I guess the answer is no.
Does plugin INPUT count toward the token count? Someone could set up a plugin where we essentially pass in both the user prompt and the GPT-4 output, and the plugin, via an external web server, spits out the number of tokens it was given.
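A minimal sketch of such an external counting service (purely hypothetical; the endpoint name and framework are my own choices, assuming FastAPI and tiktoken are installed):

```python
# Hypothetical token-counting service: a plugin could POST the user prompt
# plus GPT-4 output here and get back an exact token count via tiktoken.
import tiktoken
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
enc = tiktoken.encoding_for_model("gpt-4")

class Payload(BaseModel):
    text: str  # concatenated user prompt + GPT-4 output

@app.post("/count")
def count_tokens(payload: Payload):
    return {"tokens": len(enc.encode(payload.text))}
```

Run it with `uvicorn`; the plugin then only needs to POST text to `/count`.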
Here's another idea that could be implemented:
From a prompt perspective, I think this approach is feasible.
👉 You can preserve Mr. Ranedeer entirely by using Fastlane as a prompt manager on top of GPT-4 (https://builder.fastlane.is). Basically, you can create Mr. Ranedeer as a persona there, add the prompt, and organize it alongside the history or other prompts you might want to test out. Then it'll never forget its base prompts, but other
Thanks for sharing, will try later.
Now that Code Interpreter is widespread, I think memory handling will become a lot easier.
So, how do we combine Code Interpreter with Mr. Ranedeer? Any ideas?
Here's how I approach it in v2.7
If you want to prevent Mr. Ranedeer from repeating the output, the trick I use is to convert whatever Mr. Ranedeer wrote into base64 and output it. Surprisingly, GPT-4 doesn't output the base64.
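For anyone who wants to check or decode that base64 outside of ChatGPT, here is a trivial Python round trip (just an illustration of the transformation being asked of the model):

```python
# Encode/decode helper for the base64 trick described above.
import base64

def to_base64(text: str) -> str:
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def from_base64(blob: str) -> str:
    return base64.b64decode(blob).decode("utf-8")

print(from_base64(to_base64("Hello! I am Mr. Ranedeer, your AI tutor.")))
```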
Half-Closed #72 - Code Interpreter is better at prompt retention. Keeping this open to gather more feedback on how v2.7 performs.
How do you know about that, @JushBJJ? ChatGPT history is only 4K tokens. You can confirm it yourself.
GPT-4 is 8k tokens, but GPT-4 with Code Interpreter feels like a different beast with a larger context or a better context-retention strategy. Additionally, I suspect that OpenAI appends the initial message to the system prompt, allowing permanent recall of the original prompt.
From: #22
Due to the large prompt size, it is easy for Mr. Ranedeer to eventually forget its entire prompt after it has been used for a long time. When developing v2.4.16, I tried to implement the `/count` and `/refresh` commands to handle this, but it was hard to get GPT-4 to stay consistent at rewriting the entire prompt while also counting all previous inputs and outputs of the conversation. I also found that GPT-4 was pretty accurate at estimating how many tokens are in one message when given the right context on roughly how many tokens make up one word.

Has anyone come up with a prompt that estimates the total number of tokens in the chat history and also somewhat consistently rewrites its own prompt after a certain number of tokens have been used?
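As a point of comparison, counting the chat history programmatically with tiktoken is straightforward; the per-message overhead constant below is an approximation (the OpenAI cookbook uses values around 3-4 tokens per message), and the 6,000-token threshold is just an example:

```python
# Sketch: count conversation tokens with tiktoken instead of asking GPT-4
# to estimate them in-prompt. The per-message overhead is approximate.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

def count_chat_tokens(messages, tokens_per_message=4):
    total = 0
    for m in messages:  # each m is {"role": ..., "content": ...}
        total += tokens_per_message
        total += len(enc.encode(m["content"]))
    return total

history = [
    {"role": "system", "content": "You are Mr. Ranedeer, an AI tutor..."},
    {"role": "user", "content": "/plan Calculus"},
]
if count_chat_tokens(history) > 6000:  # leave headroom below the 8k limit
    print("Time to re-send (/refresh) the base prompt")
```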