I'd like to propose adding an estimated token count to the generated output. This would help users know if their generated text fits within their LLM's context window limits.
Proposed Feature:
- Add an estimated token count at the beginning of both llms.txt and llms-full.txt files
- Display format could be something like: `Estimated Tokens: 12,345`
Why this would be useful:
- Helps users immediately know whether the generated text will fit in their LLM's context window
- Prevents trial and error when loading large text files into LLMs
- Makes it easier to split content into appropriate chunk sizes if needed
Implementation Suggestions:
- Could use a library like tiktoken, or a simple character-based approximation (see the sketch after this list)
- Token count could be placed in a header section or metadata block at the start of the file
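A minimal sketch of what this could look like, assuming the generator builds the full llms.txt / llms-full.txt content as a single string before writing it out. The function names and the ~4 characters-per-token fallback heuristic are illustrative assumptions, not part of the existing codebase:

```python
# Hypothetical helper for prepending an estimated token count to generated output.
try:
    import tiktoken

    def estimate_tokens(text: str) -> int:
        # cl100k_base is the encoding used by several recent OpenAI models;
        # any similar encoding is fine for a rough estimate.
        enc = tiktoken.get_encoding("cl100k_base")
        return len(enc.encode(text))
except ImportError:
    def estimate_tokens(text: str) -> int:
        # Fallback: a common rule of thumb is roughly 4 characters per token
        # for English text.
        return len(text) // 4


def prepend_token_estimate(content: str) -> str:
    """Add the estimated token count as a header line at the top of the file."""
    count = estimate_tokens(content)
    return f"Estimated Tokens: {count:,}\n\n{content}"
```

The generator would then call something like `prepend_token_estimate(...)` on the assembled content just before writing each file to disk, so the estimate always reflects the final output.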