
Feat: Add tokens counter #41

Open
Angelchev opened this issue Dec 29, 2023 · 0 comments · May be fixed by #47
Labels: enhancement (New feature or request)

@Angelchev (Member) commented:
Overview

It would be useful to add the ability for Neural to compute the token count of a given input. This would help prevent initiating requests that accidentally exceed the maximum token count of a given model source.

This will also be useful in situations where we want to extract the maximum possible response from a model via `request_token_num = model_max_token_len - context_tokens_len`.

Implementation

  • The tokenizer should be appropriate for the respective model
  • We should use an open-source (ideally MIT-licensed) tokenizer that we can bundle, so that users do not need to install additional dependencies
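A minimal sketch of how such a budget check could work. The ~4-characters-per-token heuristic is a stand-in assumption, not a real tokenizer; a proper implementation would bundle a model-appropriate tokenizer as described above, and the function names here are hypothetical:

```python
def estimate_token_count(text: str) -> int:
    """Rough token estimate for English text (~4 characters per token).

    Placeholder for a bundled, model-appropriate tokenizer.
    """
    if not text:
        return 0
    return max(1, len(text) // 4)


def max_response_tokens(context: str, model_max_token_len: int) -> int:
    """Tokens left for the response:
    request_token_num = model_max_token_len - context_tokens_len.
    """
    context_tokens_len = estimate_token_count(context)
    return max(0, model_max_token_len - context_tokens_len)


# Example: a 4096-token model with a 400-character prompt leaves
# roughly 4096 - 100 = 3996 tokens for the response.
print(max_response_tokens("a" * 400, 4096))
```

A real token counter swapped in for `estimate_token_count` (e.g. a bundled BPE tokenizer for the target model) would make the same budget arithmetic exact rather than approximate.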
@Angelchev Angelchev added the enhancement New feature or request label Dec 29, 2023
@Angelchev Angelchev self-assigned this Jan 3, 2024
@Angelchev Angelchev linked a pull request Jan 3, 2024 that will close this issue