I am performing a large number of transcriptions on limited GPU space.
I would like to cancel the model forwarding as soon as I know that I won't need the result. Is it possible to do this with faster-whisper?
If not, can you help point me to where in CTranslate2 I would need to make changes to add this feature?
Thank you for your great work.
You can use multithreading: run one transcription per thread on the same model instance, and kill the thread when you no longer need the result.
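Since Python threads cannot be forcibly killed, the practical version of this suggestion is cooperative cancellation: faster-whisper's `WhisperModel.transcribe` returns a lazy generator, so each audio window is only decoded when the generator is advanced, and breaking out of the consuming loop stops further GPU work. A minimal sketch of that pattern, where `fake_segments` is a hypothetical stand-in for the real segment generator:

```python
import threading

def transcribe_worker(segments, cancel_event, results):
    # The segment generator is lazy: each window is decoded only when
    # the generator is advanced, so breaking out of this loop stops
    # further decoding without touching CTranslate2 internals.
    for seg in segments:
        if cancel_event.is_set():
            break
        results.append(seg)

# Hypothetical stand-in for the generator returned by
# model.transcribe(...); used here so the sketch is self-contained.
def fake_segments():
    for i in range(100):
        yield f"segment-{i}"

cancel = threading.Event()
out = []
worker = threading.Thread(
    target=transcribe_worker, args=(fake_segments(), cancel, out)
)
cancel.set()    # decide (here, before starting) that the result is not needed
worker.start()
worker.join()   # worker exits after pulling at most one segment
```

In real use you would call `cancel.set()` from another thread whenever you learn the result is no longer needed; the worker then stops at the next segment boundary. Cancellation mid-window (inside a single model forward pass) would still require changes inside CTranslate2 itself.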