
faster whisper infer #1559

Open
didadida-r opened this issue Nov 21, 2024 · 0 comments
Comments

@didadida-r

Hi, after seeing the PR "faster whisper llm trt", I want to know where I can find a faster Whisper inference script. I only want to transcribe with the Whisper model, not Whisper-LLM in sherpa triton. Could you please direct me to the official script or documentation for the optimal, most accelerated version of the Whisper model?

For reference, the log for Whisper in the "Run with GPU (int8)" doc is:

decoding method: greedy_search
Elapsed seconds: 19.190 s
Real time factor (RTF): 19.190 / 6.625 = 2.897
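The real time factor quoted in that log is simply elapsed wall-clock time divided by the audio duration, so a value above 1.0 means decoding is slower than real time. A minimal sketch of that calculation, using the 19.190 s and 6.625 s figures from the log above:

```python
def real_time_factor(elapsed_s: float, audio_s: float) -> float:
    """Return RTF: processing time divided by audio duration.

    RTF < 1.0 means faster than real time; RTF > 1.0 means slower.
    """
    return elapsed_s / audio_s

# Figures from the log above: 19.190 s elapsed for 6.625 s of audio.
rtf = real_time_factor(19.190, 6.625)
print(f"Real time factor (RTF): 19.190 / 6.625 = {rtf:.3f}")
# → Real time factor (RTF): 19.190 / 6.625 = 2.897
```

An RTF near 2.9 on GPU suggests the int8 setup in that doc is far from the fastest available configuration, which motivates the question above.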