Hi, I want to know where I can find a faster Whisper inference script after seeing the PR "faster whisper llm trt". I only want to transcribe using the Whisper model, not Whisper LLM in sherpa triton. Could you please direct me to the official script or documentation for the optimal and most accelerated version of the Whisper model?
The log for Whisper in the "Run with GPU (int8)" doc is:
decoding method: greedy_search
Elapsed seconds: 19.190 s
Real time factor (RTF): 19.190 / 6.625 = 2.897
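For context, the real time factor in that log is just elapsed processing time divided by audio duration, so lower is faster and values above 1.0 mean slower than real time. A minimal sketch of that calculation (the helper name is my own, not from sherpa):

```python
def real_time_factor(elapsed_seconds: float, audio_seconds: float) -> float:
    """RTF = processing time / audio duration; lower is faster,
    and RTF < 1.0 means faster than real time."""
    return elapsed_seconds / audio_seconds

# Values taken from the log above: 19.190 s to decode 6.625 s of audio.
rtf = real_time_factor(19.190, 6.625)
print(f"Real time factor (RTF): 19.190 / 6.625 = {rtf:.3f}")
# → Real time factor (RTF): 19.190 / 6.625 = 2.897
```

An RTF of 2.897 means decoding took almost three times the length of the audio, which is why a faster backend (e.g. a TensorRT build) is worth asking about.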