Replies: 1 comment
-
Perhaps this is a discussion for whisperX, but still I'm curious to know the behaviour of the model when
-
I'm new to ML and don't have much experience running models, so I'm looking for opinions on whether I'm doing something wrong.
I loaded a faster-whisper model with num_workers=1 on a GPU. Then I started transcribing multiple audio files in parallel from different threads, all sharing the same model instance.
I now realize this doesn’t improve performance much. But aside from performance, can this approach cause problems?
I'm receiving the following error on some audio files; it does not occur when I transcribe those same files in isolation.
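For reference, the setup described above can be sketched roughly as below. `DummyModel` and `transcribe_safely` are hypothetical stand-ins (not faster-whisper APIs); the lock is one way to serialize calls into the single shared instance, which would rule out concurrent access to the model as the cause of the error:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class DummyModel:
    """Hypothetical stand-in for faster_whisper.WhisperModel, which may not
    be safe to call concurrently from multiple threads on one instance."""
    def transcribe(self, path):
        return f"transcript of {path}"

model = DummyModel()            # single shared instance, as in the question
model_lock = threading.Lock()   # serializes access to the shared model

def transcribe_safely(path):
    # Only one thread at a time enters the model; others block on the lock.
    with model_lock:
        return model.transcribe(path)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(transcribe_safely, ["a.wav", "b.wav", "c.wav"]))
```

With the lock in place the threads no longer overlap inside the model, so any throughput gain from threading disappears; if the errors also disappear, that points at shared mutable state inside the model instance rather than at the audio files themselves.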