Standalone executables of OpenAI's Whisper & Faster-Whisper for those who don't want to bother with Python.
Faster-Whisper executables are compatible with Windows 7 x64, Linux v5.4, macOS 10.15 and above.
Meant to be used from the command-line interface or in Subtitle Edit.
Faster-Whisper is much faster and more accurate than OpenAI's Whisper, and it requires less RAM/VRAM.
- `whisper-faster.exe "D:\videofile.mkv" --language=English --model=medium`
- `whisper-faster.exe --help`
- Run your command-line interface as Administrator.
- Don't copy the programs into Windows system folders!
- The programs automatically run on the GPU if CUDA is detected.
- For decent transcription quality, use the `medium` model or larger.
- Guide on how to run command-line programs: https://www.youtube.com/watch?v=A3nwRCV-bTU
- Examples of how to batch-process multiple files: Purfview#29
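One way to batch-process files is a simple shell loop over a folder. The sketch below is illustrative only (the `demo` folder and its files are placeholders created for the example, and `echo` turns it into a dry run; remove `echo` to actually transcribe):

```shell
# Batch-processing sketch: loop over every .mkv in a folder and build the
# same command shown in the single-file example above.
# The 'demo' folder and its files are placeholders for illustration;
# 'echo' makes this a dry run -- remove it to run whisper-faster.exe for real.
mkdir -p demo
touch demo/a.mkv demo/b.mkv
for f in demo/*.mkv; do
  echo whisper-faster.exe "$f" --language=English --model=medium
done
# prints one command line per .mkv file found in demo/
```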
- Some defaults are tweaked for movie transcription and for portability.
- The progress bar is shown in the title bar of the command-line interface [or it can be printed with `-pp`].
- By default, models are looked for in the same folder, in a path like `_models\faster-whisper-medium`.
- Models are downloaded automatically, or they can be downloaded manually from: https://huggingface.co/guillaumekln
- `large` is mapped to the `large-v2` model.
- `beam_size=1`: can speed up transcription up to twofold [in my tests it had an insignificant impact on accuracy].
- `compute_type`: test different types to find the fastest for your hardware [`--verbose=true` shows all supported types].
- To reduce memory usage, try `--best_of=1` or `--temperature_increment_on_fallback=None`.
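Putting the tuning options above together, a speed- and memory-oriented invocation might look like the sketch below. This is an assumption-laden example, not a recommendation: the file path is a placeholder, and `int8` is just one possible `compute_type` value; run `--verbose=true` to see which types your hardware actually supports.

```shell
# Illustrative only: combines the speed/memory flags discussed above.
# Path and compute_type value are placeholders -- adjust for your setup.
whisper-faster.exe "D:\videofile.mkv" --model=medium --beam_size=1 --compute_type=int8 --best_of=1
```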