I want to benchmark faster-whisper against some pipeline implementations of Whisper in Hugging Face.
For the sake of fairness I would like to parametrize the models as equally as possible.
In HF you have the following generation strategies:

- greedy decoding if `num_beams=1` and `do_sample=False`
- contrastive search if `penalty_alpha>0` and `top_k>1`
- multinomial sampling if `num_beams=1` and `do_sample=True`
- beam-search decoding if `num_beams>1` and `do_sample=False`
- beam-search multinomial sampling if `num_beams>1` and `do_sample=True`
- diverse beam-search decoding if `num_beams>1` and `num_beam_groups>1`
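The strategies above can be expressed as keyword sets passed to `model.generate()`. A sketch of what each set might look like follows; the specific values for `num_beams`, `top_k`, `penalty_alpha`, and `diversity_penalty` are illustrative choices on my part, not HF defaults:

```python
# Sketch: the six HF generation strategies as generate() kwargs.
# Only the presence/absence of the flags matters for selecting the
# strategy; the numeric values here are arbitrary examples.
hf_generation_configs = {
    "greedy": dict(num_beams=1, do_sample=False),
    "contrastive_search": dict(penalty_alpha=0.6, top_k=4),
    "multinomial_sampling": dict(num_beams=1, do_sample=True),
    "beam_search": dict(num_beams=5, do_sample=False),
    "beam_search_sampling": dict(num_beams=5, do_sample=True),
    "diverse_beam_search": dict(num_beams=6, num_beam_groups=3,
                                diversity_penalty=1.0),
}
```

These would then be used like `model.generate(input_features, **hf_generation_configs["beam_search"])` on any Whisper checkpoint.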
How would I reproduce, for example, greedy decoding in faster-whisper? Is there a `do_sample` parameter?
Should I set `best_of = 1` and `beam_size = 1`? Also, if I set `do_sample = True` in HF, would that be
equal to setting `best_of = 5`? Maybe you can share some insights with me; ideally I would like to reproduce all of the strategies above.
Best regards
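For reference, a plausible mapping onto the parameters that `WhisperModel.transcribe` exposes (`beam_size`, `best_of`, `temperature`) might look like the sketch below. There is no `do_sample` flag; sampling is triggered by a nonzero `temperature`, and `best_of` then controls how many candidates are drawn. Note that faster-whisper's default `temperature` is a fallback sequence, so a single value must be passed to pin one strategy. Whether the resulting scores are numerically comparable to HF is exactly the open question here, and to my knowledge contrastive search and diverse beam search have no counterpart in the CTranslate2 backend:

```python
# Sketch: approximate faster-whisper analogues of three HF strategies.
# These are parameter dicts one would pass to WhisperModel.transcribe();
# the correspondence is approximate, not a guaranteed exact match.
fw_configs = {
    # greedy decoding ~ HF num_beams=1, do_sample=False:
    # single beam, no sampling (temperature pinned to 0)
    "greedy": dict(beam_size=1, best_of=1, temperature=0.0),
    # multinomial sampling ~ HF num_beams=1, do_sample=True:
    # nonzero temperature enables sampling; best_of=5 draws 5 candidates
    "multinomial_sampling": dict(beam_size=1, best_of=5, temperature=1.0),
    # beam-search decoding ~ HF num_beams=5, do_sample=False
    "beam_search": dict(beam_size=5, temperature=0.0),
}
```

Usage would be along the lines of `model.transcribe("audio.wav", **fw_configs["greedy"])`.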
I'll send an invite to the repo if he wants to help out or just kibitz. Like @MahmoudAshraf97 I've been inundated with other stuff but do plan to get back to the benchmarking in the very near future.