It happens only when `--concatenate-cuts=True`. See the problematic code below, from lines 687 to 692 of `snowfall/snowfall/models/transformer.py` at commit 3502531 (the bug is on line 692):

```python
# Note: TorchScript doesn't allow unpacking tensors as tuples
sequence_idx = supervision_segments[idx, 0].item()
start_frame = supervision_segments[idx, 1].item()
num_frames = supervision_segments[idx, 2].item()
lengths[sequence_idx] = start_frame + num_frames
```
When `--concatenate-cuts=True`, several utterances may be concatenated into one sequence, so `lengths[sequence_idx]` may correspond to multiple utterances. Later utterances will OVERWRITE the value of `lengths[sequence_idx]` set by earlier utterances if the sequence with `sequence_idx` contains at least two utterances.
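Here is a minimal, self-contained sketch of one possible fix (not necessarily the patch applied in the repository, and the tensor values are made up for illustration): keep the maximum end frame seen so far for each sequence, so a later row cannot shrink the recorded length. Each row of `supervision_segments` is assumed to be `[sequence_idx, start_frame, num_frames]`.

```python
import torch

# Hypothetical example: sequence 0 holds two concatenated utterances, and the
# rows are ordered so that the utterance ending LAST is processed FIRST, as can
# happen when segments are sorted by duration.
supervision_segments = torch.tensor(
    [
        [0, 30, 50],  # second utterance of sequence 0, ends at frame 80
        [0, 0, 30],   # first utterance of sequence 0, ends at frame 30
        [1, 0, 40],   # only utterance of sequence 1, ends at frame 40
    ],
    dtype=torch.int32,
)

lengths = torch.zeros(2, dtype=torch.int64)

for idx in range(supervision_segments.size(0)):
    sequence_idx = supervision_segments[idx, 0].item()
    start_frame = supervision_segments[idx, 1].item()
    num_frames = supervision_segments[idx, 2].item()
    # The buggy assignment `lengths[sequence_idx] = start_frame + num_frames`
    # would end with lengths[0] == 30, because the row ending at frame 80 is
    # overwritten. Keeping the maximum end frame is order-independent:
    lengths[sequence_idx] = max(lengths[sequence_idx].item(),
                                start_frame + num_frames)

print(lengths)  # tensor([80, 40]); the buggy assignment yields tensor([30, 40])
```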
I found this bug while writing tests for `encoder_padding_mask`. Liyong and I disabled `--concatenate-cuts` during training, so it is not a problem for us.