I am trying to align files that have two speakers to get phones as segments. If I have the audio file and a non-diarized transcript, as a .txt file in the corpus folder, the output TextGrid contains all the words in the .txt file and the corresponding phones.
If, however, I use the same audio file and a .TextGrid file with two tiers, one for each speaker, the output is a .TextGrid that is missing a lot of words. During alignment, this message is generated: `WARNING There were 24 utterances ignored due to an issue in feature generation, see the log file for full details or run mfa validate on the corpus.`
I have tried using `--beam 400 --retry_beam 1000`, to no avail. Are there better ways to make the aligner align all the words in the input file?
I would double-check that your tiers actually contain text corresponding to the transcript. The log file should list every utterance that was ignored; utterances are typically skipped either because their duration is very short or because they have no text.
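One way to check this before re-running the aligner is to scan the input TextGrid for intervals that are empty or very short. Below is a minimal sketch that parses a long-format TextGrid with a regular expression; the `min_dur` threshold is illustrative, not MFA's actual cutoff, and a robust script would use a proper TextGrid parser instead of a regex.

```python
import re

# Match xmin / xmax / text triples as they appear in long-format TextGrids.
INTERVAL_RE = re.compile(
    r'xmin = ([\d.]+)\s*\n\s*xmax = ([\d.]+)\s*\n\s*text = "([^"]*)"'
)

def suspicious_intervals(textgrid_text, min_dur=0.1):
    """Return (xmin, xmax, text) for intervals that are empty or shorter
    than min_dur seconds -- the kinds of utterances an aligner may skip."""
    bad = []
    for m in INTERVAL_RE.finditer(textgrid_text):
        xmin, xmax, text = float(m.group(1)), float(m.group(2)), m.group(3)
        if not text.strip() or (xmax - xmin) < min_dur:
            bad.append((xmin, xmax, text))
    return bad

sample = '''
        intervals [1]:
            xmin = 0.0
            xmax = 0.05
            text = "hi"
        intervals [2]:
            xmin = 0.05
            xmax = 1.2
            text = ""
        intervals [3]:
            xmin = 1.2
            xmax = 2.0
            text = "hello there"
'''
print(suspicious_intervals(sample))
# flags the very short interval [1] and the empty interval [2]
```

If this reports many empty intervals on one tier, that would explain the missing words in the output far better than any beam setting would.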