Training VITS model
Training Recipe: egs/ljspeech/TTS
Tried two different datasets converted to the LJSpeech dataset format; the same approach has worked before.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 309, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
data = self.dataset[possibly_batched_index]
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/lhotse/dataset/speech_synthesis.py", line 67, in __getitem__
audio, audio_lens = collate_audio(cuts)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/lhotse/dataset/collation.py", line 202, in collate_audio
audios = torch.stack(audios)
^^^^^^^^^^^^^^^^^^^
RuntimeError: stack expects each tensor to be equal size, but got [445852] at entry 0 and [445851] at entry 2
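The two waveforms differ by exactly one sample, which usually means the length recorded in the manifest disagrees with the audio actually on disk (a common artifact of resampling or duration rounding during dataset conversion). One way to locate the offending files is to read the true frame count straight from each WAV header and compare it against what the manifest expects. This is a hedged sketch, not part of icefall or lhotse: `expected_frames` is a hypothetical mapping from file stem to the manifest's `num_samples`.

```python
import wave
from pathlib import Path

def actual_num_frames(path):
    """Read the true frame count from a WAV file's header."""
    with wave.open(str(path), "rb") as f:
        return f.getnframes()

def find_length_mismatches(wav_dir, expected_frames):
    """Return (path, actual, expected) for every WAV whose on-disk
    frame count differs from the count the manifest promises."""
    mismatches = []
    for p in sorted(Path(wav_dir).glob("*.wav")):
        actual = actual_num_frames(p)
        expected = expected_frames.get(p.stem)
        if expected is not None and actual != expected:
            mismatches.append((p, actual, expected))
    return mismatches
```

Any file this reports is a candidate for re-exporting, or for regenerating the lhotse manifests from the audio that is actually on disk.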
The text was updated successfully, but these errors were encountered:
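As a stopgap while the data itself is being fixed, a collate function can zero-pad each waveform to the longest one in the batch instead of requiring all lengths to match exactly. The NumPy sketch below only illustrates that idea; it is not icefall's or lhotse's actual fix, and the proper route is still to make the manifest's `num_samples` agree with the audio on disk.

```python
import numpy as np

def pad_collate(audios):
    """Stack 1-D waveforms of (possibly) unequal length by
    zero-padding each one to the longest in the batch.
    Returns the padded batch and the original lengths."""
    lens = np.array([a.shape[-1] for a in audios])
    out = np.zeros((len(audios), lens.max()), dtype=audios[0].dtype)
    for i, a in enumerate(audios):
        out[i, : a.shape[-1]] = a
    return out, lens

# Example: the two lengths from the traceback differ by one sample.
batch, lens = pad_collate([np.zeros(445852), np.zeros(445851)])
# batch.shape == (2, 445852); lens == [445852, 445851]
```

The returned `lens` array preserves the true per-utterance lengths, so downstream loss masking can ignore the padded tail.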