According to the table, the batch_size is 3. But what segment length of each waveform is used during training? (Is it 3 seconds, and if so, what does the batch size correspond to?)
The default batch_size in the train.py code is 128, so which value is the actual batch size?
Thanks a lot.
That's also my question. From reading the original paper, each waveform is shaped into a 4-second segment at 8 kHz, and the authors prepared a 10-hour training set and a 5-hour test set, which is quite large. With a batch size of 3, one epoch would take 3000 iterations, which is very slow. Incidentally, the paper's authors seem to have trained the model for 30 hours.
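As a quick sanity check of the iteration count above, here is a back-of-the-envelope sketch. The figures (10 hours of training audio, 4 s segments, batch size 3) are taken from this thread's description of the paper, not from the repository's code:

```python
# Assumed figures, as described in this thread:
train_hours = 10   # size of the training set
segment_sec = 4    # length of each waveform segment
batch_size = 3     # batch size from the paper's table

total_sec = train_hours * 3600                # 36000 s of audio
num_segments = total_sec // segment_sec       # 9000 segments
iters_per_epoch = num_segments // batch_size  # iterations per epoch
print(iters_per_epoch)  # → 3000
```

This confirms the ~3000 iterations per epoch mentioned above; note the sample rate (8 kHz) does not enter the count, only the segment duration.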