The program runs without errors, but the WER is 100% #2
@busishengui Did you ever resolve this issue? Having similar issues on a different dataset.
@mitchelldehaven No, I have not solved this problem yet. Which dataset do you use? Is it WSJ?
This issue will happen when your network gets stuck at a local optimum that tends to predict silence at every frame. You can tune your network more carefully or introduce curriculum learning, such as training from short to long utterances. I've heard people report similar issues on LibriSpeech and then get them solved by training on short utterances first and moving to longer ones afterwards.
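The short-to-long schedule suggested above can be sketched as follows. This is a minimal illustration, not pychain code; the `length` field (frame count per utterance) and the `warmup_epochs` threshold are assumptions you would adapt to your own dataset class:

```python
def curriculum_order(samples, epoch, warmup_epochs=2):
    """Sort utterances short-to-long during the first few epochs,
    then fall back to the original (e.g. shuffled) order."""
    if epoch < warmup_epochs:
        return sorted(samples, key=lambda s: s["length"])
    return list(samples)

# Toy usage: three utterances with hypothetical frame counts.
samples = [{"id": "a", "length": 300},
           {"id": "b", "length": 50},
           {"id": "c", "length": 120}]
print([s["id"] for s in curriculum_order(samples, epoch=0)])  # ['b', 'c', 'a']
print([s["id"] for s in curriculum_order(samples, epoch=5)])  # ['a', 'b', 'c']
```

In practice you would apply the same idea through your sampler or by filtering out utterances above a length cap for the first epochs.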
@YiwenShaoStephen Thank you very much for your reply. I'll give it a try.
@YiwenShaoStephen You deleted the ChainLoss function in loss.py, but what is the new loss function?
@YiwenShaoStephen I have tried the three methods you suggested, but the loss does not converge on either the training set or the validation set, and the model still cannot produce correct results. Do you have any other suggestions?
In dataset.py, a comment about the variable `graph` says 'if self.train: # only training data has fst (graph)', which implies the validation and test sets do not need the `graph` variable. But in train.py, validation mode computes loss = criterion(outputs, output_lengths, graphs), and when I use validation data I get the error: raise Exception("An empty graph encountered!")
@cocowf The training/validation graphs are generated by composing the transcription with denominator.fst. However, the denominator.fst is estimated on the training data only, so you would probably get an empty numerator fst when you compose a validation/test transcript with denominator.fst.
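The effect described above can be illustrated with a toy vocabulary check. This is not real FST composition; `train_vocab` is just a stand-in for the symbols covered by denominator.fst, to show why held-out transcripts can yield empty numerator graphs:

```python
# Hypothetical set of words seen in training (what denominator.fst "knows").
train_vocab = {"the", "cat", "sat", "on", "mat"}

def numerator_nonempty(transcript):
    """Composition with denominator.fst yields an empty graph whenever the
    transcript contains a symbol the training data never covered."""
    return all(word in train_vocab for word in transcript.split())

print(numerator_nonempty("the cat sat"))     # True  -> usable graph
print(numerator_nonempty("the dog barked"))  # False -> empty graph
```

The real check happens at the FST level (an empty composition result has no states), but the cause is the same mismatch between held-out transcripts and training-time coverage.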
Thanks Yiwen, another question: you mean validation/test samples do not have the sample['graph'] attribute, but the loss function is criterion(outputs, output_lengths, graphs). If we skip empty graphs in validation/test, how does the validation loss work?
@cocowf By skipping, I mean you skip the utterance (the one with an empty graph) when you form a minibatch, so that all the utterances in that minibatch have non-empty graphs.
By skipping the utterances with empty graphs, you mean every graph in the minibatch is non-empty?
Yes, all the utterances within the minibatch will have non-empty graphs.
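The skipping step can be sketched as a small filter applied when collating a minibatch. This is a minimal sketch, not pychain's actual code; the `graph` field and its `num_states` emptiness test are placeholders for the real ChainGraph object:

```python
def collate_nonempty(batch):
    """Keep only samples whose numerator graph exists and is non-empty,
    so the chain loss never sees an empty graph."""
    return [s for s in batch
            if s.get("graph") is not None
            and s["graph"].get("num_states", 0) > 0]

# Toy usage: the sample with an empty graph is dropped from the minibatch.
batch = [{"id": "u1", "graph": {"num_states": 7}},
         {"id": "u2", "graph": None},               # empty graph: skipped
         {"id": "u3", "graph": {"num_states": 3}}]
print([s["id"] for s in collate_nonempty(batch)])   # ['u1', 'u3']
```

With PyTorch's DataLoader, a filter like this would typically live inside a custom `collate_fn`.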
The "An empty graph encountered!" exception occurs before the skipping step, because graph = ChainGraph(fst, log_domain=True) raises the exception inside pychain/graph.py.
Oh yes, that's due to changes introduced in the pychain code for its use in Espresso. You can refer to this thread: #5 and temporarily comment out this line in pychain: https://github.com/YiwenShaoStephen/pychain/blob/master/pychain/graph.py#L69
What puzzled me is why, before skipping, only a small part of the validation set had non-empty graphs while the rest were all empty.
Have you ever used pychain on a different dataset, such as Mandarin? How were the language-model-related files generated?
I used Mini LibriSpeech as the training and test speech dataset, with the TDNN from the examples as the training model. The whole run completed without errors, but the final WER is 100%:
$WER 100.00% [20138/20138, 0 ins, 20138 del, 0 sub]