issues with ContextNet and FastSCNN models #10

Open
ashra-main opened this issue Apr 4, 2020 · 3 comments
Comments


ashra-main commented Apr 4, 2020

I trained 7 of the models and in all of them I got more than 80% validation mIoU with the default settings (CamVid dataset, after 1000 epochs). But when I tested the 1000th-epoch checkpoints with the test.py code, I got these mIoUs: CGNet 0.651, ContextNet 0.060, DABNet 0.652, EDANet 0.288, ENet 0.590, ERFNet 0.672, FastSCNN 0.011.

So I'm wondering: why is there such a significant difference between the validation and test scores?
And why do the ContextNet and FastSCNN checkpoints perform as if they were never trained?

I am using Python 3.7 and PyTorch 1.4.
I hope we can solve the testing issues soon.
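
For what it's worth, one quick way to rule out a metric mismatch between the validation loop and test.py is to recompute mIoU from an accumulated confusion matrix in the same way on both splits. This is only a generic sketch (NumPy; `num_classes` and `ignore_index` are my assumptions, not the repo's actual evaluation code):

```python
import numpy as np

def miou(preds, labels, num_classes=11, ignore_index=255):
    """Accumulate a confusion matrix over all images, then return per-class IoU and mean IoU."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, label in zip(preds, labels):
        pred, label = pred.flatten(), label.flatten()
        valid = label != ignore_index  # skip void / ignore pixels
        conf += np.bincount(
            num_classes * label[valid].astype(np.int64) + pred[valid],
            minlength=num_classes ** 2,
        ).reshape(num_classes, num_classes)
    tp = np.diag(conf)
    iou = tp / (conf.sum(1) + conf.sum(0) - tp + 1e-10)
    return iou, iou.mean()
```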

ashra-main changed the title from "possible issue with testing code" to "issues with evaluation and ContextNet and FastSCNN models" on Apr 5, 2020
ashra-main changed the title from "issues with evaluation and ContextNet and FastSCNN models" to "issues with ContextNet and FastSCNN models" on Apr 5, 2020
ashra-main (Author) commented:

I figured out why there's a discrepancy between the validation and test scores: the default value of the train_type parameter is trainval, so the model is actually training on the validation set as well, and after 1000 epochs it was overfitting to it. I changed train_type to train and it behaves normally (I'd suggest making this the default behavior). However, the behavior of ContextNet and FastSCNN is still unexplained.
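
For anyone else hitting this, here is a minimal sketch of the suggested default change, assuming the training script wires train_type through argparse (the argument wiring below is my assumption, not the repo's actual code):

```python
import argparse

parser = argparse.ArgumentParser()
# Suggested default: hold the validation split out instead of training on it.
parser.add_argument('--train_type', type=str, default='train',
                    choices=['train', 'trainval'],
                    help="'train' keeps the val split held out; "
                         "'trainval' also trains on it (the old default).")
args = parser.parse_args([])
print(args.train_type)  # -> 'train'
```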

ashra-main (Author) commented:

When I put these two models in eval() mode, the outputs become nonsense, but they look fine in train() mode. Perhaps there's something wrong with the regularization methods!
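
If it is a normalization issue, the usual suspect is BatchNorm: in train() mode it normalizes with batch statistics, but in eval() it switches to the running mean/variance accumulated during training, so broken running buffers make eval() outputs collapse while train() still looks fine. A small diagnostic sketch (it assumes the model takes a single input tensor and returns a single output tensor):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def compare_modes(model, sample):
    """Compare train()/eval() outputs and inspect BatchNorm running statistics."""
    model.train()
    out_train = model(sample)
    model.eval()
    out_eval = model(sample)
    print('max |train - eval| difference:', (out_train - out_eval).abs().max().item())

    # Huge running_var (or wildly off running_mean) means eval() normalizes
    # with broken statistics even though train() looks fine.
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            print(f'{name}: |running_mean| max {m.running_mean.abs().max().item():.3e}, '
                  f'running_var max {m.running_var.max().item():.3e}')
```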


weizhou1001 commented Apr 27, 2020

Have you figured out the problem? I ran into similar issues. I used 'train' for train_type; the training IoU and loss looked good, but when I used test.py, the results were very bad for ENet and bad for FSSNet. However, test.py works fine for LEDNet. Not sure if there's a setting missing in the setup?
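
One generic PyTorch workaround worth trying (not something from this repo): if the BatchNorm running statistics are the problem, they can be re-estimated after training by forwarding a few hundred training batches in train() mode under torch.no_grad(), then switching back to eval() before running test.py. A sketch, assuming the loader yields (image, label) pairs:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def recalibrate_bn(model, train_loader, num_batches=200, device='cuda'):
    """Re-estimate BatchNorm running_mean / running_var from training data."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()
            m.momentum = None  # cumulative moving average instead of EMA
    model.train()  # BN only updates its running stats in train() mode
    for i, (images, _) in enumerate(train_loader):
        if i >= num_batches:
            break
        model(images.to(device))
    model.eval()
    return model
```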
