
loss starts out with 0.000 #1

Open
kettenfett opened this issue Mar 5, 2018 · 5 comments

@kettenfett
Hello, I'm a beginner with Python/PyTorch. I just reconfigured the Customized-DataLoader for my data: I created test_img.txt, test_label.txt, etc. and ran it. My data has only 2 classes.
The loss starts out at 0.000. Is there something wrong? Do you have any idea what could cause this?

Thanks for any help.

@jiangqy
Owner

jiangqy commented Mar 6, 2018

What's your loss function and net structure? Please provide more details or code.

@kettenfett
Author

kettenfett commented Mar 6, 2018

Hello,

I did not change the loss function or the net structure.
I have now cloned the repo, added my changes to the code, and added the new images: https://github.com/philippHRO/Customized-DataLoader

I used make_label_and_filename_txt.py to generate the txt files.

My output is:

Number of train samples:  1500
Number of test samples:  400
Training Phase: Epoch: [ 0][ 0/ 3]      Iteration Loss: 0.000
Training Phase: Epoch: [ 1][ 0/ 3]      Iteration Loss: 0.000
Training Phase: Epoch: [ 2][ 0/ 3]      Iteration Loss: 0.000
Training Phase: Epoch: [ 3][ 0/ 3]      Iteration Loss: 0.000
Training Phase: Epoch: [ 4][ 0/ 3]      Iteration Loss: 0.000
Training Phase: Epoch: [ 5][ 0/ 3]      Iteration Loss: 0.000
Training Phase: Epoch: [ 6][ 0/ 3]      Iteration Loss: 0.000
Training Phase: Epoch: [ 7][ 0/ 3]      Iteration Loss: 0.000
Training Phase: Epoch: [ 8][ 0/ 3]      Iteration Loss: 0.000

@jiangqy
Owner

jiangqy commented Mar 6, 2018

Well, it's my mistake. MultiLabelMarginLoss is actually a sample-based loss rather than a batch-based one, so loss.data[0] / train_labels.size(0) on line 97 of multi_label_classifier.py should be just loss.data[0].

Furthermore, you can change the output format from '%.3f' to '%f' to verify the loss value.
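
For illustration, here is a minimal sketch of the two suggested changes. The tensors and variable names are made up, and loss.item() stands in for the loss.data[0] used by 2018-era PyTorch:

```python
import torch
import torch.nn as nn

# Dummy forward-pass results: 2 classes, batch of 4, arbitrary values.
outputs = torch.randn(4, 2)
# MultiLabelMarginLoss expects integer class indices per sample, padded with -1.
train_labels = torch.tensor([[0, -1], [1, -1], [0, 1], [1, -1]])

criterion = nn.MultiLabelMarginLoss()
loss = criterion(outputs, train_labels)

# Original logging: divides the already per-sample-averaged loss by the batch size
# and rounds to three decimals.
print('Iteration Loss: %.3f' % (loss.item() / train_labels.size(0)))

# Suggested change: report the loss value directly, with full precision.
print('Iteration Loss: %f' % loss.item())
```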

@kettenfett
Author

I made the changes you suggested, as can be seen here: https://github.com/philippHRO/Customized-DataLoader/blob/master/multi_label_classifier.py

It's still showing zero loss.

@kettenfett
Author

Hey, I got it to work. I switched to MSELoss and converted the labels to a torch FloatTensor.
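
For reference, a minimal sketch of that workaround, assuming the 2-class labels are stored as 0/1 vectors (the variable names here are placeholders, not the repository's actual code):

```python
import torch
import torch.nn as nn

# Hypothetical batch: raw network outputs and 0/1 multi-label targets for 2 classes.
outputs = torch.randn(4, 2)
labels = torch.tensor([[1, 0], [0, 1], [1, 1], [0, 1]])

# MSELoss expects float targets, so the integer labels are cast to FloatTensor.
criterion = nn.MSELoss()
loss = criterion(outputs, labels.float())
print('Iteration Loss: %f' % loss.item())
```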
