Losses #2
Comments
The author has used
I still have no idea where the adversarial loss is in the code.
I am assuming that you meant nn.LogSoftMax() in
For your other question, it's my own understanding that the author treats the discriminator as a binary classifier, since it only needs to discriminate between the source and target features (i.e. classify them into the binary labels 1 and 0, respectively). So perhaps he used the standard classification loss for this task. As I said, this is only my own understanding, and a better answer can only be provided by the author. Hope it helps!
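For illustration, here is a minimal PyTorch sketch of that binary-classifier view of the adversarial loss in ADDA. The Discriminator module, feature dimensions, and the src_feat / tgt_feat tensors are hypothetical placeholders standing in for the repo's actual encoder outputs; this is not the author's code, just one common way the loss is set up.

```python
import torch
import torch.nn as nn

# Hypothetical discriminator: maps encoder features to a single
# source-vs-target probability (the "binary classifier" view of ADDA).
class Discriminator(nn.Module):
    def __init__(self, in_dim=500, hidden=500):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

disc = Discriminator()
bce = nn.BCELoss()

# Placeholder features; in the real code these would come from the
# source and target encoders.
src_feat = torch.randn(16, 500)
tgt_feat = torch.randn(16, 500)

# Discriminator step: label source features 1 and target features 0.
d_loss = bce(disc(src_feat), torch.ones(16, 1)) + \
         bce(disc(tgt_feat), torch.zeros(16, 1))

# Adversarial (target-encoder) step: try to fool the discriminator by
# asking it to label target features as "source" (1).
adv_loss = bce(disc(tgt_feat), torch.ones(16, 1))
```

In this view the "adversarial loss" is nothing more than the ordinary binary classification loss evaluated with flipped labels for the target encoder, which is why a standard classification criterion shows up in the code.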
Thank you very much, I think you are right. The current implementation could be buggy: it should have been nn.NLLLoss() in adapt.py. Also, using nn.BCELoss() during training is OK.
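As a side note on why the pairing matters: nn.NLLLoss() expects log-probabilities, so it only makes sense together with nn.LogSoftmax(), and the pair is equivalent to nn.CrossEntropyLoss() on raw logits. The sketch below uses made-up tensor shapes, not the ones in adapt.py.

```python
import torch
import torch.nn as nn

logits = torch.randn(8, 2)           # raw 2-class discriminator outputs
targets = torch.randint(0, 2, (8,))  # 0 = target domain, 1 = source domain

# nn.NLLLoss expects log-probabilities, so it must be paired with
# nn.LogSoftmax; feeding it plain Softmax outputs would be the bug
# discussed above.
log_probs = nn.LogSoftmax(dim=1)(logits)
loss_nll = nn.NLLLoss()(log_probs, targets)

# Equivalent single-step form on the raw logits.
loss_ce = nn.CrossEntropyLoss()(logits, targets)

assert torch.allclose(loss_nll, loss_ce)
```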
In ADDA, a classifier loss and an adversarial loss are used. In which file are you using these two losses?