
Losses #2

Open
redhat12345 opened this issue Jan 16, 2018 · 4 comments

Comments

@redhat12345

In ADDA, a classifier loss and an adversarial loss are used. In which files are you using these two losses?

@aasharma90

The author has used nn.CrossEntropyLoss() for both the classification and the adversarial loss functions, in pretrain.py and adapt.py respectively. Search for the word "criterion" in those two files.
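
For orientation, here is a minimal sketch of how the same criterion serves both roles (the variable and module names are mine, not the repo's exact code):

```python
# Minimal sketch: both training phases build the same criterion and
# only swap what the labels mean.
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# pretrain.py: ordinary supervised loss on source-domain images
# loss = criterion(classifier(encoder(images)), class_labels)

# adapt.py: adversarial loss, where the "classes" are the domains
# (e.g. 1 = source feature, 0 = target feature)
# d_loss = criterion(discriminator(features), domain_labels)
```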

@taylover-pei

I still cannot see where the adversarial loss is in the code.
The author used nn.LogSoftmax() in adapt.py and then nn.CrossEntropyLoss() in main.py to train the discriminator. As we know, nn.CrossEntropyLoss() combines nn.LogSoftmax() and nn.NLLLoss() in a single class, so isn't this applying the log-softmax twice?
My other question: is nn.CrossEntropyLoss() on top of nn.LogSoftmax() still equivalent to the adversarial loss?
Thank you very much! Looking forward to your reply.
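
To make my question concrete, here is a small self-contained check (my own example, not code from the repo) showing that nn.CrossEntropyLoss() already contains the log-softmax step:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 2)           # raw discriminator outputs
labels = torch.tensor([1, 0, 1, 0])  # domain labels

ce = nn.CrossEntropyLoss()
nll = nn.NLLLoss()

# These two are identical: CrossEntropyLoss == LogSoftmax + NLLLoss
print(ce(logits, labels))
print(nll(F.log_softmax(logits, dim=1), labels))

# Feeding CrossEntropyLoss outputs that already went through
# LogSoftmax applies the operation twice and gives a different value:
print(ce(F.log_softmax(logits, dim=1), labels))
```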

@aasharma90

Hi @taylover-pei

I am assuming that you meant nn.LogSoftmax() in discriminator.py and nn.CrossEntropyLoss() in adapt.py to train the discriminator. Yes, on another look, I think there is redundancy there. Perhaps it should have been nn.NLLLoss() in adapt.py (provided we do not wish to change anything in discriminator.py). The current implementation could be buggy.
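
A sketch of what I mean, with placeholder layer sizes rather than the repo's actual ones:

```python
# Suggested fix (assuming discriminator.py keeps its final
# nn.LogSoftmax layer): train with nn.NLLLoss, which expects
# log-probabilities, instead of nn.CrossEntropyLoss, which
# expects raw logits.
import torch.nn as nn

discriminator = nn.Sequential(
    nn.Linear(500, 500),   # layer sizes are illustrative
    nn.ReLU(),
    nn.Linear(500, 2),
    nn.LogSoftmax(dim=1),  # outputs log-probabilities
)

criterion = nn.NLLLoss()   # pairs correctly with LogSoftmax output
# d_loss = criterion(discriminator(features), domain_labels)
```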

For your other question, my own understanding is that the author treated the discriminator as a binary classifier, since it only needs to discriminate between source and target features (i.e. classify them into the binary outputs 1 and 0 respectively), so perhaps he used the standard classification loss for this task. As I said, this is my own reading, and a better answer can only come from the author.

Hope it helps!

@taylover-pei

Thank you very much. I think you are right: the current implementation could be buggy, and it should have been nn.NLLLoss() in adapt.py. Alternatively, training with nn.BCELoss() would also be fine.
I am new to domain adaptation and GANs, and there is a lot I do not know. Thank you so much for explaining this to me.
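
For completeness, here is a sketch of the nn.BCELoss() setup I mean (my own illustration; the sizes and names are placeholders, not the repo's code):

```python
# BCELoss alternative: give the discriminator a single sigmoid
# output and use binary labels for source (1) / target (0) features.
import torch
import torch.nn as nn

discriminator = nn.Sequential(
    nn.Linear(500, 500),  # sizes are illustrative
    nn.ReLU(),
    nn.Linear(500, 1),
    nn.Sigmoid(),         # probability that a feature is "source"
)

criterion = nn.BCELoss()

features = torch.randn(8, 500)    # stand-in for encoder output
domain_labels = torch.ones(8, 1)  # 1 = source, 0 = target
d_loss = criterion(discriminator(features), domain_labels)
```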
