
Minimal Code Changes to Support Latest PyTorch and Bug Fix for Extremely Low Adaptation Accuracy #29

Open
wants to merge 4 commits into base: master

Conversation

@yuhui-zh15 commented Apr 12, 2022

Thanks for contributing this repo; it is a really nice way to learn domain adaptation.

  1. Made some minimal code changes to support the latest PyTorch (>= 1.0) and Python (>= 3.6) (0f98f5d).

  2. Fixed the low adaptation accuracy (10%-15%) reported in #27, #26, #22, #15, #10, #8, #7, and #1. The bug is due to inconsistent normalization of MNIST and USPS: the data loader normalizes all MNIST images to [0, 1] while normalizing all USPS images to [0, 255]. Changing the latter to [0, 1] restores normal performance (13a295a; a sketch of the fix appears after this list):

=== Evaluating classifier for source domain === (100 epochs)
Avg Loss = 0.08870776686510162, Avg Accuracy = 99.250001%

=== Evaluating classifier for encoded target domain === (100 epochs)
>>> source only <<<
Avg Loss = 0.6725219937139436, Avg Accuracy = 87.956989%
>>> domain adaption <<<
Avg Loss = 0.49638841790887883, Avg Accuracy = 97.365594%
  3. Fixed the outputs of the discriminator mentioned in #16, #11, and #2. For nn.CrossEntropyLoss, the input is expected to contain raw, unnormalized scores for each class, rather than log-probabilities (6e59bbd; see the second sketch after this list). However, the performance is not improved by this correction.
=== Evaluating classifier for encoded target domain === (100 epochs)
>>> source only <<<
Avg Loss = 0.6533445982556594, Avg Accuracy = 87.956989%
>>> domain adaption <<<
Avg Loss = 0.46242921638726453, Avg Accuracy = 97.204304%
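
For reference, here is a minimal sketch of what the normalization fix in item 2 amounts to, assuming a torchvision-style transform pipeline; the transform names and constants below are illustrative, not the repo's exact loader code. The point is simply that both datasets must be scaled to the same pixel range before normalization:

```python
from torchvision import transforms

# Illustrative pipelines (not the repo's exact code).
# ToTensor() already scales PIL / uint8 images to [0, 1].
mnist_transform = transforms.Compose([
    transforms.ToTensor(),                           # MNIST pixels -> [0, 1]
    transforms.Normalize(mean=(0.5,), std=(0.5,)),
])

# Buggy behavior: USPS images end up in [0, 255], mismatched with MNIST.
buggy_usps_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x * 255.0),          # <- source of the bug
    transforms.Normalize(mean=(0.5,), std=(0.5,)),
])

# Fix: drop the extra scaling so USPS is also in [0, 1], like MNIST.
fixed_usps_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5,), std=(0.5,)),
])
```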
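And a minimal sketch of the discriminator fix in item 3 (layer sizes are illustrative). nn.CrossEntropyLoss combines LogSoftmax and NLLLoss internally, so applying LogSoftmax inside the discriminator normalizes twice; the model should emit raw logits instead:

```python
import torch
import torch.nn as nn

# Buggy: LogSoftmax before CrossEntropyLoss normalizes twice, because
# CrossEntropyLoss == LogSoftmax + NLLLoss internally.
buggy_discriminator = nn.Sequential(
    nn.Linear(500, 500),
    nn.ReLU(),
    nn.Linear(500, 2),
    nn.LogSoftmax(dim=1),   # <- remove this layer
)

# Fixed: output raw, unnormalized scores (logits).
discriminator = nn.Sequential(
    nn.Linear(500, 500),
    nn.ReLU(),
    nn.Linear(500, 2),
)

criterion = nn.CrossEntropyLoss()
features = torch.randn(8, 500)               # dummy encoded features
domain_labels = torch.randint(0, 2, (8,))    # 0 = source, 1 = target
loss = criterion(discriminator(features), domain_labels)
```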

@yuhui-zh15 changed the title from "Minimal Code Changes to Support Latest PyTorch" to "Minimal Code Changes to Support Latest PyTorch and Bug Fix for Extremely Low Adaptation Accuracy" on Apr 12, 2022
@goodzhangbobo commented Dec 30, 2022

Thank you very much; this is a big help. I ran a test, and everything works in this version:
https://github.com/yuhui-zh15/pytorch-adda

Epoch [1999/2000] Step [100/149]:d_loss=0.24804 g_loss=4.08300 acc=0.89000
Epoch [2000/2000] Step [100/149]:d_loss=0.24628 g_loss=4.73108 acc=0.89000
=== Evaluating classifier for encoded target domain ===
>>> source only <<<
Avg Loss = 1.1622806254186129, Avg Accuracy = 84.408605%
>>> domain adaption <<<
Avg Loss = 0.4525440482655924, Avg Accuracy = 97.634411%

@mashaan14 commented
I think what causes the low adaptation accuracy is that the class labels get swapped by the target encoder. This makes sense because it is an unsupervised task, and the target encoder never sees the class labels.

I've used this code on 2D data:
https://github.com/mashaan14/ADDA-toy

You can see in the attached image that the target encoder separates the classes well, but the class labels are swapped. (A sketch of a permutation-aware accuracy check follows the image.)

[Image: Testing target data using target encoder]
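
If you want to check whether low accuracy is only a label swap, here is a purely illustrative sketch (it assumes scipy is available; best_permutation_accuracy is a hypothetical helper, not part of either repo). It scores predictions under the best one-to-one relabeling of the predicted classes:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_permutation_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Accuracy under the best one-to-one relabeling of predicted classes.

    If this is high while plain accuracy is low, the clusters are good
    and the labels are merely permuted.
    """
    n_classes = max(y_true.max(), y_pred.max()) + 1
    # confusion[i, j] = number of samples with true label i predicted as j
    confusion = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        confusion[t, p] += 1
    # Find the label mapping that maximizes matched counts.
    row_ind, col_ind = linear_sum_assignment(-confusion)
    return confusion[row_ind, col_ind].sum() / len(y_true)

# Example: predictions separate two classes perfectly, but labels are swapped.
y_true = np.array([0, 0, 1, 1])
y_pred = np.array([1, 1, 0, 0])
print(best_permutation_accuracy(y_true, y_pred))  # 1.0, plain accuracy is 0.0
```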
