This is a source-free domain adaptation repository based on PyTorch, developed by Wenxin Su. If you encounter any issues or have questions, please don't hesitate to contact Wenxin at [email protected]. It is also the official repository for the following works:
- [ARXIV'24] Unified Source-Free Domain Adaptation (LCFD)
- [CVPR'24] Source-Free Domain Adaptation with Frozen Multimodal Foundation Model (DIFO)
- [IJCV'23] Source-Free Domain Adaptation via Target Prediction Distribution Searching (TPDS)
- [NN'22] Semantic consistency learning on manifold for source data-free unsupervised domain adaptation (SCLM)
- [IROS'21] Model Adaptation through Hypothesis Transfer with Gradual Knowledge Distillation (GKD)
This repository also supports the following methods:
- Source, SHOT, NRC, COWA, AdaContrast, PLUE
We encourage contributions! Pull requests to add methods are very welcome and appreciated.
- [ARXIV'24] Proxy Denoising for Source-Free Domain Adaptation (ProDe); the code will be released soon.
- [ARXIV'24] Unified Source-Free Domain Adaptation (LCFD) and Code
- [CVPR'24] Source-Free Domain Adaptation with Frozen Multimodal Foundation Model, Code, and Chinese version
- [IJCV'23] Source-Free Domain Adaptation via Target Prediction Distribution Searching and Code
- [TMM'23] Progressive Source-Aware Transformer for Generalized Source-Free Domain Adaptation and Code
- [CAAI TRIT'22] Model adaptation via credible local context representation and Code
- [NN'22] Semantic consistency learning on manifold for source data-free unsupervised domain adaptation and Code
- [IROS'21] Model Adaptation through Hypothesis Transfer with Gradual Knowledge Distillation and Code
To use the repository, we provide a conda environment:
conda update conda
conda env create -f environment.yml
conda activate sfa
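After activating the environment, a quick sanity check confirms that PyTorch is installed and can see the GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"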
- Datasets (repository key and dataset name):
  - `office-31` (Office-31)
  - `office-home` (Office-Home)
  - `VISDA-C` (VISDA-C)
  - `domainnet126` (DomainNet, cleaned)
  - `imagenet_a` (ImageNet-A)
  - `imagenet_r` (ImageNet-R)
  - `imagenet_v2` (ImageNet-V2)
  - `imagenet_k` (ImageNet-Sketch)
You need to download the above datasets, then modify the image paths in each `.txt` file under the folder `./data/`. In addition, the class name files for each dataset are also under `./data/`. The prepared directory should look like:
├── data
│   ├── office-31
│   │   ├── amazon_list.txt
│   │   ├── classname.txt
│   │   ├── dslr_list.txt
│   │   ├── webcam_list.txt
│   ├── office-home
│   │   ├── Art_list.txt
│   │   ├── classname.txt
│   │   ├── Clipart_list.txt
│   │   ├── Product_list.txt
│   │   ├── RealWorld_list.txt
│   ├── ... ...
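The line format inside the list files is an assumption here: in SHOT-style repositories each line is `<image_path> <class_index>`, so updating the paths amounts to rewriting a common prefix. A minimal Python sketch (verify the line format and the old prefix against one of your own list files first):

```python
# Bulk-rewrite the image-path prefix in every list file under ./data/.
# Assumptions: each line looks like "<image_path> <class_index>" and all
# paths share a common old prefix -- check both before running.
from pathlib import Path

OLD_PREFIX = "/old/dataset/root"   # hypothetical prefix found in the lists
NEW_PREFIX = "/your/dataset/root"  # your local dataset root

for txt in Path("./data").rglob("*_list.txt"):
    lines = txt.read_text().splitlines()
    fixed = [line.replace(OLD_PREFIX, NEW_PREFIX, 1) for line in lines]
    txt.write_text("\n".join(fixed) + "\n")
    print(f"updated {txt}")
```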
For the ImageNet variations, modify the `${DATA_DIR}` in `conf.py` to point to the directory where the ImageNet variations datasets are stored.
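A minimal sketch of what the edit might look like (the exact variable names in `conf.py` are assumptions based on the `${DATA_DIR}` and `${CKPT_DIR}` placeholders used in this README; `${CKPT_DIR}` is covered in the source-model section below):

```python
# conf.py (sketch; variable names are assumptions -- check the actual file)
DATA_DIR = "/data/imagenet_variants"     # root holding imagenet_a, imagenet_r, imagenet_v2, imagenet_k
CKPT_DIR = "/checkpoints/source_models"  # where the source models live (used later)
```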
We provide config files for experiments.
- For office-31, office-home and VISDA-C, here is an example of training a source model:
CUDA_VISIBLE_DEVICES=0 python image_target_of_oh_vs.py --cfg "cfgs/office-home/source.yaml" SETTING.S 0
- For domainnet126, we follow AdaContrast to train the source model.
- For adapting to ImageNet variations, all pre-trained models available in Torchvision or timm can be used (see the snippet after this list).
- We also provide the pre-trained source models, which can be downloaded from here.
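For instance, loading an ImageNet-pretrained backbone from either library (the model names below are just examples):

```python
import torchvision.models as models
import timm

# any ImageNet-pretrained backbone can serve as the source model
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
vit = timm.create_model("vit_base_patch16_224", pretrained=True)
```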
After obtaining the source models, modify the `${CKPT_DIR}` in `conf.py` to your source model directory (see the sketch above). For office-31, office-home and VISDA-C, simply run the following Python file with the corresponding config file to execute source-free domain adaptation.
CUDA_VISIBLE_DEVICES=0 python image_target_of_oh_vs.py --cfg "cfgs/office-home/difo.yaml" SETTING.S 0 SETTING.T 1
For domainnet126 and the ImageNet variations:
CUDA_VISIBLE_DEVICES=0 python image_target_in_126.py --cfg "cfgs/domainnet126/difo.yaml" SETTING.S 0 SETTING.T 1