This repository contains the original code for the methods proposed in Wittich and Rottensteiner (2021) and Wittich (2023). The methods address unsupervised domain adaptation with neural networks based on appearance adaptation.
This implementation is written in Python and uses the PyTorch library.
This framework can be used to perform supervised training and various variants of unsupervised domain adaptation. On the one hand, it can be used to apply joint appearance adaptation (cf. the references above); on the other, it can be used to apply variants of instance transfer and adversarial representation transfer.
To use the code, follow these steps:

- Clone the repository.
- Install the conda environment defined in `environment.yml` by running `conda env create -f environment.yml` from an Anaconda prompt. Alternatively, you can install the packages manually; there is no strict dependency on the Python or PyTorch version.
- You may also need to install a JPEG 2000 decoder to run the examples.
- Activate the environment by running `conda activate jda`.
- Navigate into the `code/` folder and run an experiment: `python main.py path/to/config.yaml`
This framework uses configuration files to define experiments. A complete prototype serving as documentation can be found at `code/Documentation/_config_documentation.yaml`. I recommend having a look at the exemplary configuration files provided in the `runs/` folder.
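Configurations are plain YAML files, so they can also be inspected programmatically. A minimal sketch (the actual parsing logic in `main.py` may differ):

```python
import yaml  # PyYAML; included in the conda environment

# Load the documented configuration prototype (path taken from this README).
with open('code/Documentation/_config_documentation.yaml') as f:
    config = yaml.safe_load(f)

# Configs map to nested dicts; list the top-level sections.
for section in config:
    print(section)
```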
The following processing sequence serves as an example of how to use the framework to perform deep domain adaptation. In the example, source training is performed for the city of Bochum. The classifier is then adapted to the city of Heinsberg. The two cities were selected because they were captured in different seasons, which has a strong impact on the appearance of vegetation in the images. Some examples are shown below.
*(Example image patches: two from Bochum, two from Heinsberg.)*
To perform source training and domain adaptation, follow these steps:

- Download the GeoNRW dataset from here.
- Update the `PATHS.GeoNRW` attribute in the file `runs/local_paths.yaml`. The value should be set to the path of the root folder of the GeoNRW dataset (its subfolders should correspond to the city names). You may use `~CONFIG` to start a relative path.
- Navigate into the `code` folder and activate the conda environment by running `conda activate jda`.
- Source training:
  - Run `python main.py ../runs/source_training/bochum_training.yaml` to train the classifier in the source domain.
- Evaluating the classifier after source training:
  - Run `python main.py ../runs/source_training/bochum_eval.yaml` to evaluate the classifier on the test set of the source domain.
  - Run `python main.py ../runs/source_training/bochum_eval_heinsberg.yaml` to evaluate the classifier in the target domain without adaptation.
- Domain adaptation:
  - Run `python main.py ../runs/domain_adaptation_b2h/[10... - 60...].yaml` to perform the variants of domain adaptation.
  - Run `python main.py ../runs/domain_adaptation_b2h/[11... - 61...]_eval.yaml` to evaluate the variants of domain adaptation on the target domain.
The following images show the results of appearance adaptation using the variant with discriminator regularization. The results can be reproduced by running `python main.py ../runs/domain_adaptation_b2h/20_appa_dis_reg.yaml` (exemplary appearance adaptations will be stored in `../runs/domain_adaptation_b2h/2_appearance_adaptation_dis_reg/images/4_adapted_images/`).
*(Input (left) / adapted image (right).)*
Respective results on the test sets:

Strategy | Mean F1-score on target domain [%] | Config |
---|---|---|
Naive transfer | 53.5 | `runs/source_training/bochum_training.yaml` `runs/source_training/bochum_eval_heinsberg.yaml` |
Joint adapt. without regularization | 50.1 (-3.4) | `runs/domain_adaptation_b2h/10_appa_baseline.yaml` `runs/domain_adaptation_b2h/11_appa_baseline_eval.yaml` |
Joint adapt. with discriminator regularization | 62.8 (+9.3) | `runs/domain_adaptation_b2h/20_appa_dis_reg.yaml` `runs/domain_adaptation_b2h/21_appa_dis_reg_eval.yaml` |
Joint adapt. with auxiliary generator | 49.0 (-4.5) | `runs/domain_adaptation_b2h/30_appa_aux_gen.yaml` `runs/domain_adaptation_b2h/31_appa_aux_gen_eval.yaml` |
Adaptive batch normalization | 45.6 (-7.9) | `runs/domain_adaptation_b2h/40_adaptive_batch_normalization.yaml` `runs/domain_adaptation_b2h/41_adaptive_batch_normalization_eval.yaml` |
Instance transfer | 50.9 (-2.6) | `runs/domain_adaptation_b2h/50_instance_transfer.yaml` `runs/domain_adaptation_b2h/51_instance_transfer_eval.yaml` |
Representation transfer | 59.7 (+6.2) | `runs/domain_adaptation_b2h/60_representation_transfer.yaml` `runs/domain_adaptation_b2h/61_representation_transfer_eval.yaml` |
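For reference, the mean F1-score reported above is typically the unweighted mean of the per-class F1-scores. A generic sketch of the computation from a confusion matrix (not necessarily the exact evaluation code of this repository):

```python
import numpy as np

def mean_f1_from_confusion(cm: np.ndarray) -> float:
    """Mean F1-score over classes for a confusion matrix with
    rows = reference labels and columns = predicted labels."""
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # TP / predicted positives
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # TP / reference positives
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return float(f1.mean())
```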
The framework allows running a batch of experiments by specifying a list of configurations in YAML files and putting them in the same folder. Using the script `code/experiment_scheduler.py`, all configuration files in a specified folder are processed sequentially. This is useful if you want to run a batch of experiments with different parameters, e.g. for hyperparameter tuning, or to run domain adaptation between multiple domains.
An exemplary setup for hyperparameter tuning is provided in the folder `runs/source_training/tuning_example`. To run the batch of experiments, simply run `python experiment_scheduler.py runs/source_training/tuning_example`.
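Conceptually, the scheduler boils down to iterating over the configuration files in the given folder. A minimal sketch (the actual `experiment_scheduler.py` may differ in details such as error handling or file ordering):

```python
import subprocess
import sys
from pathlib import Path

# Run main.py once for every YAML configuration in the folder
# passed as the first command-line argument.
config_folder = Path(sys.argv[1])
for config_file in sorted(config_folder.glob('*.yaml')):
    print(f'Running experiment: {config_file}')
    subprocess.run([sys.executable, 'main.py', str(config_file)], check=True)
```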
In `eval.py`, the results of the experiments are presented using the `seaborn` library. The output of the example is shown below.
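As an illustration of this kind of visualization, here is a hedged sketch using `seaborn`; the data frame, its column names and the values are placeholders, not the repository's actual result format:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical results of a tuning run; real outputs will differ.
results = pd.DataFrame({
    'experiment': ['lr_1e-3', 'lr_1e-2', 'lr_1e-1'],
    'mean_f1': [61.2, 63.5, 58.9],
})

sns.barplot(data=results, x='experiment', y='mean_f1')
plt.ylabel('Mean F1-score [%]')
plt.show()
```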
To use your own domains/datasets, the following steps are required:

- Create a unique name for your domain.
- Implement a training data loader in `datamanagement.py` and add it to the function `prepare_training_dataset`.
- Implement a function that pre-loads a subset to (images, labels, names) and add it to the init function of the class `EvalDataset` in `datamanagement.py`.
- Extend the functions `idmap2color`, `color2idmap` and `denorm4print` in `tools.py` (see the sketch after this list for the first two).
- Create a configuration file and run it.
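To illustrate the colour-mapping step, here is a minimal sketch of what extensions of `idmap2color` and `color2idmap` could look like; the colour table and class names are invented for a hypothetical three-class domain, and the actual function signatures in `tools.py` may differ:

```python
import numpy as np

# Hypothetical colour table for a new domain with three classes
# (row index = class id, values = RGB colour used for visualization).
MY_DOMAIN_COLORS = np.array([
    [0, 0, 0],      # class 0: background
    [0, 255, 0],    # class 1: vegetation
    [255, 0, 0],    # class 2: building
], dtype=np.uint8)

def idmap2color(idmap: np.ndarray) -> np.ndarray:
    """Map an HxW array of class ids to an HxWx3 RGB image."""
    return MY_DOMAIN_COLORS[idmap]

def color2idmap(rgb: np.ndarray) -> np.ndarray:
    """Map an HxWx3 RGB image back to an HxW array of class ids."""
    idmap = np.zeros(rgb.shape[:2], dtype=np.int64)
    for class_id, color in enumerate(MY_DOMAIN_COLORS):
        idmap[np.all(rgb == color, axis=-1)] = class_id
    return idmap
```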
I provide a number of pre-trained models for various tasks. They are all based on the U-Net architecture with an Xception backbone from Segmentation Models Pytorch. The parameter "Depth" refers to the number of encoder stages (parameter `SEG_MODEL.UNET.DEPTH` in the config). All checkpoints contain the weights of the model and the optimiser. All models were trained with SGD with momentum, and it is highly recommended to use the same optimiser for fine-tuning. For the channel normalization, cf. paper [1] (inputs are usually normalized to zero mean and unit standard deviation).
Link | Input | Output | Depth | GSD | Task |
---|---|---|---|---|---|
LCC_IrRgGH_20cm_5cl | 4 Ch. Ir,R,G,Height | 5 Classes (cf. [1]) | 5 | 20 cm | Land cover classification |
LCC_IrRG_16cm_6cl | 3 Ch. Ir,R,G | 6 Classes (cf. [1] + Water) | 5 | 16 cm | Land cover classification |
BDD_IrRGx2_10m_2cl | 6 Ch. Ir,R,G (x2) | 2 Classes (Def./No Def.) | 4 | 10 m | Bi-temporal deforestation detection |
VL_IrRGx2_10m_3cl | 6 Ch. Ir,R,G (x2) | 3 Classes (No Dmg./Dmg./Clear-cut) | 4 | 10 m | Vitality loss classification |
RLT_IrRG_10m_reg | 3 Ch. Ir,R,G | 1 Channel (regression) | 4 | 10 m | Regression of remaining lifetime |
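A sketch of how such a checkpoint might be restored with Segmentation Models Pytorch, using the first model of the table as an example; the checkpoint file name and the keys `'model'` and `'optimizer'` are assumptions, so inspect the loaded dictionary if they differ:

```python
import torch
import segmentation_models_pytorch as smp

# Example: LCC_IrRgGH_20cm_5cl (4 input channels, 5 classes, depth 5).
model = smp.Unet(
    encoder_name='xception',
    encoder_depth=5,      # SEG_MODEL.UNET.DEPTH; depth-4 models also need
                          # a matching 4-entry decoder_channels tuple
    encoder_weights=None,
    in_channels=4,
    classes=5,
)

checkpoint = torch.load('LCC_IrRgGH_20cm_5cl.pt', map_location='cpu')
model.load_state_dict(checkpoint['model'])  # assumed key name

# Use the same optimiser type as during training (SGD with momentum);
# the learning rate here is only a placeholder.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer.load_state_dict(checkpoint['optimizer'])  # assumed key name
```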
Besides the documentation in `code/Documentation/_config_documentation.yaml`, a code prototype of the configuration is implemented in `config.py`. This copy allows using auto-completion, getting type hints when coding, and performing refactoring. If you want to change the configuration structure, it is suggested to first modify the YAML version at `code/Documentation/_config_documentation.yaml`. Afterwards, run `python config.py` and copy the auto-generated code to the class `Config` in `config.py`.
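To make the idea concrete, here is a minimal sketch of what such a generated prototype looks like; apart from `SEG_MODEL.UNET.DEPTH`, which is mentioned above, the attribute names are invented for illustration:

```python
# Nested classes mirror the YAML structure, so an IDE can offer
# auto-completion and type hints such as config.SEG_MODEL.UNET.DEPTH.
class Unet:
    DEPTH: int = 5  # number of encoder stages

class SegModel:
    UNET = Unet()

class Config:
    SEG_MODEL = SegModel()

config = Config()
print(config.SEG_MODEL.UNET.DEPTH)  # resolved statically by the IDE
```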
If you use this code for your research, please cite our paper
@article{Wittich2021,
title = {Appearance based deep domain adaptation for the classification of aerial images},
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
volume = {180},
pages = {82-102},
year = {2021},
issn = {0924-2716},
doi = {10.1016/j.isprsjprs.2021.08.004},
url = {https://www.sciencedirect.com/science/article/pii/S0924271621002045},
author = {D. Wittich and F. Rottensteiner},
keywords = {Domain Adaptation, Pixel-wise Classification, Deep Learning, Aerial Images, Remote Sensing, Appearance Adaptation},
}
and/or the GitHub repository
@misc{Wittich2023,
author = {Dennis Wittich},
title = {Joint Appearance Adaptation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/denniswittich/JointAppearanceAdaptation}}
}
This project is licensed under the terms of the MIT license.
If you have any questions, please contact me via GitHub or ResearchGate.
- This work uses Segmentation Models Pytorch.
- I thank my supervisor Prof. Franz Rottensteiner for his support and the Institute of Photogrammetry and GeoInformation, led by Prof. Christian Heipke, for providing the possibility to do this research.
[1] Wittich, D., Rottensteiner, F. (2021): Appearance based deep domain adaptation for the classification of aerial images. In: ISPRS Journal of Photogrammetry and Remote Sensing (180), 82-102. DOI: https://doi.org/10.1016/j.isprsjprs.2021.08.004
[2] Tsai, Y., Hung, W., Schulter, S., Sohn, K., Yang, M. and Chandraker, M., 2018. Learning to adapt structured output space for semantic segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7472–7481.
[3] Vu, T.-H., Jain, H., Bucher, M., Cord, M. and Perez, P., 2019. ADVENT: Adversarial entropy minimization for domain adaptation in semantic segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2512–2521.