This is the official repository accompanying the CVPR Workshop paper "Is Synthetic Data All We Need? Benchmarking the Robustness of Models Trained with Synthetic Images".
The repo depends on the following:
Python 3.8.5
PyTorch 2.2.1
CUDA 12.1
conda create -n bench-syn-clone python==3.8.5
conda activate bench-syn-clone
conda install pytorch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 pytorch-cuda=12.1 -c pytorch -c nvidia
pip install -r requirements.txt
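If conda is unavailable, a plain venv can stand in for the environment above (a sketch, not part of the official setup; the CUDA 12.1 PyTorch wheels still have to come from the pip/conda channels shown above):

```shell
# Alternative environment setup (sketch; assumes conda is not installed).
python3 -m venv bench-syn-clone
. bench-syn-clone/bin/activate
# pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1  # uncomment with network access
# pip install -r requirements.txt                                 # run from the repo root
```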
The pretrained model files used in our paper can be downloaded from here.
To evaluate your own model, place it in the pretrained_models/Pretrained_Models folder and edit the model.py file accordingly.
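The expected layout for a custom checkpoint can be sketched as follows (my_model.pth is a placeholder name; how the checkpoint is registered depends on your edit to model.py):

```shell
# Sketch: put a custom checkpoint where the evaluation scripts expect it.
# "my_model.pth" is a placeholder; model.py must be edited to load it.
mkdir -p pretrained_models/Pretrained_Models
# cp /path/to/my_model.pth pretrained_models/Pretrained_Models/   # your checkpoint here
```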
Please change the dataset_path and save_path in the config/default.yaml file for each evaluation metric.
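A minimal sketch of that edit (the key names come from this README; both right-hand-side paths are placeholders to replace with your own):

```shell
# Demo copy so the sketch is self-contained; in the repo, edit config/default.yaml directly.
mkdir -p config
[ -f config/default.yaml ] || printf 'dataset_path: TODO\nsave_path: TODO\n' > config/default.yaml
# Point the config at your data and output locations (placeholder paths).
sed -i 's|^dataset_path:.*|dataset_path: /path/to/datasets|' config/default.yaml
sed -i 's|^save_path:.*|save_path: /path/to/results|' config/default.yaml
```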
Please download the ImageNet-A and ImageNet-R datasets.
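The usual mirrors for these two sets are the Hendrycks et al. release tarballs (an assumption on our part; verify against the official dataset pages before downloading):

```shell
# Assumed download URLs for ImageNet-A / ImageNet-R (verify before use).
cat > natural_shift_urls.txt <<'EOF'
https://people.eecs.berkeley.edu/~hendrycks/imagenet-a.tar
https://people.eecs.berkeley.edu/~hendrycks/imagenet-r.tar
EOF
# xargs -n1 wget -q < natural_shift_urls.txt   # uncomment to fetch (large downloads)
```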
cd calibration
bash scripts/run_multiple.sh
bash scripts/run_clip.sh
Please download the ImageNet-9 dataset.
cd background_bias
bash scripts/run_multiple.sh
bash scripts/run_clip.sh
cd shape_bias
bash scripts/run_multiple.sh
bash scripts/run_clip.sh
Please download the FOCUS dataset and then run:
cd context_bias
bash scripts/run_multiple.sh
Download the iNaturalist, Places, and SUN datasets.
cd ood_detection
bash scripts/run_multiple.sh
bash scripts/run_clip.sh
Download the 2D-corruptions dataset from https://zenodo.org/records/2235448
cd corruptions
bash scripts/run_multiple_2dcc.sh
bash scripts/run_clip_2dcc.sh
Download the 3D-corruptions dataset from https://datasets.epfl.ch/3dcc/index.html
cd corruptions
bash scripts/run_multiple_3dcc.sh
bash scripts/run_clip_3dcc.sh
We use the Ares package for running our attacks.
cd adversarial_attack/ares/classification
bash scripts/run_multiple_fgsm.sh
bash scripts/run_multiple_pgd.sh
bash scripts/run_clip_fgsm.sh
bash scripts/run_clip_pgd.sh
BibTeX
@inproceedings{singh2024synthetic,
title={Is Synthetic Data All We Need? Benchmarking the Robustness of Models Trained with Synthetic Images},
author={Singh, Krishnakant and Navaratnam, Thanush and Holmer, Jannik and Schaub-Meyer, Simone and Roth, Stefan},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={2505--2515},
year={2024}
}
For questions, please contact Krishnakant Singh ([email protected]).