Paper | IJCAI 2018
$ python3 advgan.py --img images/0.jpg --target 4 --model Model_C --bound 0.3
A separate generator is trained for each of these settings. The code loads the appropriate trained model from the saved/ directory based on the given arguments. There are currently 22 generators, covering the different targets, the perturbation bounds (0.2 and 0.3), and the target models (only Model_C for now).
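The attack step this script performs can be sketched as: the trained generator emits a perturbation, which is clipped to the bound, added to the image, and the result is clipped back to the valid pixel range. Below is a minimal pure-Python sketch assuming [0, 1]-normalized inputs; the `generator` here is a toy stand-in, not one of the trained models in saved/.

```python
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def attack(pixels, generator, bound=0.3):
    """AdvGAN-style attack step: clamp each generated perturbation value
    to the L-inf bound, add it to the image, and clamp pixels to [0, 1]."""
    pert = [clamp(p, -bound, bound) for p in generator(pixels)]
    return [clamp(x + d, 0.0, 1.0) for x, d in zip(pixels, pert)]

# toy stand-in generator that pushes every pixel up by 0.5
adv = attack([0.2, 0.9], lambda px: [0.5] * len(px), bound=0.3)
```

Here the 0.5 perturbation is first clamped to the 0.3 bound, and the second pixel (0.9 + 0.3) is then clamped to 1.0.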
$ python3 train_advgan.py --model Model_C --gpu
$ python3 train_advgan.py --model Model_C --target 4 --thres 0.3 --gpu
# thres: Perturbation bound
Use --help for the other available arguments (epochs, batch_size, lr, etc.)
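Training optimizes the AdvGAN objective from the paper: an adversarial classification loss, a GAN loss, and a hinge loss that softly penalizes perturbations whose norm exceeds a constant. A pure-Python sketch of the hinge term only (the actual loss weights and batching live in the repo code):

```python
import math

def hinge_loss(perts, thres):
    """Soft perturbation penalty from the AdvGAN paper:
    mean over the batch of max(0, ||pert||_2 - thres)."""
    penalties = []
    for pert in perts:  # each pert is a flattened perturbation vector
        norm = math.sqrt(sum(p * p for p in pert))
        penalties.append(max(0.0, norm - thres))
    return sum(penalties) / len(penalties)

# a zero perturbation is inside the bound and costs nothing
print(hinge_loss([[0.0] * 784], thres=0.3))  # -> 0.0
```

Unlike the hard clipping noted below in the implementation changes, this term only discourages large perturbations during training rather than strictly enforcing the bound.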
$ python3 train_target_models.py --model Model_C
For TensorBoard visualization:
$ python3 generators.py
$ python3 discriminators.py
This code currently supports only the MNIST dataset. The notation mostly follows the paper. A few changes were made to get the model to work:
- The generator in the paper has a ReLU on the last layer. If the input data is normalized to [-1, 1], there would be no perturbation in the negative region; as expected, accuracies were poor (~10% untargeted), so the ReLU was removed. Data normalization also had a significant effect on performance: with [-1, 1] normalization accuracies were around 70%, but with [0, 1] normalization they were ~99%.
- Perturbations (pert) and adversarial images (x + pert) are clipped; training does not converge otherwise.
These results are for the following settings:
- Dataset: MNIST
- Data normalization: [0, 1]
- thres (perturbation bound): 0.3 and 0.2
- No ReLU at the end of the generator
- Epochs: 15
- Batch size: 128
- LR scheduler: step_size 5, gamma 0.1, initial lr 0.001
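With those scheduler settings, the learning rate steps down by a factor of 10 every 5 epochs over the 15-epoch run. A quick sketch of the resulting schedule (equivalent to a standard step-decay scheduler with step_size 5, gamma 0.1, initial lr 0.001):

```python
def step_lr(epoch, init_lr=0.001, step_size=5, gamma=0.1):
    """Step decay: multiply the learning rate by gamma every step_size epochs."""
    return init_lr * gamma ** (epoch // step_size)

for e in (0, 5, 10):
    print(e, step_lr(e))  # 0.001, then ~1e-4, then ~1e-5
```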
Target | Acc [thres: 0.3] | Acc [thres: 0.2] |
---|---|---|
Untargeted | 0.9921 | 0.8966 |
0 | 0.9643 | 0.4330 |
1 | 0.9822 | 0.4749 |
2 | 0.9961 | 0.8499 |
3 | 0.9939 | 0.8696 |
4 | 0.9833 | 0.6293 |
5 | 0.9918 | 0.7968 |
6 | 0.9584 | 0.4652 |
7 | 0.9899 | 0.6866 |
8 | 0.9943 | 0.8430 |
9 | 0.9922 | 0.7610 |
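As a quick summary of the targeted rows above, the mean accuracy over the ten targets can be computed directly from the table, showing how much the tighter 0.2 bound hurts targeted attacks:

```python
# per-target accuracies copied from the table above (targets 0-9)
acc_03 = [0.9643, 0.9822, 0.9961, 0.9939, 0.9833,
          0.9918, 0.9584, 0.9899, 0.9943, 0.9922]
acc_02 = [0.4330, 0.4749, 0.8499, 0.8696, 0.6293,
          0.7968, 0.4652, 0.6866, 0.8430, 0.7610]

mean_03 = sum(acc_03) / len(acc_03)  # ~0.985
mean_02 = sum(acc_02) / len(acc_02)  # ~0.681
print(round(mean_03, 3), round(mean_02, 3))
```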
Sample adversarial images, with the target model's predictions on them: 9, 3, 8, 8, 4, 3, 8, 3, 3, 8.