Official PyTorch implementation of *Anime to Real Clothing: Cosplay Costume Generation via Image-to-Image Translation* (https://arxiv.org/abs/2008.11479).
- Anaconda 3
- Python 3
- CPU or NVIDIA GPU + CUDA + cuDNN
```
python train.py --project_name cosplay_synthesis --dataset DATASET
```
```
DATASET
├── train
│   ├── a
│   │   ├── 0.png
│   │   ├── 1.png
│   │   ︙
│   │   └── n.png
│   └── b
│       ├── 0.png
│       ├── 1.png
│       ︙
│       └── n.png
└── test
    ├── a
    │   ├── 0.png
    │   ├── 1.png
    │   ︙
    │   └── n.png
    └── b
        ├── 0.png
        ├── 1.png
        ︙
        └── n.png
```
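The `a/` folders hold the input images and `b/` the corresponding target images; presumably filenames must match one-to-one across `a/` and `b/` within each split. A minimal sketch of a layout check (the `check_dataset_layout` helper is ours for illustration, not part of this repository):

```python
import os
import tempfile

def check_dataset_layout(root):
    """Return a list of problems found in a DATASET directory
    that should follow the train/test x a/b layout above."""
    problems = []
    for split in ("train", "test"):
        a_dir = os.path.join(root, split, "a")
        b_dir = os.path.join(root, split, "b")
        if not (os.path.isdir(a_dir) and os.path.isdir(b_dir)):
            problems.append(f"{split}: missing a/ or b/ directory")
            continue
        a_files = set(os.listdir(a_dir))
        b_files = set(os.listdir(b_dir))
        if a_files != b_files:
            # Symmetric difference = files present on one side only.
            problems.append(f"{split}: unpaired files {sorted(a_files ^ b_files)}")
    return problems

# Build a tiny valid example layout in a temp dir and verify it.
root = tempfile.mkdtemp()
for split in ("train", "test"):
    for side in ("a", "b"):
        d = os.path.join(root, split, side)
        os.makedirs(d)
        for name in ("0.png", "1.png"):
            open(os.path.join(d, name), "wb").close()

print(check_dataset_layout(root))  # → []
```

Running this before training can catch unpaired or misplaced images early.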
Add the `--continue_train` option to resume training; you can also control the starting epoch and resolution. By default, the model is loaded from the latest checkpoint, but you can select a specific epoch with the `--load_epoch` option.
```
--continue_train --start_epoch 47 --start_resolution 256
```
```
python test.py --model_path models/pretrained_model.pth --input_dir dataset/test/a --output_dir result
```
You can download the pre-trained model from `models/pretrained_unet_20200122.pth`.
We recommend using an anime character image with a simple background as the input image.