- This is a PyTorch implementation of the paper *Colorful Image Colorization*, made for an EECS 6691 course presentation.
- Note that the official repo only contains the demo code. Other implementation repositories contain errors in the loss function, preprocessing, and postprocessing, so I rewrote the code in PyTorch.
- To my knowledge, this is the only PyTorch implementation that includes both training and inference.
- The model can be trained on ImageNet as well as other datasets, including COCO.
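For context, the paper frames colorization as per-pixel classification over quantized ab color bins. Below is a minimal sketch of that multinomial cross-entropy; it is an assumption for illustration, not this repo's exact loss, which additionally soft-encodes targets over neighboring bins and applies class rebalancing to rare colors:

```python
import torch
import torch.nn.functional as F

def colorization_loss(logits, targets):
    """Per-pixel multinomial cross-entropy over quantized ab bins.

    logits:  (N, Q, H, W) raw scores over Q color bins
    targets: (N, H, W) index of the ground-truth ab bin per pixel

    Sketch only: the full method also soft-encodes each target over
    its nearest bins and reweights rare colors (class rebalancing).
    """
    return F.cross_entropy(logits, targets)

# Example shapes: Q = 313 quantized ab bins, as in the paper
logits = torch.randn(2, 313, 8, 8)
targets = torch.randint(0, 313, (2, 8, 8))
loss = colorization_loss(logits, targets)
```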
Link your dataset (ImageNet or another) into `data/`:

```shell
$ cd data
$ ln -s <your_dataset_root> ./
```
Specify your target dataset in `train.py` (lines 129 and 130). Be careful about the dataset format: use the provided `Dataset` module if your dataset is flat, like this:

```
|-- root
    |-- image1.jpg
    |-- image2.jpg
    |-- ...
```
Otherwise, use `ImageFolder` if the format looks like this:

```
|-- root
    |-- folder1
        |-- image1.jpg
        |-- image2.jpg
    |-- folder2
        |-- image1.jpg
        |-- image2.jpg
    |-- ...
```
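A minimal sketch of a flat-directory dataset like the first layout above (the class name `FlatImageDataset` is hypothetical, not the repo's actual module; for the second layout, `torchvision.datasets.ImageFolder` handles the class subfolders for you):

```python
import os
from PIL import Image
from torch.utils.data import Dataset

class FlatImageDataset(Dataset):
    """Loads every image sitting directly under `root` (no subfolders)."""
    EXTS = ('.jpg', '.jpeg', '.png')

    def __init__(self, root, transform=None):
        # Collect image files only, in a deterministic order
        self.paths = sorted(
            os.path.join(root, f) for f in os.listdir(root)
            if f.lower().endswith(self.EXTS)
        )
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert('RGB')
        return self.transform(img) if self.transform else img
```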
Then you can start training with:

```shell
$ python train.py
```

PS: for other training configurations, see the argument setup in `train.py`.
Due to time constraints, the model is still being trained. According to the original paper, it should be trained for 500k+ iterations, which takes several days. Results will be released once training finishes, but for now you can check the loss curves.
| Training | Validation |
|---|---|
| ![]() | ![]() |
Open `demo.ipynb` and choose whether to run inference with the pre-saved model or your own trained one.
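At inference time, the paper decodes the predicted per-pixel distribution into ab values with an "annealed mean" (softmax sharpened by a temperature, then averaged over bin centers). A sketch of that postprocessing, under the assumption that you have the network's logits and a `(Q, 2)` table of bin centers:

```python
import torch

def annealed_mean(logits, bin_centers, T=0.38):
    """Decode predicted ab-bin distributions into ab channel values.

    logits:      (N, Q, H, W) scores over Q quantized ab bins
    bin_centers: (Q, 2) ab coordinates of each bin's center
    T:           softmax temperature; the paper uses T = 0.38

    Sharpening with T interpolates between the distribution's mean
    (T = 1, desaturated) and its mode (T -> 0, spatially inconsistent).
    """
    probs = torch.softmax(logits / T, dim=1)                   # (N, Q, H, W)
    return torch.einsum('nqhw,qc->nchw', probs, bin_centers)   # (N, 2, H, W)
```

The returned `(N, 2, H, W)` ab map is then concatenated with the input L channel and converted from Lab back to RGB.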
Examples