Based on https://github.com/VainF/DeepLabV3Plus-Pytorch.git. Thanks to VainF.
VainF did most of the work in the original repo, but when I tried to train on my own dataset I ran into several problems: I could not specify custom paths for my dataset and pretrained model, I could not train directly on grayscale images because the original code is tightly coupled to the VOC dataset structure, and so on.
So I made some changes:
- No longer relies on a fixed dataset layout (such as VOC); the dataset paths are passed in as parameters instead
- Added a test (inference) process
- Added a way to load a pretrained backbone from a specified directory
New parameters for training:
- jpg_dir: full path to the directory containing the .jpg images
- png_dir: full path to the directory containing the mask .png files
- list_dir: directory containing train.txt and test.txt
- save_prediction_dir: full path to the directory where predicted files are saved
New parameters for testing:
- test_dir: full path to the directory of test images; .jpg and .png files are supported
- pretrained_backbone_dir: if specified, the pretrained backbone is downloaded to and loaded from this directory
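For reference, the new options above could be declared with argparse roughly like this. This is a minimal sketch based only on the parameter names listed in this README; the defaults, types, and help strings in the actual main.py may differ:

```python
import argparse

def add_custom_args(parser):
    """Sketch of the dataset/path options described above.

    Option names match this README; defaults are assumptions,
    not the exact definitions in main.py.
    """
    # training data locations
    parser.add_argument("--jpg_dir", type=str, default="./datasets/data/JPEGImages",
                        help="directory containing the input .jpg images")
    parser.add_argument("--png_dir", type=str, default="./datasets/data/SegmentationClassAug",
                        help="directory containing the mask .png files")
    parser.add_argument("--list_dir", type=str, default="./datasets/data/Segmentation",
                        help="directory containing train.txt and test.txt")
    parser.add_argument("--save_prediction_dir", type=str, default="./results/result",
                        help="directory where predicted files are saved")
    # test-time options
    parser.add_argument("--test_dir", type=str, default=None,
                        help="directory of test images (.jpg and .png supported)")
    parser.add_argument("--pretrained_backbone_dir", type=str, default=None,
                        help="download/load the pretrained backbone from this directory")
    return parser

parser = add_custom_args(argparse.ArgumentParser())
args = parser.parse_args(["--jpg_dir", "./my/jpgs", "--test_dir", "./my/test"])
```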
- install requirements
pip install -r requirements.txt
- download the dataset (if needed)
You need to download the additional labels from Dropbox or Tencent Weiyun. These labels come from DrSleep's repository.
- prepare the data in the layout below (you should use far more images than this minimal example)
/datasets
    /data
        /JPEGImages
        /SegmentationClassAug
        /Segmentation
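The train.txt and test.txt files under list_dir can be generated with a small helper like the one below. This is a hypothetical script, not part of the repository; it assumes the list files contain one file stem per line, as VOC-style list files usually do, so check an existing train.txt if your format differs:

```python
import os
import random

def make_lists(jpg_dir, list_dir, train_ratio=0.9, seed=0):
    """Split the .jpg images in jpg_dir into train.txt and test.txt.

    Writes one file stem per line (VOC-style). Illustrative helper,
    not code from the repository.
    """
    stems = sorted(os.path.splitext(f)[0] for f in os.listdir(jpg_dir)
                   if f.lower().endswith(".jpg"))
    random.Random(seed).shuffle(stems)           # deterministic shuffle
    n_train = int(len(stems) * train_ratio)
    os.makedirs(list_dir, exist_ok=True)
    for name, subset in (("train.txt", stems[:n_train]),
                         ("test.txt", stems[n_train:])):
        with open(os.path.join(list_dir, name), "w") as fh:
            fh.write("\n".join(subset) + "\n")

# example, matching the paths used in the commands below:
# make_lists("./datasets/data/JPEGImages", "./datasets/data/Segmentation")
```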
- start visdom (if needed)
python -m visdom.server
- start training
python main.py \
--jpg_dir ./datasets/data/JPEGImages \
--png_dir ./datasets/data/SegmentationClassAug \
--list_dir ./datasets/data/Segmentation \
--total_itrs 1 \
--num_classes 21 \
--crop_val --crop_size 513 \
--checkpoints ./results/checkpoints \
--save_prediction_dir ./results/result \
--val_interval 1 --save_val_results \
--enable_vis
- start test
python main.py \
--use_ckpt ./results/checkpoints/best_deeplabv3plus_mobilenet_os16.pth \
--save_prediction_dir ./results/predict_result \
--test_only --test_dir ./datasets/data/JPEGImages \
--pretrained_backbone_dir ./models \
--crop_val --crop_size 513