This section will show how to train existing models on supported datasets. The following training environments are supported:
- CPU
- single GPU
- single node multiple GPUs
- multiple nodes
You can also manage jobs with Slurm.
Important:
- You can change the evaluation interval during training by modifying train_cfg, e.g., train_cfg = dict(val_interval=10), which evaluates the model every 10 epochs.
- The default learning rate in all config files is for 8 GPUs. According to the Linear Scaling Rule, you need to set the learning rate proportional to the total batch size if you use a different number of GPUs or images per GPU, e.g., lr=0.01 for 8 GPUs * 1 img/gpu and lr=0.04 for 16 GPUs * 2 imgs/gpu (see the config sketch after this list).
- During training, log files and checkpoints will be saved to the working directory, which is specified by the CLI argument --work-dir. It defaults to ./work_dirs/CONFIG_NAME.
- If you want mixed precision training, simply specify the CLI argument --amp.
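The snippet below is a minimal sketch of how the evaluation interval and learning rate overrides could look in a config file, assuming the standard MMEngine config layout; the base config reference and the concrete lr value are illustrative assumptions, not values taken from the repository.
# Hypothetical override config (MMEngine-style); the base file and the lr
# value below are illustrative assumptions.
_base_ = ['./amp_yolox_x_8xb4-24e_shift_clear_daytime.py']

# Evaluate the model every 10 epochs instead of the base setting.
train_cfg = dict(val_interval=10)

# Linear Scaling Rule: the default lr targets 8 GPUs, so scale it with the
# total batch size (e.g., 4x the batch size -> 4x the lr).
optim_wrapper = dict(optimizer=dict(lr=0.04))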
By default, the model is put on a CUDA device; it falls back to CPU only when no CUDA device is available.
So if you want to train the model on CPU, you need to export CUDA_VISIBLE_DEVICES=-1
to disable GPU visibility first.
See MMEngine for more details.
CUDA_VISIBLE_DEVICES=-1 python tools/train.py ${CONFIG_FILE} [optional arguments]
An example of training the YOLOX detector on SHIFT clear-daytime on CPU:
CUDA_VISIBLE_DEVICES=-1 python tools/train.py configs/source/yolox/amp_yolox_x_8xb4-24e_shift_clear_daytime.py
If you want to train the model on a single GPU, you can directly use tools/train.py as follows.
python tools/train.py ${CONFIG_FILE} [optional arguments]
You can use export CUDA_VISIBLE_DEVICES=$GPU_ID
to select the GPU.
An example of training the YOLOX detector on SHIFT clear-daytime on a single GPU:
CUDA_VISIBLE_DEVICES=2 python tools/train.py configs/source/yolox/amp_yolox_x_8xb4-24e_shift_clear_daytime.py
We provide tools/dist_train.sh
to launch training on multiple GPUs.
The basic usage is as follows.
bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
If you would like to launch multiple jobs on a single machine, e.g., 2 jobs of 4-GPU training on a machine with 8 GPUs, you need to specify different ports (29500 by default) for each job to avoid communication conflicts.
For example, you can set the port in commands as follows.
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh ${CONFIG_FILE} 4
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 ./tools/dist_train.sh ${CONFIG_FILE} 4
An example of training the YOLOX detector on SHIFT clear-daytime on a single node with multiple GPUs:
bash ./tools/dist_train.sh configs/source/yolox/amp_yolox_x_8xb4-24e_shift_clear_daytime.py 8
If you launch with multiple machines simply connected via Ethernet, you can run the following commands:
On the first machine:
NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_train.sh $CONFIG $GPUS
On the second machine:
NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_train.sh $CONFIG $GPUS
Usually it is slow if you do not have high-speed networking such as InfiniBand.
Slurm is a good job scheduling system for computing clusters.
On a cluster managed by Slurm, you can use slurm_train.sh
to spawn training jobs.
It supports both single-node and multi-node training.
The basic usage is as follows.
bash ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR} ${GPUS}
An example of training the YOLOX detector on SHIFT clear-daytime with Slurm:
PORT=29501 \
GPUS_PER_NODE=8 \
SRUN_ARGS="--quotatype=reserved" \
bash ./tools/slurm_train.sh \
mypartition \
YOLOX_shift_clear_daytime \
configs/source/yolox/amp_yolox_x_8xb4-24e_shift_clear_daytime.py \
./work_dirs/YOLOX_shift_clear_daytime \
8
This section will show how to test existing models on supported datasets. The following testing environments are supported:
- CPU
- single GPU
- single node multiple GPUs
- multiple nodes
You can also manage jobs with Slurm.
Important:
- You can set the path for saving results by modifying the key outfile_prefix in the evaluator. For example, val_evaluator = dict(outfile_prefix='results/YOLOX_shift_from_clear_daytime'). Otherwise, a temporary file will be created and removed after evaluation.
- If you just want the formatted results without evaluation, you can set format_only=True. For example, test_evaluator = dict(type='SHIFTVideoMetric', metric='bbox', outfile_prefix='results/YOLOX_shift_from_clear_daytime', format_only=True) (see the config sketch after this list).
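As a rough sketch, the two evaluator settings above could be combined in a config as follows; giving val_evaluator the same type and metric as the test_evaluator example is an assumption about the config layout, not something taken from the repository.
# Sketch of evaluator overrides, assuming the same SHIFTVideoMetric setup as
# in the test_evaluator example above.
val_evaluator = dict(
    type='SHIFTVideoMetric',
    metric='bbox',
    # Keep the result files here instead of a temporary directory.
    outfile_prefix='results/YOLOX_shift_from_clear_daytime')
test_evaluator = dict(
    type='SHIFTVideoMetric',
    metric='bbox',
    outfile_prefix='results/YOLOX_shift_from_clear_daytime',
    # Only dump formatted results and skip metric computation.
    format_only=True)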
By default, the model is put on a CUDA device; it falls back to CPU only when no CUDA device is available.
So if you want to test the model on CPU, you need to export CUDA_VISIBLE_DEVICES=-1
to disable GPU visibility first.
See MMEngine for more details.
CUDA_VISIBLE_DEVICES=-1 python tools/test.py ${CONFIG_FILE} [optional arguments]
An example of testing the YOLOX clear-daytime model on CPU:
CUDA_VISIBLE_DEVICES=-1 python tools/test.py configs/source/yolox/amp_yolox_x_8xb4-24e_shift_clear_daytime.py --checkpoint checkpoints/yolox_x_8xb4-24e_shift_clear_daytime.pth
If you want to test the model on a single GPU, you can directly use tools/test.py as follows.
python tools/test.py ${CONFIG_FILE} [optional arguments]
You can use export CUDA_VISIBLE_DEVICES=$GPU_ID
to select the GPU.
An example of testing the YOLOX clear-daytime model on single GPU:
CUDA_VISIBLE_DEVICES=2 python tools/test.py configs/source/yolox/amp_yolox_x_8xb4-24e_shift_clear_daytime.py --checkpoint checkpoints/yolox_x_8xb4-24e_shift_clear_daytime.pth
We provide tools/dist_test.sh
to launch testing on multiple GPUs.
The basic usage is as follows.
bash ./tools/dist_test.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
An example of testing the YOLOX clear-daytime model on single node multiple GPUs:
bash ./tools/dist_test.sh configs/source/yolox/amp_yolox_x_8xb4-24e_shift_clear_daytime.py 8 --checkpoint checkpoints/yolox_x_8xb4-24e_shift_clear_daytime.pth
You can test on multiple nodes, which is similar to "Train on multiple nodes".
On a cluster managed by Slurm, you can use slurm_test.sh
to spawn testing jobs.
It supports both single-node and multi-node testing.
The basic usage is as follows.
bash ./tools/slurm_test.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${GPUS}
An example of testing the YOLOX clear-daytime model with Slurm:
PORT=29501 \
GPUS_PER_NODE=8 \
SRUN_ARGS="--quotatype=reserved" \
bash ./tools/slurm_test.sh \
mypartition \
YOLOX_clear_daytime \
configs/source/yolox/amp_yolox_x_8xb4-24e_shift_clear_daytime.py \
8 \
--checkpoint checkpoints/yolox_x_8xb4-24e_shift_clear_daytime.pth