
YOLOv5

Abstract

YOLOv5 is a family of object detection architectures and models pretrained on the COCO dataset. It represents Ultralytics' open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.

Requirements

| mindspore | ascend driver | firmware    | cann toolkit/kernel |
| :-------: | :-----------: | :---------: | :-----------------: |
| 2.3.1     | 24.1.RC2      | 7.3.0.1.231 | 8.0.RC2.beta1       |

Quick Start

Please refer to GETTING_STARTED in MindYOLO for details.

Training

- Distributed Training

It is easy to reproduce the reported results with the predefined training recipes. For distributed training on multiple Ascend 910 devices, please run:

# distributed training on multiple GPU/Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov5_log python train.py --config ./configs/yolov5/yolov5n.yaml --device_target Ascend --is_parallel True

The same msrun command can likewise be used to train the model on multiple GPU devices. Note: for more information about msrun configuration, please see here.
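For reference, a script launched by msrun is expected to initialize the communication group itself. Below is a minimal sketch of how data-parallel initialization typically looks in MindSpore; it is illustrative only and not the actual MindYOLO train.py logic.

```python
# Minimal sketch of MindSpore data-parallel initialization under msrun.
# Illustrative only -- not the actual MindYOLO train.py code.
import mindspore as ms
from mindspore.communication import init, get_rank, get_group_size

ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")
init()  # picks up rank and world size from the environment set by msrun
rank_id, device_num = get_rank(), get_group_size()
ms.set_auto_parallel_context(
    parallel_mode=ms.ParallelMode.DATA_PARALLEL,
    gradients_mean=True,
    device_num=device_num,
)
print(f"worker {rank_id}/{device_num} initialized")
```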

For a detailed description of all hyper-parameters, please refer to config.py.

Note: As the global batch size (batch_size x num_devices) is an important hyper-parameter, it is recommended to either keep the global batch size unchanged when reproducing results, or to scale the learning rate linearly with the new global batch size, as illustrated below.
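A minimal sketch of that linear scaling rule, assuming a reference learning rate of 0.01 (a placeholder for illustration, not necessarily the value in yolov5n.yaml):

```python
# Linear learning-rate scaling when the global batch size changes.
# NOTE: ref_lr = 0.01 is an assumed placeholder, not necessarily the
# value used in the MindYOLO recipe.
ref_devices, ref_batch_per_device, ref_lr = 8, 32, 0.01
new_devices, new_batch_per_device = 4, 32             # e.g. only 4 cards available

ref_global_batch = ref_devices * ref_batch_per_device  # 256
new_global_batch = new_devices * new_batch_per_device  # 128
new_lr = ref_lr * new_global_batch / ref_global_batch
print(new_lr)  # 0.005 -- halve the lr when the global batch size halves
```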

- Standalone Training

If you want to train or fine-tune the model on a smaller dataset without distributed training, please run:

# standalone training on a CPU/GPU/Ascend device
python train.py --config ./configs/yolov5/yolov5n.yaml --device_target Ascend

Validation and Test

To validate the accuracy of the trained model, you can use test.py and pass the checkpoint path with --weight.

python test.py --config ./configs/yolov5/yolov5n.yaml --device_target Ascend --weight /PATH/TO/WEIGHT.ckpt

To validate the accuracy of the trained model at a resolution of 1280, use test.py, passing the checkpoint path with --weight and the image size with --img_size.

python test.py --config ./configs/yolov5/yolov5n6.yaml --device_target Ascend --weight /PATH/TO/WEIGHT.ckpt --img_size 1280
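For context, the map metric reported in the tables below follows COCO-style evaluation. The following is an illustrative sketch of how such a score is conventionally computed with pycocotools; it is not the actual test.py implementation, and the file paths are placeholders.

```python
# Illustrative COCO mAP computation with pycocotools (not MindYOLO's test.py).
# File paths are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes("predictions.json")         # detections in COCO JSON format

ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()              # prints the full AP/AR table
print("mAP:", ev.stats[0])  # AP @ IoU=0.50:0.95, the headline "map" value
```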

Performance

Experiments are tested on Ascend 910* with MindSpore 2.3.1 in graph mode.

| model name | scale | cards | batch size | resolution | jit level | graph compile | ms/step | img/s  | map   | recipe | weight  |
| ---------- | ----- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------ | ----- | ------ | ------- |
| YOLOv5     | N     | 8     | 32         | 640x640    | O2        | 377.81s       | 520.79  | 491.56 | 27.4% | yaml   | weights |
| YOLOv5     | S     | 8     | 32         | 640x640    | O2        | 378.18s       | 526.49  | 486.30 | 37.6% | yaml   | weights |
| YOLOv5     | N6    | 8     | 32         | 1280x1280  | O2        | 494.36s       | 1543.35 | 165.87 | 35.7% | yaml   | weights |
| YOLOv5     | S6    | 8     | 32         | 1280x1280  | O2        | 524.91s       | 1514.98 | 168.98 | 44.4% | yaml   | weights |
| YOLOv5     | M6    | 8     | 32         | 1280x1280  | O2        | 572.32s       | 1769.17 | 144.70 | 51.1% | yaml   | weights |
| YOLOv5     | L6    | 8     | 16         | 1280x1280  | O2        | 800.34s       | 894.65  | 143.07 | 53.6% | yaml   | weights |
| YOLOv5     | X6    | 8     | 8          | 1280x1280  | O2        | 995.73s       | 864.43  | 74.04  | 54.5% | yaml   | weights |

Experiments are tested on Ascend 910 with MindSpore 2.3.1 in graph mode.

| model name | scale | cards | batch size | resolution | jit level | graph compile | ms/step | img/s  | map   | recipe | weight  |
| ---------- | ----- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------ | ----- | ------ | ------- |
| YOLOv5     | N     | 8     | 32         | 640x640    | O2        | 233.25s       | 650.57  | 393.50 | 27.3% | yaml   | weights |
| YOLOv5     | S     | 8     | 32         | 640x640    | O2        | 166.00s       | 650.14  | 393.76 | 37.6% | yaml   | weights |
| YOLOv5     | M     | 8     | 32         | 640x640    | O2        | 256.51s       | 712.31  | 359.39 | 44.9% | yaml   | weights |
| YOLOv5     | L     | 8     | 32         | 640x640    | O2        | 274.15s       | 723.35  | 353.91 | 48.5% | yaml   | weights |
| YOLOv5     | X     | 8     | 16         | 640x640    | O2        | 436.18s       | 569.96  | 224.58 | 50.5% | yaml   | weights |

Notes

  • map: mean average precision (mAP) measured on the validation set.
  • We follow the official YOLOv5 implementation to reproduce the P5-series models; the main difference is that we use a single-device batch size of 32, which differs from the official code.
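  • The ms/step and img/s columns are consistent with the relation img/s = cards x batch size / (ms/step / 1000); for example, for the YOLOv5-N row of the Ascend 910* table:

```python
# Sanity check of the throughput columns:
# img/s = cards * batch_size / (ms_per_step / 1000)
cards, batch_size, ms_per_step = 8, 32, 520.79  # YOLOv5-N row, Ascend 910* table
img_per_s = cards * batch_size / (ms_per_step / 1000)
print(round(img_per_s, 2))  # 491.56, matching the img/s column
```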

References

[1] Glenn Jocher. YOLOv5 release v6.1. https://github.com/ultralytics/yolov5/releases/tag/v6.1, 2022.