This project presents BED, an end-to-end oBject detection system for Edge Devices. BED integrates a deep neural network (DNN) deployed on the MAX78000 with I/O devices, as illustrated in the figure below. The DNN model for detection runs on the MAX78000, while the I/O devices consist of a camera for image acquisition and a screen for displaying the detection output.
This codebase is mainly contributed by Guanchu Wang, Zaid Pervaiz Bhat, Zhimeng Jiang, Yi-Wei Chen, Minhao Fan, Ruzhe Wei, Daochen Zha, and Alfredo Costilla Reyes.
Before training the model, clone and install the environment of ai8x-training. Once the installation is finished, copy this repository into the root directory of your local ai8x-training and train the model with:
conda activate ai8x-training
python train/YOLO_V1_Train_QAT.py
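The script trains a YOLOv1-style detector with quantization-aware training (QAT). As background, the sketch below shows how a YOLOv1 output grid is commonly decoded into bounding boxes; the grid size, box count, and class count (S=7, B=2, C=20 for VOC) are assumptions for illustration, and this is not the exact decoding code used in this repository.

```python
import numpy as np

def decode_yolov1(pred, S=7, B=2, C=20, conf_thresh=0.25):
    """Decode a YOLOv1-style S x S x (B*5 + C) grid into bounding boxes.

    Each grid cell predicts B boxes (x, y, w, h, confidence), with (x, y)
    relative to the cell and (w, h) relative to the whole image, plus C
    class probabilities. Returns (x_center, y_center, w, h, score, class_id)
    tuples with coordinates normalized to [0, 1].
    """
    boxes = []
    for row in range(S):
        for col in range(S):
            cell = pred[row, col]
            class_probs = cell[B * 5:]
            class_id = int(np.argmax(class_probs))
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:b * 5 + 5]
                score = float(conf * class_probs[class_id])
                if score < conf_thresh:
                    continue
                # Shift (x, y) from cell-relative to image-relative coordinates.
                boxes.append(((col + x) / S, (row + y) / S, w, h, score, class_id))
    return boxes

# Example on a random prediction tensor of the assumed shape.
print(len(decode_yolov1(np.random.rand(7, 7, 2 * 5 + 20))))
```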
Before synthesizing the pretrained model, clone and install the environment of ai8x-synthesis, which is maintained in a different branch.
Once the installation is finished, add the following files to your local ai8x-synthesis directory:
- Put sample_yolov1.npy into the directory ./test/ of ai8x-synthesis (a sketch of how such a sample can be generated follows this list).
- Put quantize_yolov1.sh into the directory ./scripts/ of ai8x-synthesis.
- Put yolo-224-hwc-ai85_MXIM.yaml into the directory ./networks/ of ai8x-synthesis.
- Put gen-demos-max78000-yolov1.sh into the directory ./ of ai8x-synthesis.
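The sample input mentioned above is used by ai8x-synthesis for its known-answer test of the generated code. The snippet below is only a hedged sketch of how such a file could be produced, assuming a 3x224x224 input with values in the signed 8-bit range; please consult the ai8x-synthesis documentation for the exact format it expects.

```python
import numpy as np

# Hypothetical sketch: generate a random sample input for the known-answer test.
# The (channels, height, width) shape and signed 8-bit value range are assumed
# from the 224x224 RGB input of this model; check the ai8x-synthesis docs for
# the format actually required.
sample = np.random.randint(-128, 128, size=(3, 224, 224), dtype=np.int64)
np.save('sample_yolov1.npy', sample)
```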
With all the above steps finished, you can use this command to quantize the pretrained model:
conda activate ai8x-synthesis
sh ./scripts/quantize_yolov1.sh
The pretrained model can be downloaded as Yolov1_checkpoint.pth.tar.
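Conceptually, the quantization step converts the floating-point checkpoint into the 8-bit weights required by the MAX78000. The snippet below illustrates symmetric per-tensor 8-bit weight quantization as a general idea only; the actual conversion is performed by ai8x-synthesis and handles hardware-specific details beyond this sketch.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float weights to int8 (illustration only)."""
    max_abs = float(np.max(np.abs(weights)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

w = np.random.randn(16, 3, 3, 3).astype(np.float32)  # dummy conv weights
w_q, w_scale = quantize_int8(w)
w_approx = w_q.astype(np.float32) * w_scale           # approximate dequantization
```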
After quantization, you can use this command to synthesize the quantized model:
sh gen-demos-max78000-yolov1.sh
The quantized model is available in Yolov1_checkpoint-q.pth.tar.
The generated C code of the network is also available in yolov1.tar.gz.
Please follow the tutorial to deploy the model to the MAX78000 with the BED GUI.
Please follow the tutorial to use the MAX78000 for real-time object detection.
We focus on a case study for offline evaluation. Detection results on randomly selected images from the VOC2007 test set are shown below:
BED shows real-time detection results on the screen of the board. Several selected results are shown below:
For a detailed demonstration, please see our demo video.
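If you reproduce the offline evaluation, a standard way to compare predicted boxes against VOC ground truth is intersection over union (IoU). The helper below is a generic sketch of that metric and is not code from this repository.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x_min, y_min, x_max, y_max)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted box versus a ground-truth box.
print(round(iou((50, 50, 150, 150), (60, 60, 160, 160)), 2))  # ~0.68
```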
We gratefully acknowledge the technical support from Maxim Integrated.
If you find this project useful, please cite our work as follows:
@misc{wang2022bed,
title={BED: A Real-Time Object Detection System for Edge Devices},
author={Guanchu Wang and Zaid Pervaiz Bhat and Zhimeng Jiang and Yi-Wei Chen and Daochen Zha and Alfredo Costilla Reyes and Afshin Niktash and Gorkem Ulkar and Erman Okman and Xia Hu},
year={2022},
eprint={2202.07503},
archivePrefix={arXiv},
primaryClass={cs.CV}
}