This code is based on the original YOLOv3 TensorFlow implementation. The main purpose is to improve performance and make the model usable on edge devices.
$ git clone https://github.com/buiduchanh/TF_yolov3.git
Two files are required as follows:
xxx/xxx.jpg 18.19,6.32,424.13,421.83,20 323.86,2.65,640.0,421.94,20
xxx/xxx.jpg 48,240,195,371,11 8,12,352,498,14
# image_path x_min,y_min,x_max,y_max,class_id x_min,y_min,...,class_id
# make sure that x_max < width and y_max < height
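For reference, here is a minimal sketch of how one line in this format could be parsed; the helper name parse_annotation_line is illustrative and not part of this repository:

def parse_annotation_line(line):
    # "image_path x_min,y_min,x_max,y_max,class_id ..." -> (path, list of boxes)
    parts = line.strip().split()
    image_path, box_fields = parts[0], parts[1:]
    boxes = []
    for field in box_fields:
        x_min, y_min, x_max, y_max, class_id = map(float, field.split(","))
        boxes.append((x_min, y_min, x_max, y_max, int(class_id)))
    return image_path, boxes

# parse_annotation_line("xxx/xxx.jpg 48,240,195,371,11 8,12,352,498,14")
# -> ("xxx/xxx.jpg", [(48.0, 240.0, 195.0, 371.0, 11), (8.0, 12.0, 352.0, 498.0, 14)])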
person
bicycle
car
...
toothbrush
We provide kmeans.py to calculate anchors for your dataset, similar to coco_anchor.txt
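For intuition, the following is a minimal sketch of the usual anchor-clustering idea (k-means on box widths and heights with a 1 - IoU distance); it is only an assumption about the approach, not the repository's kmeans.py:

import numpy as np

def iou_wh(boxes, centroids):
    # boxes: (N, 2) widths/heights, centroids: (k, 2) widths/heights
    inter = np.minimum(boxes[:, None, 0], centroids[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + centroids[None, :, 0] * centroids[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iterations=100):
    boxes = np.asarray(boxes, dtype=np.float32)
    centroids = boxes[np.random.choice(len(boxes), k, replace=False)].copy()
    for _ in range(iterations):
        # Assign each box to the centroid with the highest IoU (smallest 1 - IoU distance).
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centroids[c] = boxes[assign == c].mean(axis=0)
    # Return anchors sorted by area.
    return centroids[np.argsort(centroids[:, 0] * centroids[:, 1])]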
Then edit ./core/config.py to make the necessary configurations:
__C.YOLO.CLASSES = "./data/classes/voc.names"
__C.TRAIN.ANNOT_PATH = "./data/dataset/voc_train.txt"
__C.TEST.ANNOT_PATH = "./data/dataset/voc_test.txt"
If you want to use MobileNetV2 as the backbone instead of Darknet53, set the following parameters in the config:
__C.YOLO.BACKBONE_MOBILE = True
__C.YOLO.GT_PER_GRID = 3
Train the model and monitor the training process with TensorBoard:
$ python train.py
$ tensorboard --logdir ./data
We will update these results as soon as possible:
- MobileNetV2
- Darknet53
- Using focal loss
- Added batch normalization
- Convert the model for use on edge devices
- Adding channel pruning
- Using DIoU loss instead of GIoU loss (increases mAP by ~5%; see the sketch after this list)
- Adaptively Spatial Feature Fusion (ASFF), which increases mAP by ~10%
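For reference, here is a minimal sketch of the DIoU computation for one pair of boxes in (x_min, y_min, x_max, y_max) format. It is a NumPy illustration of the formula DIoU = IoU - d^2/c^2 (squared center distance over the squared enclosing-box diagonal), not the repository's loss code:

import numpy as np

def diou(box1, box2):
    # Boxes are (x_min, y_min, x_max, y_max).
    x1, y1 = np.maximum(box1[:2], box2[:2])
    x2, y2 = np.minimum(box1[2:], box2[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    iou = inter / (area1 + area2 - inter)

    # Squared distance between the two box centers.
    c1 = (np.asarray(box1[:2]) + np.asarray(box1[2:])) / 2.0
    c2 = (np.asarray(box2[:2]) + np.asarray(box2[2:])) / 2.0
    center_dist = np.sum((c1 - c2) ** 2)

    # Squared diagonal length of the smallest box enclosing both.
    ex1, ey1 = np.minimum(box1[:2], box2[:2])
    ex2, ey2 = np.maximum(box1[2:], box2[2:])
    diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2

    return iou - center_dist / diag

# The corresponding DIoU loss for a prediction/ground-truth pair is 1 - diou(pred_box, gt_box).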
Stronger-Yolo
focal-loss
kl-loss
YOLOv3 object detection now has a TensorFlow implementation that can be trained on your own data
Implementing YOLO v3 in Tensorflow (TF-Slim)
Understanding YOLO