Commit 4924dd6: merge latest codes
Parent: 69dc518

256 files changed: +20769 / -1798 lines


README_en.md (+1 / -1)

```diff
@@ -10,7 +10,7 @@
 </div>

 **Notes:**
-- The Licence of **PaddleYOLO** is **[GPL 3.0](LICENSE)**, the codes of [YOLOv5](configs/yolov5),[YOLOv6](configs/yolov6),[YOLOv7](configs/yolov7) and [YOLOv8](configs/yolov8) will not be merged into [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection). Except for these three YOLO models, other YOLO models are recommended to use in [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection), **which will be the first to release the latest progress of PP-YOLO series detection model**;
+- The Licence of **PaddleYOLO** is **[GPL 3.0](LICENSE)**, the codes of [YOLOv5](configs/yolov5),[YOLOv6](configs/yolov6),[YOLOv7](configs/yolov7) and [YOLOv8](configs/yolov8) will not be merged into [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection). Except for these YOLO models, other YOLO models are recommended to use in [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection), **which will be the first to release the latest progress of PP-YOLO series detection model**;
 - To use **PaddleYOLO**, **PaddlePaddle-2.3.2 or above is recommended**,please refer to the [official website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html) to download the appropriate version. **For Windows platforms, please install the paddle develop version**;
 - **PaddleYOLO's [Roadmap](https://github.com/PaddlePaddle/PaddleYOLO/issues/44)** issue collects feature requests from user, welcome to put forward any opinions and suggestions.

```

configs/convnext/README.md (+2 / -2)

```diff
@@ -5,8 +5,8 @@

 | Model | Input size | Images/GPU | LR schedule | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Params(M) | FLOPs(G) | Download | Config |
 | :------------- | :------- | :-------: | :------: | :------------: | :---------------------: | :----------------: |:---------: | :------: |:---------------: |
-| PP-YOLOE-tiny ConvNeXt | 640 | 16 | 36e | 44.6 | 63.3 | 33.04 | 13.87 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_convnext_tiny_36e_coco.pdparams) | [config](./ppyoloe_convnext_tiny_36e_coco.yml) |
-| YOLOX-s ConvNeXt | 640 | 8 | 36e | 44.6 | 65.3 | 36.20 | 27.52 | [download](https://paddledet.bj.bcebos.com/models/yolox_convnext_s_36e_coco.pdparams) | [config](./yolox_convnext_s_36e_coco.yml) |
+| PP-YOLOE-ConvNeXt-tiny | 640 | 16 | 36e | 44.6 | 63.3 | 33.04 | 13.87 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_convnext_tiny_36e_coco.pdparams) | [config](./ppyoloe_convnext_tiny_36e_coco.yml) |
+| YOLOX-ConvNeXt-s | 640 | 8 | 36e | 44.6 | 65.3 | 36.20 | 27.52 | [download](https://paddledet.bj.bcebos.com/models/yolox_convnext_s_36e_coco.pdparams) | [config](./yolox_convnext_s_36e_coco.yml) |
 | YOLOv5-s ConvNeXt | 640 | 8 | 36e | 42.4 | 65.3 | 34.54 | 17.96 | [download](https://paddledet.bj.bcebos.com/models/yolov5_convnext_s_36e_coco.pdparams) | [config](./yolov5_convnext_s_36e_coco.yml) |

```

configs/convnext/ppyoloe_convnext_tiny_36e_coco.yml (+1 / -1)

```diff
@@ -29,7 +29,7 @@ ConvNeXt:
 PPYOLOEHead:
   static_assigner_epoch: 12
   nms:
-    nms_top_k: 10000
+    nms_top_k: 1000
     keep_top_k: 300
     score_threshold: 0.01
     nms_threshold: 0.7
```
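
For context, `nms_top_k` caps how many of the highest-scoring candidates enter NMS at all, while `keep_top_k` caps the final detections, so shrinking 10000 to 1000 mainly trims pre-NMS work. A hedged sketch of the usual semantics of these four knobs (not PaddleDetection's implementation; function names are hypothetical):

```python
# Sketch of how nms_top_k / keep_top_k / thresholds typically interact.
# Illustrative only; this is not PaddleDetection's NMS implementation.
import numpy as np

def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms_sketch(boxes, scores, nms_top_k=1000, keep_top_k=300,
               score_threshold=0.01, nms_threshold=0.7):
    # 1) Drop low-confidence candidates.
    keep = scores >= score_threshold
    boxes, scores = boxes[keep], scores[keep]
    # 2) Only the nms_top_k highest-scoring boxes enter NMS at all;
    #    lowering 10000 -> 1000 mainly saves compute at this step.
    order = np.argsort(-scores)[:nms_top_k]
    boxes, scores = boxes[order], scores[order]
    # 3) Greedy NMS: suppress boxes overlapping a kept box above nms_threshold.
    kept, idx = [], list(range(len(scores)))
    while idx:
        i = idx.pop(0)
        kept.append(i)
        idx = [j for j in idx if iou(boxes[i], boxes[j]) <= nms_threshold]
    # 4) Return at most keep_top_k final detections.
    kept = kept[:keep_top_k]
    return boxes[kept], scores[kept]
```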

configs/datasets/coco_detection.yml (+12 / -13)

```diff
@@ -2,20 +2,19 @@ metric: COCO
 num_classes: 80

 TrainDataset:
-  !COCODataSet
-    image_dir: train2017
-    anno_path: annotations/instances_train2017.json
-    dataset_dir: dataset/coco
-    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
+  name: COCODataSet
+  image_dir: train2017
+  anno_path: annotations/instances_train2017.json
+  dataset_dir: dataset/coco
+  data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

 EvalDataset:
-  !COCODataSet
-    image_dir: val2017
-    anno_path: annotations/instances_val2017.json
-    dataset_dir: dataset/coco
-    allow_empty: true
+  name: COCODataSet
+  image_dir: val2017
+  anno_path: annotations/instances_val2017.json
+  dataset_dir: dataset/coco

 TestDataset:
-  !ImageFolder
-    anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt)
-    dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path'
+  name: ImageFolder
+  anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt)
+  dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path'
```
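
This switch from PyYAML custom tags (`!COCODataSet`) to a plain `name:` key means the file parses with a stock YAML loader, and the dataset class is then resolved from the name. A minimal sketch of the difference (the registry lookup below is hypothetical, not PaddleDetection's actual code):

```python
# Sketch: why `name: COCODataSet` is friendlier than the `!COCODataSet` tag.
# The REGISTRY/resolve step is hypothetical, not PaddleDetection's real code.
import yaml

plain = """
TrainDataset:
  name: COCODataSet
  image_dir: train2017
"""
cfg = yaml.safe_load(plain)            # works with a stock loader
ds_cfg = cfg["TrainDataset"]

REGISTRY = {"COCODataSet": dict}       # stand-in for real dataset classes
ds_cls = REGISTRY[ds_cfg.pop("name")]  # resolve the class by name
dataset = ds_cls(**ds_cfg)

tagged = """
TrainDataset:
  !COCODataSet
    image_dir: train2017
"""
try:
    yaml.safe_load(tagged)             # fails: unknown custom tag
except yaml.YAMLError as exc:
    print("custom tag needs a registered constructor:", exc)
```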

configs/datasets/voc.yml (+12 / -12)

```diff
@@ -3,19 +3,19 @@ map_type: 11point
 num_classes: 20

 TrainDataset:
-  !VOCDataSet
-    dataset_dir: dataset/voc
-    anno_path: trainval.txt
-    label_list: label_list.txt
-    data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult']
+  name: VOCDataSet
+  dataset_dir: dataset/voc
+  anno_path: trainval.txt
+  label_list: label_list.txt
+  data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult']

 EvalDataset:
-  !VOCDataSet
-    dataset_dir: dataset/voc
-    anno_path: test.txt
-    label_list: label_list.txt
-    data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult']
+  name: VOCDataSet
+  dataset_dir: dataset/voc
+  anno_path: test.txt
+  label_list: label_list.txt
+  data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult']

 TestDataset:
-  !ImageFolder
-    anno_path: dataset/voc/label_list.txt
+  name: ImageFolder
+  anno_path: dataset/voc/label_list.txt
```

configs/focalnet/README.md (+1 / -1)

```diff
@@ -3,7 +3,7 @@
 ## Model Zoo
 ### FocalNet on COCO

-| Model | Input size | Images/GPU | LR schedule | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Download | Config |
+| Model | Input size | Images/GPU | LR schedule | Inference time (fps) | mAP<sup>val<br>0.5:0.95 | Download | Config |
 | :--------- | :---- | :-------: | :------: | :---------------------: | :----------------: | :-------: |:------: |
 | PP-YOLOE+ FocalNet-tiny | 640 | 8 | 36e | - | 46.6 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_focalnet_tiny_36e_coco.pdparams) | [config](./ppyoloe_plus_focalnet_tiny_36e_coco.yml) |
```

configs/pphuman/README.md (new file, +84 lines; translated from Chinese)

Simplified Chinese | [English](README.md)

# PP-YOLOE Human Detection Models

The PaddleDetection team provides PP-YOLOE-based pedestrian detection models that users can download and use directly. The models used in PP-Human were trained on an internal business dataset; CrowdHuman training configs are also provided so the models can be trained on open-source data.
A cleaned, COCO-format copy of the CrowdHuman dataset is available for [download](https://bj.bcebos.com/v1/paddledet/data/crowdhuman.zip); it contains a single detection class, `pedestrian(1)`. The original dataset can be downloaded from the [official site](http://www.crowdhuman.org/download.html).

The deployment versions of these models are used in the [PP-Human](../../deploy/pipeline/) project.

| Model | Dataset | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Download | Config |
|:---------|:-------:|:------:|:------:| :----: | :------:|
|PP-YOLOE-s| CrowdHuman | 42.5 | 77.9 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_36e_crowdhuman.pdparams) | [config](./ppyoloe_crn_s_36e_crowdhuman.yml) |
|PP-YOLOE-l| CrowdHuman | 48.0 | 81.9 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_36e_crowdhuman.pdparams) | [config](./ppyoloe_crn_l_36e_crowdhuman.yml) |
|PP-YOLOE-s| Business dataset | 53.2 | - | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_36e_pphuman.pdparams) | [config](./ppyoloe_crn_s_36e_pphuman.yml) |
|PP-YOLOE-l| Business dataset | 57.8 | - | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_36e_pphuman.pdparams) | [config](./ppyoloe_crn_l_36e_pphuman.yml) |
|PP-YOLOE+_t-aux(320)| Business dataset | 45.7 | 81.2 | [download](https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.pdparams) | [config](./ppyoloe_plus_crn_t_auxhead_320_60e_pphuman.yml) |

**Notes:**
- PP-YOLOE models are trained with mixed precision on 8 GPUs. If the **number of GPUs** or the **batch size** changes, adjust the learning rate according to **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)**, as shown in the sketch below this list.
- For detailed usage, see [ppyoloe](../ppyoloe#getting-start).
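
The linear scaling rule in the note above is easy to apply mechanically; here is a minimal helper (illustrative only, not part of the repo):

```python
# Linear scaling rule for the learning rate, as stated in the note above.
# This helper is illustrative only; it is not part of PaddleDetection.
def scaled_lr(lr_default, bs_default, gpus_default, bs_new, gpus_new):
    return lr_default * (bs_new * gpus_new) / (bs_default * gpus_default)

# Example: default 0.001 at batch size 8 on 8 GPUs; moving to batch size 4
# on 4 GPUs scales the LR down by 4x.
print(scaled_lr(0.001, 8, 8, 4, 4))  # 0.00025
```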

# YOLOv3 Human Detection Model

See the [Human_YOLOv3 page](./pedestrian_yolov3/README_cn.md).

# PP-YOLOE Cigarette Detection Model

The PP-YOLOE-based cigarette detection model is one component of PP-Human's detection-based action recognition solution; for how to use it for smoking-behavior recognition in PP-Human, see the [PP-Human action recognition module](../../deploy/pipeline/docs/tutorials/pphuman_action.md). The model detects a single class, cigarette. Due to data-source restrictions, the training data cannot currently be released. To improve detection quality, the model is initialized from weights trained on the small-object dataset VisDrone (see [visdrone](../visdrone)).

| Model | Dataset | mAP<sup>val<br>0.5:0.95 | mAP<sup>val<br>0.5 | Download | Config |
|:---------|:-------:|:------:|:------:| :----: | :------:|
| PP-YOLOE-s | Cigarette business dataset | 39.7 | 79.5 |[download](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.pdparams) | [config](./ppyoloe_crn_s_80e_smoking_visdrone.yml) |

# PP-HGNet Phone-Calling Recognition Model

Phone-calling action recognition is implemented with PP-HGNet; see the [PP-Human action recognition module](../../deploy/pipeline/docs/tutorials/pphuman_action.md) for details. The model is trained with the [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/PP-HGNet.md#3.3) suite. The inference model is available for download:

| Model | Dataset | Acc | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| PP-HGNet | Business dataset | 86.85 |[download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | - |

# HRNet Human Keypoint Model

The human keypoint model works together with the ST-GCN model in the [skeleton-based action recognition](../../deploy/pipeline/docs/tutorials/pphuman_action.md) solution. The keypoint model uses HRNet; for details, see the keypoint page [KeyPoint](../keypoint/README.md). The trained model is available for download.

| Model | Dataset | AP<sup>val<br>0.5:0.95 | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| HRNet | Business dataset | 87.1 |[download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) | [config](./hrnet_w32_256x192.yml) |

# ST-GCN Skeleton-Based Action Recognition Model

The keypoint model and the [ST-GCN](https://arxiv.org/abs/1801.07455) model together implement the [skeleton-based action recognition](../../deploy/pipeline/docs/tutorials/pphuman_action.md) solution.
The ST-GCN model is trained with [PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/applications/PPHuman).
The inference model is available for download.

| Model | Dataset | AP<sup>val<br>0.5:0.95 | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| ST-GCN | Business dataset | 87.1 |[download](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | [config](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/applications/PPHuman/configs/stgcn_pphuman.yaml) |

# PP-TSM Video Classification Model

The [video-classification-based action recognition](../../deploy/pipeline/docs/tutorials/pphuman_action.md) solution is built on `PP-TSM`.
The PP-TSM model is trained with [PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo/tree/develop/applications/FightRecognition).
The inference model is available for download.

| Model | Dataset | Acc | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| PP-TSM | Combined open-source datasets | 89.06 |[download](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip) | [config](https://github.com/PaddlePaddle/PaddleVideo/tree/develop/applications/FightRecognition/pptsm_fight_frames_dense.yaml) |

# PP-HGNet and PP-LCNet Attribute Recognition Models

Pedestrian attribute recognition is implemented with PP-HGNet and PP-LCNet; see the [PP-Human attribute recognition module](../../deploy/pipeline/docs/tutorials/pphuman_attribute.md) for details. The models are trained with the [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/PP-LCNet.md) suite. Inference models are available for download.

| Model | Dataset | mA | Download | Config |
|:---------|:-------:|:------:| :----: | :------:|
| PP-HGNet_small | Business dataset | 95.4 |[download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | - |
| PP-LCNet | Business dataset | 94.5 |[download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip) | [config](https://github.com/PaddlePaddle/PaddleClas/blob/develop/ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml) |

## Citation
```
@article{shao2018crowdhuman,
  title={CrowdHuman: A Benchmark for Detecting Human in a Crowd},
  author={Shao, Shuai and Zhao, Zijian and Li, Boxun and Xiao, Tete and Yu, Gang and Zhang, Xiangyu and Sun, Jian},
  journal={arXiv preprint arXiv:1805.00123},
  year={2018}
}
```

configs/pphuman/pedestrian_yolov3/README.md (new file, +50 lines; filename inferred from the inference command and the Chinese counterpart below)

English | [简体中文](README_cn.md)
# PaddleDetection applied for specific scenarios

We provide some models implemented by PaddlePaddle to detect objects in specific scenarios; users can download the models and use them in these scenarios.

| Task | Algorithm | Box AP | Download | Configs |
|:---------------------|:---------:|:------:| :----: |:------:|
| Pedestrian Detection | YOLOv3 | 51.8 | [model](https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams) | [config](./pedestrian_yolov3_darknet.yml) |

## Pedestrian Detection

The main application of pedestrian detection is intelligent monitoring. In this scenario, photos of pedestrians are taken by surveillance cameras in public areas, and pedestrian detection is then conducted on these photos.

### 1. Network

The network for detecting pedestrians is YOLOv3, whose backbone is Darknet53.

### 2. Configuration for training

PaddleDetection provides users with a configuration file [yolov3_darknet53_270e_coco.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) to train YOLOv3 on the COCO dataset. Compared with this file, we modify the following parameters to train for pedestrian detection:

* num_classes: 1
* dataset_dir: dataset/pedestrian

### 3. Accuracy

The accuracy of the model trained and evaluated on our private data is as follows:

AP at IoU=.50:.05:.95 is 0.518.

AP at IoU=.50 is 0.792.

### 4. Inference

Users can employ the model to conduct inference:

```
export CUDA_VISIBLE_DEVICES=0
python -u tools/infer.py -c configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml \
                         -o weights=https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams \
                         --infer_dir configs/pphuman/pedestrian_yolov3/demo \
                         --draw_threshold 0.3 \
                         --output_dir configs/pphuman/pedestrian_yolov3/demo/output
```

Some inference results are visualized below:

![](../../../docs/images/PedestrianDetection_001.png)

![](../../../docs/images/PedestrianDetection_004.png)

configs/pphuman/pedestrian_yolov3/README_cn.md (new file, +51 lines; translated from Chinese, filename inferred from the language-switch link above)

[English](README.md) | Simplified Chinese
# Specialized Vertical-Domain Detection Models

We provide PaddlePaddle-based detection models for different scenarios, which users can download and use directly.

| Task | Algorithm | Accuracy (Box AP) | Download | Config |
|:---------------------|:---------:|:------:| :----: | :------:|
| Pedestrian detection | YOLOv3 | 51.8 | [download](https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml) |

## Pedestrian Detection

The main application of pedestrian detection is intelligent surveillance. In surveillance scenarios, pedestrians are mostly captured from the viewpoint of cameras monitoring public areas, and detection is then run on the captured images.

### 1. Network

YOLOv3 with a Darknet53 backbone.

### 2. Training configuration

PaddleDetection provides the configuration file [yolov3_darknet53_270e_coco.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) for training YOLOv3 on the COCO dataset. Compared with it, we modified the following parameters when training for pedestrian detection:

* num_classes: 1
* dataset_dir: dataset/pedestrian

### 3. Accuracy

On our internal surveillance-scenario data, the model accuracy is:

AP at IoU=.50 is 0.792.

AP at IoU=.50:.05:.95 is 0.518.

### 4. Inference

Users can run pedestrian detection with our trained model:

```
export CUDA_VISIBLE_DEVICES=0
python -u tools/infer.py -c configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml \
                         -o weights=https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams \
                         --infer_dir configs/pphuman/pedestrian_yolov3/demo \
                         --draw_threshold 0.3 \
                         --output_dir configs/pphuman/pedestrian_yolov3/demo/output
```

Example results:

![](../../../docs/images/PedestrianDetection_001.png)

![](../../../docs/images/PedestrianDetection_004.png)

Four binary image files added (previews omitted): 466 KB, 521 KB, 472 KB, 506 KB.

configs/pphuman/pedestrian_yolov3/pedestrian_yolov3_darknet.yml (new file, +29 lines; filename inferred from the inference command above)

```yaml
_BASE_: [
  '../../datasets/coco_detection.yml',
  '../../runtime.yml',
  '../../yolov3/_base_/optimizer_270e.yml',
  '../../yolov3/_base_/yolov3_darknet53.yml',
  '../../yolov3/_base_/yolov3_reader.yml',
]

snapshot_epoch: 5
weights: https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams

num_classes: 1

TrainDataset:
  !COCODataSet
    dataset_dir: dataset/pedestrian
    anno_path: annotations/instances_train2017.json
    image_dir: train2017
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

EvalDataset:
  !COCODataSet
    dataset_dir: dataset/pedestrian
    anno_path: annotations/instances_val2017.json
    image_dir: val2017

TestDataset:
  !ImageFolder
    anno_path: configs/pphuman/pedestrian_yolov3/pedestrian.json
```
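
Both this file and the CrowdHuman config below compose their settings from `_BASE_` files. Roughly, the composition behaves like a recursive dict merge in which the current file's keys win; a minimal sketch under that assumed semantics (not PaddleDetection's actual loader):

```python
# Hedged sketch of _BASE_ composition: bases are merged in order, then keys
# defined in the current file override them. Not PaddleDetection's loader.
def deep_merge(dst: dict, src: dict) -> dict:
    for key, val in src.items():
        if isinstance(val, dict) and isinstance(dst.get(key), dict):
            deep_merge(dst[key], val)          # recurse into nested sections
        else:
            dst[key] = val                     # scalars/lists: src wins
    return dst

base = {'num_classes': 80, 'TrainDataset': {'dataset_dir': 'dataset/coco'}}
override = {'num_classes': 1, 'TrainDataset': {'dataset_dir': 'dataset/pedestrian'}}
print(deep_merge(base, override))
# {'num_classes': 1, 'TrainDataset': {'dataset_dir': 'dataset/pedestrian'}}
```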

configs/pphuman/ppyoloe_crn_l_36e_crowdhuman.yml (new file, +55 lines; filename inferred from the weights path and the model zoo table above)

```yaml
_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  '../ppyoloe/_base_/optimizer_300e.yml',
  '../ppyoloe/_base_/ppyoloe_crn.yml',
  '../ppyoloe/_base_/ppyoloe_reader.yml',
]
log_iter: 100
snapshot_epoch: 4
weights: output/ppyoloe_crn_l_36e_crowdhuman/model_final

pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
depth_mult: 1.0
width_mult: 1.0

num_classes: 1
TrainDataset:
  !COCODataSet
    image_dir: ""
    anno_path: annotations/train.json
    dataset_dir: dataset/crowdhuman
    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']

EvalDataset:
  !COCODataSet
    image_dir: ""
    anno_path: annotations/val.json
    dataset_dir: dataset/crowdhuman

TestDataset:
  !ImageFolder
    anno_path: annotations/val.json
    dataset_dir: dataset/crowdhuman

TrainReader:
  batch_size: 8

epoch: 36
LearningRate:
  base_lr: 0.001
  schedulers:
    - !CosineDecay
      max_epochs: 43
    - !LinearWarmup
      start_factor: 0.
      epochs: 1

PPYOLOEHead:
  static_assigner_epoch: -1
  nms:
    name: MultiClassNMS
    nms_top_k: 1000
    keep_top_k: 100
    score_threshold: 0.01
    nms_threshold: 0.6
```
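
For reference, this schedule warms the LR up linearly from zero over the first epoch, then cosine-decays it over a 43-epoch horizon even though training stops at epoch 36, so the final LR stays above zero. A rough sketch of the resulting curve (illustrative only, assuming the standard half-cosine form; not PaddleDetection's scheduler code):

```python
# Illustrative LR curve for the schedule above: 1-epoch linear warmup from 0,
# then cosine decay over a 43-epoch horizon, trained for only 36 epochs.
# Not PaddleDetection's scheduler implementation.
import math

BASE_LR, WARMUP_EPOCHS, COSINE_MAX_EPOCHS = 0.001, 1.0, 43.0

def lr_at(epoch: float) -> float:
    if epoch < WARMUP_EPOCHS:                   # LinearWarmup, start_factor 0.
        return BASE_LR * (epoch / WARMUP_EPOCHS)
    # CosineDecay: half-cosine from BASE_LR toward 0 at COSINE_MAX_EPOCHS.
    return 0.5 * BASE_LR * (1 + math.cos(math.pi * epoch / COSINE_MAX_EPOCHS))

for e in (0.5, 1, 18, 36):
    print(f"epoch {e:>4}: lr = {lr_at(e):.6f}")
# At epoch 36 the LR is still ~6.4e-5 because the horizon extends to 43.
```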
