fix doc links (#2164)
Ben-Louis authored and Tau-J committed Apr 6, 2023
1 parent 6036733 commit 3b81734
Showing 9 changed files with 15 additions and 15 deletions.
4 changes: 2 additions & 2 deletions README_CN.md
@@ -18,7 +18,7 @@
    
<b><font size="5">MMPose 1.0</font></b>
<sup>
-<a href="https://mmpose.readthedocs.io/en/latest/overview.html">
+<a href="https://mmpose.readthedocs.io/zh_CN/latest/overview.html">
<i><font size="4">TRY</font></i>
</a>
</sup>
@@ -88,7 +88,7 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb-
- 2022-10-14: MMPose [v0.29.0](https://github.com/open-mmlab/mmpose/releases/tag/v0.29.0) has been released. Major updates include:
  - Added [DEKR](https://arxiv.org/abs/2104.02300) (CVPR'2021). See the [model page](/configs/body/2d_kpt_sview_rgb_img/dekr/coco/hrnet_coco.md) for details
  - Added [CID](https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Contextual_Instance_Decoupling_for_Robust_Multi-Person_Pose_Estimation_CVPR_2022_paper.html) (CVPR'2022). See the [model page](/configs/body/2d_kpt_sview_rgb_img/cid/coco/hrnet_coco.md) for details
-- 2022-09-01: The public beta of **MMPose v1.0.0** has been released \[ [Code](https://github.com/open-mmlab/mmpose/tree/1.x) | [Docs](https://mmpose.readthedocs.io/en/latest/) \]. You are welcome to try it and share your feedback
+- 2022-09-01: The public beta of **MMPose v1.0.0** has been released \[ [Code](https://github.com/open-mmlab/mmpose/tree/1.x) | [Docs](https://mmpose.readthedocs.io/zh_CN/latest/) \]. You are welcome to try it and share your feedback
- 2022-02-28: [MMDeploy](https://github.com/open-mmlab/mmdeploy) v0.3.0 supports deployment of MMPose models
- 2021-12-29: The OpenMMLab platform is now officially live! Welcome to try the MMPose-based [pose estimation demo](https://platform.openmmlab.com/web-demo/demo/poseestimation)

@@ -60,4 +60,4 @@ Results on COCO val2017 with detector having human AP of 56.4 on COCO val2017 dataset
| [pose_hrnet_w48_udp](/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_384x288_udp.py) | 384x288 | 0.772 | 0.910 | 0.835 | 0.820 | 0.945 | [ckpt](https://download.openmmlab.com/mmpose/top_down/udp/hrnet_w48_coco_384x288_udp-0f89c63e_20210223.pth) | [log](https://download.openmmlab.com/mmpose/top_down/udp/hrnet_w48_coco_384x288_udp_20210223.log.json) |
| [pose_hrnet_w32_udp_regress](/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192_udp_regress.py) | 256x192 | 0.758 | 0.908 | 0.823 | 0.812 | 0.943 | [ckpt](https://download.openmmlab.com/mmpose/top_down/udp/hrnet_w32_coco_256x192_udp_regress-be2dbba4_20210222.pth) | [log](https://download.openmmlab.com/mmpose/top_down/udp/hrnet_w32_coco_256x192_udp_regress_20210222.log.json) |

-Note that UDP also adopts the unbiased encoding/decoding algorithm of [DARK](https://mmpose.readthedocs.io/en/latest/papers/techniques.html#div-align-center-darkpose-cvpr-2020-div).
+Note that UDP also adopts the unbiased encoding/decoding algorithm of [DARK](https://mmpose.readthedocs.io/en/0.x/papers/techniques.html#div-align-center-darkpose-cvpr-2020-div).
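Background, not part of this commit: the unbiased decoding this note refers to is DARK's distribution-aware refinement of the heatmap argmax. A sketch, assuming the heatmap is locally Gaussian:

```latex
% Hedged sketch of DARK's decoding step (background, not from this commit).
% m is the integer argmax of the predicted heatmap; H is the log-heatmap.
\mu = m - \left( \nabla^2 H(m) \right)^{-1} \nabla H(m)
```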
4 changes: 2 additions & 2 deletions demo/MMPose_Tutorial.ipynb
@@ -463,7 +463,7 @@
"\n",
"### Add a new dataset\n",
"\n",
"There are two methods to support a customized dataset in MMPose. The first one is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. TopdownCOCODataset), as described in the [document](https://mmpose.readthedocs.io/en/latest/tutorials/2_new_dataset.html#reorganize-dataset-to-existing-format). The second one is to add a new dataset class. In this tutorial, we give an example of the second method.\n",
"There are two methods to support a customized dataset in MMPose. The first one is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. TopdownCOCODataset), as described in the [document](https://mmpose.readthedocs.io/en/0.x/tutorials/2_new_dataset.html#reorganize-dataset-to-existing-format). The second one is to add a new dataset class. In this tutorial, we give an example of the second method.\n",
"\n",
"We first download the demo dataset, which contains 100 samples (75 for training and 25 for validation) selected from COCO train2017 dataset. The annotations are stored in a different format from the original COCO format.\n",
"\n"
@@ -925,7 +925,7 @@
"source": [
"### Create a config file\n",
"\n",
"In the next step, we create a config file which configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html). A common practice to create a config file is deriving from a existing one. In this tutorial, we load a config file that trains a HRNet on COCO dataset, and modify it to adapt to the COCOTiny dataset."
"In the next step, we create a config file which configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/0.x/tutorials/0_config.html). A common practice to create a config file is deriving from a existing one. In this tutorial, we load a config file that trains a HRNet on COCO dataset, and modify it to adapt to the COCOTiny dataset."
]
},
{
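Aside, not part of this commit: a minimal sketch of the config-derivation step the cell above describes, using mmcv's `Config`; the source config path and COCOTiny fields are illustrative:

```python
# Hedged sketch: derive a config from an existing HRNet-on-COCO one.
# Paths and field names are illustrative, not taken from this commit.
from mmcv import Config

cfg = Config.fromfile(
    'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/'
    'hrnet_w32_coco_256x192.py')

# Re-point the training split at the COCOTiny data.
cfg.data_root = 'data/coco_tiny'
cfg.data.train.type = 'TopDownCOCOTinyDataset'
cfg.data.train.ann_file = f'{cfg.data_root}/train.json'
cfg.data.train.img_prefix = f'{cfg.data_root}/images/'
```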
4 changes: 2 additions & 2 deletions demo/docs/2d_animal_demo.md
@@ -7,7 +7,7 @@
We provide a demo script to test a single image, given the ground-truth json file.

*Pose Model Preparation:*
-The pre-trained pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/latest/topics/animal.html).
+The pre-trained pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/0.x/topics/animal.html).
Take [macaque model](https://download.openmmlab.com/mmpose/animal/resnet/res50_macaque_256x192-98f1dd3a_20210407.pth) as an example:

```shell
@@ -113,7 +113,7 @@ python demo/top_down_video_demo_with_mmdet.py \
**Other Animals**

For other animals, we have also provided some pre-trained animal detection models (1-class models). Supported models can be found in [det model zoo](/demo/docs/mmdet_modelzoo.md).
-The pre-trained animal pose estimation model can be found in [pose model zoo](https://mmpose.readthedocs.io/en/latest/topics/animal.html).
+The pre-trained animal pose estimation model can be found in [pose model zoo](https://mmpose.readthedocs.io/en/0.x/topics/animal.html).

```shell
python demo/top_down_video_demo_with_mmdet.py \
2 changes: 1 addition & 1 deletion demo/docs/2d_face_demo.md
@@ -9,7 +9,7 @@
We provide a demo script to test a single image, given the ground-truth json file.

*Face Keypoint Model Preparation:*
-The pre-trained face keypoint estimation model can be found in [model zoo](https://mmpose.readthedocs.io/en/latest/topics/face.html).
+The pre-trained face keypoint estimation model can be found in [model zoo](https://mmpose.readthedocs.io/en/0.x/topics/face.html).
Take [aflw model](https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth) as an example:

```shell
6 changes: 3 additions & 3 deletions demo/docs/2d_hand_demo.md
@@ -9,7 +9,7 @@
We provide a demo script to test a single image, given the ground-truth json file.

*Hand Pose Model Preparation:*
-The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/latest/topics/hand%282d%2Ckpt%2Crgb%2Cimg%29.html).
+The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/0.x/topics/hand%282d%2Ckpt%2Crgb%2Cimg%29.html).
Take [onehand10k model](https://download.openmmlab.com/mmpose/top_down/resnet/res50_onehand10k_256x256-e67998f6_20200813.pth) as an example:

```shell
@@ -50,7 +50,7 @@ Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection)

*Hand Box Model Preparation:* The pre-trained hand box estimation model can be found in [det model zoo](/demo/docs/mmdet_modelzoo.md).

-*Hand Pose Model Preparation:* The pre-trained hand pose estimation model can be downloaded from [pose model zoo](https://mmpose.readthedocs.io/en/latest/topics/hand%282d%2Ckpt%2Crgb%2Cimg%29.html).
+*Hand Pose Model Preparation:* The pre-trained hand pose estimation model can be downloaded from [pose model zoo](https://mmpose.readthedocs.io/en/0.x/topics/hand%282d%2Ckpt%2Crgb%2Cimg%29.html).

```shell
python demo/top_down_img_demo_with_mmdet.py \
@@ -80,7 +80,7 @@ Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection)

*Hand Box Model Preparation:* The pre-trained hand box estimation model can be found in [det model zoo](/demo/docs/mmdet_modelzoo.md).

-*Hand Pose Model Preparation:* The pre-trained hand pose estimation model can be found in [pose model zoo](https://mmpose.readthedocs.io/en/latest/topics/hand%282d%2Ckpt%2Crgb%2Cimg%29.html).
+*Hand Pose Model Preparation:* The pre-trained hand pose estimation model can be found in [pose model zoo](https://mmpose.readthedocs.io/en/0.x/topics/hand%282d%2Ckpt%2Crgb%2Cimg%29.html).

```shell
python demo/top_down_video_demo_with_mmdet.py \
2 changes: 1 addition & 1 deletion demo/docs/webcam_demo.md
@@ -63,7 +63,7 @@ Detailed configurations can be found in the config file.
```

- **Configure pose estimation models**
-In this demo we use two [top-down](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap) pose estimation models for humans and animals respectively. Users can choose models from the [MMPose Model Zoo](https://mmpose.readthedocs.io/en/latest/modelzoo.html). To apply different pose models on different instance types, you can add multiple pose estimator nodes with `cls_names` set accordingly.
+In this demo we use two [top-down](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap) pose estimation models for humans and animals respectively. Users can choose models from the [MMPose Model Zoo](https://mmpose.readthedocs.io/en/0.x/modelzoo.html). To apply different pose models on different instance types, you can add multiple pose estimator nodes with `cls_names` set accordingly.

```python
# 'TopDownPoseEstimatorNode':
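Aside, not part of this commit: applying different pose models to different instance types, as described above, amounts to listing one estimator node per class set. A sketch with placeholder paths; exact node fields may vary across webcam-demo versions:

```python
# Hedged sketch: two pose estimator nodes selected via `cls_names`.
# Config and checkpoint values are placeholders, not from this commit.
pose_estimator_nodes = [
    dict(
        type='TopDownPoseEstimatorNode',
        name='human pose estimator',
        model_config='<path to a human top-down config>',
        model_checkpoint='<human checkpoint url>',
        cls_names=['person']),
    dict(
        type='TopDownPoseEstimatorNode',
        name='animal pose estimator',
        model_config='<path to an animal top-down config>',
        model_checkpoint='<animal checkpoint url>',
        cls_names=['cat', 'dog', 'horse', 'sheep', 'cow']),
]
```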
2 changes: 1 addition & 1 deletion docs/zh_cn/get_started.md
@@ -131,7 +131,7 @@ python demo/top_down_img_demo.py \
--out-img-root vis_results
```

-For more examples and details, see the [demo folder](/demo) and the [demo documents](https://mmpose.readthedocs.io/en/latest/demo.html)
+For more examples and details, see the [demo folder](/demo) and the [demo documents](https://mmpose.readthedocs.io/en/0.x/demo.html)

## How to Train a Model

4 changes: 2 additions & 2 deletions docs/zh_cn/language.md
@@ -1,3 +1,3 @@
-## <a href='https://mmpose.readthedocs.io/en/latest/'>English</a>
+## <a href='https://mmpose.readthedocs.io/en/0.x/'>English</a>

-## <a href='https://mmpose.readthedocs.io/zh_CN/latest/'>简体中文</a>
+## <a href='https://mmpose.readthedocs.io/zh_CN/0.x/'>简体中文</a>
