README.md (+1 −1)
@@ -14,7 +14,7 @@ Motion Latent Diffusion (MLD) is a **text-to-motion** and **action-to-motion** d
 </p>

 ## 🚩 News
-
+- [2023/06/20] [MotionGPT](https://github.com/OpenMotionLab/MotionGPT) is released! **A unified motion-language model**. Do all your motion tasks in [MotionGPT](https://github.com/OpenMotionLab/MotionGPT).
 - [2023/03/08] Add [the script](https://github.com/ChenFengYe/motion-latent-diffusion/blob/main/scripts/tsne.py) for latent-space visualization and [the script](https://github.com/ChenFengYe/motion-latent-diffusion/blob/main/scripts/flops.py) for counting floating-point operations (FLOPs).
 - [2023/02/28] **MLD got accepted by CVPR 2023**!
 - [2023/02/02] Release the action-to-motion task; please refer to [the config](https://github.com/ChenFengYe/motion-latent-diffusion/blob/main/configs/config_mld_humanact12.yaml) and [the pre-trained model](https://drive.google.com/file/d/1G9O5arldtHvB66OPr31oE_rJG1bH_R39/view).