# Learning 3D Human Dynamics from Video

This project is a modified version of the original project ([Project Page](https://akanazawa.github.io/human_dynamics/)) by Angjoo Kanazawa*, Jason Zhang*, Panna Felsen*, and Jitendra Malik, University of California, Berkeley.

(* Equal contribution)

This fork adds support for 2D keypoint files produced by OpenPose as an external system. Make sure you have AlphaPose and OpenPose installed, and update the paths to them.
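For reference, producing the 2D keypoint files with OpenPose might look like the following (a sketch assuming a standard OpenPose build; the video path and output directory are illustrative):

```
# Write one JSON keypoint file per frame; skip rendering for speed.
./build/examples/openpose/openpose.bin \
    --video demo_data/penn_action-2278.mp4 \
    --write_json openpose_output/ \
    --display 0 --render_pose 0
```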

![Teaser Image](resources/overview.jpg)

### Requirements (updated)

- Python 3 (tested on version 3.6)
- [TensorFlow](https://www.tensorflow.org/) (tested on version 2.0)
- [PyTorch](https://pytorch.org/) for AlphaPose, PoseFlow, and NMR (tested on version 1.1.3)
- [AlphaPose/PoseFlow](https://github.com/akanazawa/AlphaPose)
- [Neural Mesh Renderer](https://github.com/daniilidis-group/neural_renderer)
for rendering results. See below.
- [CUDA](https://developer.nvidia.com/cuda-downloads) (tested on CUDA 11.2 with a GeForce 940MX)
- ffmpeg (tested on version 4.1.3)

There is currently no CPU-only support.
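To confirm your environment matches these versions, a quick check like the following can help (adjust the interpreter name to your setup):

```
python3 --version
python3 -c "import tensorflow as tf; print(tf.__version__)"
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
ffmpeg -version | head -n 1
nvcc --version   # reports the installed CUDA toolkit version
```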

### License
Please note that while our code is under BSD, the SMPL model and datasets we use have their own licenses that must be followed.

### Contributions
- Windows build and Unity port. Thanks George @ZjuSxh! https://github.com/Zju-George/human_dynamics

### Installation

#### Setup virtualenv
```
virtualenv venv_hmmr -p python3
source venv_hmmr/bin/activate
pip install -U pip
pip install numpy # Some of the required packages need numpy to already be installed.
deactivate
source venv_hmmr/bin/activate
pip install -r requirements.txt
```
Tested in a Conda environment with Python 3.6.
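If you use Conda instead of virtualenv, an equivalent setup might look like this (a sketch; the environment name is arbitrary):

```
conda create -n venv_hmmr python=3.6
conda activate venv_hmmr
pip install -U pip
pip install numpy   # some required packages need numpy pre-installed
pip install -r requirements.txt
```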

Follow all the instructions from the original project to install it:

#### Install External Dependencies.
Neural Mesh Renderer and AlphaPose for rendering results:
```
cd src/external
sh install_external.sh
```

The above script also clones my fork of [AlphaPose/PoseFlow](https://github.com/akanazawa/AlphaPose),
which is necessary to run the demo to extract tracks of people in videos. Please
follow the directions in [the installation](https://github.com/akanazawa/AlphaPose/tree/pytorch#installation),
in particular running `pip install -r requirements.txt` from
`src/external/AlphaPose` and downloading the trained models.

If you have a pre-installed version of AlphaPose, symlink the directory in
`src/external`.
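For example, assuming an existing AlphaPose checkout at `~/AlphaPose` (the path is illustrative):

```
ln -s ~/AlphaPose src/external/AlphaPose
```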
The only change in my fork is a very minor modification to `demo.py` on the AlphaPose pytorch branch: see [this commit](https://github.com/akanazawa/AlphaPose/commit/ed9cd3c458f1e61145c1b10f87bd37cba53233cd) and copy over the changes in `demo.py`.

Install the latest versions of [AlphaPose](https://github.com/MVIG-SJTU/AlphaPose) and [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose).

### Demo

1. Download the pre-trained models (also available on [Google Drive](https://drive.google.com/file/d/1LlF9Nci8OtkqKfGwHLh7wWx1xRs7oSyF/view)). Place the `models` folder as a top-level
directory.

```
wget http://angjookanazawa.com/cachedir/hmmr/hmmr_models.tar.gz && tar -xf hmmr_models.tar.gz
```
2. Download the demo data.
```
wget http://angjookanazawa.com/cachedir/hmmr/hmmr_demo_data.tar.gz && tar -xf hmmr_demo_data.tar.gz
```

3. Run the demo. This code runs AlphaPose/PoseFlow for you.
Please make sure AlphaPose can be run on a directory of images if you are having
any issues.
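As a sanity check, you can first try AlphaPose directly on a folder of frames (a sketch following the AlphaPose pytorch-branch README; paths are illustrative):

```
cd src/external/AlphaPose
python demo.py --indir examples/demo --outdir examples/res
```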

Sample usage:

```
# Run on a single video:
python -m demo_video_openpose --vid_path demo_data/penn_action-2278.mp4 --load_path models/hmmr_model.ckpt-1119816
# If there are multiple people in the video, you can also pass a track index:
python -m demo_video --track_id 1 --vid_path demo_data/insta_variety-tabletennis_43078913_895055920883203_6720141320083472384_n_short.mp4 --load_path models/hmmr_model.ckpt-1119816
```
