This repo contains the official PyTorch implementation of our paper:
NeMo: 3D Neural Motion Fields from Multiple Video Instances of the Same Action
by Kuan-Chieh (Jackson) Wang, Zhenzhen Weng, Maria Xenochristou, Joao Pedro Araujo, Jeffrey Gu, C. Karen Liu, Serena Yeung
(Project Page 🌐 | Paper 📄 | Data 📀)
- Clone this repository.
git clone [email protected]:wangkua1/nemo-cvpr2023.git
- Create a conda environment using the provided environment file.
conda env create -f environment.yml
Then, activate the conda environment using
conda activate nemo
- Pip install the missing packages using the provided requirements file.
pip install -r requirements.txt
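After installation, a quick sanity check can confirm that key dependencies resolve in the active environment. This is only a sketch; the package names below are illustrative assumptions, not an exhaustive copy of this repo's requirements.

```python
import importlib.util

# Report which of the listed packages are importable in the active
# environment. The package names here are illustrative assumptions.
def check_packages(names):
    return {name: importlib.util.find_spec(name) is not None for name in names}

status = check_packages(["torch", "numpy"])
print(status)
```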
- Create a directory software.
- Download the required components:
- "V02_05" -- required by human_body_prior. Follow the original instructions on the VPoser GitHub page.
- "spin_data" -- Follow the original instructions at the SPIN github page.
- "smpl" -- Follow the original instructions at their website.
Alternatively, download them from this link (~0.5GB). Note that we provide these only for the purpose of reproducing our work; please respect the original instructions, licenses, and copyrights.
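A short sketch can report which components are still missing. The component names come from the list above; treating each as a subdirectory of software/ is an assumption about the expected layout, and the internal contents of each component follow the respective projects' own instructions.

```python
from pathlib import Path

# Component directory names taken from the list above; placing each as a
# subdirectory of software/ is an assumption about the expected layout.
EXPECTED = ["V02_05", "spin_data", "smpl"]

def missing_components(software_dir="software"):
    root = Path(software_dir)
    return [name for name in EXPECTED if not (root / name).exists()]
```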
- Download the dataset from this Google Drive folder (~1GB). You should organize your files into the following structure:
/nemo-cvpr2023
-- /data
| -- /videos
| | -- <ACTION>.<INDEX>.mp4
| | ......
| -- /exps
| | -- /mymocap_<ACTION>
| | | -- /<ACTION>.<INDEX>
| | | -- /<ACTION>.<INDEX>.mp4_gt
| | | -- /<ACTION>.<INDEX>.mp4_openpose
| | ......
| -- /mocap
| | -- <ACTION>.<INDEX>.pkl
| | ......
| -- opt_cam_IMG_6287.pt
| -- opt_cam_IMG_6289.pt
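The layout above can be cross-checked with a small sketch: every <ACTION>.<INDEX>.mp4 under data/videos should have a matching pickle under data/mocap. This assumes the naming scheme shown in the tree; the default root path is an assumption as well.

```python
from pathlib import Path

# Cross-check the layout above: every <ACTION>.<INDEX>.mp4 under
# data/videos should have a matching data/mocap/<ACTION>.<INDEX>.pkl.
# A sketch, assuming the naming scheme shown in the tree.
def unmatched_videos(data_dir="data"):
    data = Path(data_dir)
    videos = sorted(data.glob("videos/*.mp4"))
    return [v.name for v in videos
            if not (data / "mocap" / f"{v.stem}.pkl").exists()]
```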
- Convert the mp4 videos into frames (takes <3min).
python -m scripts.video_to_frames
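scripts.video_to_frames is the repo's own converter. For intuition, a rough equivalent can be sketched by building one ffmpeg command per video; the use of ffmpeg, the output directory, and the frame naming pattern here are all assumptions, not what the repo's script actually does.

```python
from pathlib import Path

# Sketch: build a per-video ffmpeg command that dumps frames next to the
# video. The output pattern and the choice of ffmpeg are assumptions, not
# the actual behavior of scripts.video_to_frames.
def frame_commands(video_dir="data/videos"):
    cmds = []
    for video in sorted(Path(video_dir).glob("*.mp4")):
        out_dir = video.with_suffix("")  # e.g. data/videos/<ACTION>.<INDEX>
        cmds.append(["ffmpeg", "-i", str(video), f"{out_dir}/%06d.png"])
    return cmds
```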
An example inference script for running NeMo on the "Baseball Pitch" motion from the NeMo-MoCap dataset is provided in run_scripts/examples.
You can run it locally using the following command:
bash run_scripts_examples/nemomocap-example.sh 0
or launch it as a SLURM job with sbatch using:
bash run_scripts_examples/nemomocap-example.sh 1
If you wish to run NeMo on your own video dataset, refer to Custom Video README.
NeMo is built on many other great works, including VPoser, SPIN, SMPL, HMR, VIBE, DAPA, and GLAMR.
If you find this work useful, please consider citing:
@inproceedings{wang2022nemo,
title={NeMo: 3D Neural Motion Fields from Multiple Video Instances of the Same Action},
author={Wang, Kuan-Chieh and Weng, Zhenzhen and Xenochristou, Maria and Araujo, Joao Pedro and Gu, Jeffrey and Liu, C Karen and Yeung, Serena},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2023},
arxiv={2212.13660}
}