Pseudo-Generalized Dynamic View Synthesis

ICLR 2024

Pseudo-Generalized Dynamic View Synthesis from a Video, ICLR 2024.
Xiaoming Zhao, Alex Colburn, Fangchang Ma, Miguel Ángel Bautista, Joshua M Susskind, and Alexander G. Schwing.

Table of Contents

  1. Environment Setup
  2. Try PGDVS on Videos in the Wild
  3. Benchmarking
  4. Citation
  5. License
  6. Acknowledgements

Environment Setup

This code has been tested on Ubuntu 20.04 with CUDA 11.8 on an NVIDIA A100-SXM4-80GB GPU (driver 470.82.01).

We recommend using conda for virtual environment management and libmamba for faster dependency resolution.

# setup libmamba
conda install -n base conda-libmamba-solver -y
conda config --set solver libmamba

# create virtual environment
conda env create -f envs/pgdvs.yaml

conda activate pgdvs
conda install pytorch3d=0.7.4 -c pytorch3d -y
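
To verify the environment (a minimal sanity check, not part of the official setup), you can confirm that PyTorch sees the GPU and that PyTorch3D imports correctly:

conda activate pgdvs
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import pytorch3d; print(pytorch3d.__version__)"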

[optional] Run the following to install JAX if you want to:

  1. try TAPIR;
  2. evaluate with metrics computation from DyCheck.

conda activate pgdvs
pip install -r envs/requirements_jax.txt --verbose

To check that JAX is installed correctly, run the following. NOTE: the leading import torch is important because it ensures that JAX finds the cuDNN installed by conda.

conda activate pgdvs
python -c "import torch; from jax import random; key = random.PRNGKey(0); x = random.normal(key, (10,)); print(x)"

Try PGDVS on Videos in the Wild

Download Checkpoints

# this environment variable is used for demonstration
cd /path/to/this/repo
export PGDVS_ROOT=$PWD
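
Note that PGDVS_ROOT only lives in the current shell. One option (purely a convenience suggestion, not required by the scripts) is to persist it in your shell profile:

# optional: persist the variable for future shells (adjust the path to your clone)
echo "export PGDVS_ROOT=/path/to/this/repo" >> ~/.bashrc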

Since we rely on third parties' pretrained models, we provide two ways to download them:

  1. directly from the official repositories;
  2. from our copy, which reproduces the results in the paper in case the official checkpoints are modified in the future.

FLAG_ORIGINAL=1  # set to 0 if you want to download from our copy
bash ${PGDVS_ROOT}/scripts/download_ckpts.sh ${PGDVS_ROOT}/ckpts ${FLAG_ORIGINAL}
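
After the script finishes, you can sanity-check the download (a minimal sketch; the exact file names depend on the third-party checkpoints) by listing the checkpoint directory and its size:

ls ${PGDVS_ROOT}/ckpts
du -sh ${PGDVS_ROOT}/ckpts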

Example: DAVIS

We use DAVIS as an example to illustrate how to render novel views from a monocular video in the wild. Please see IN_THE_WILD.md for details.

Benchmarking

Please see BENCHMARK_NVIDIA.md and BENCHMARK_iPhone.md for details on reproducing the paper's results on the NVIDIA Dynamic Scenes dataset and DyCheck's iPhone dataset.

Citation

Xiaoming Zhao, Alex Colburn, Fangchang Ma, Miguel Ángel Bautista, Joshua M Susskind, and Alexander G. Schwing. Pseudo-Generalized Dynamic View Synthesis from a Video. ICLR 2024.

@inproceedings{Zhao2024PGDVS,
  title={{Pseudo-Generalized Dynamic View Synthesis from a Video}},
  author={Xiaoming Zhao and Alex Colburn and Fangchang Ma and Miguel Angel Bautista and Joshua M. Susskind and Alexander G. Schwing},
  booktitle={ICLR},
  year={2024},
}

License

This sample code is released under the LICENSE terms.

Acknowledgements

Our project would not be possible without the following: