
DGD-NeRF: Depth-Guided Neural Radiance Field for Dynamic Scenes

DGD-NeRF is a method for synthesizing novel views of dynamic scenes with complex non-rigid geometries at an arbitrary point in time. We optimize an underlying deformable volumetric function from a sparse set of input monocular views, without the need for ground-truth geometry or multi-view images.

This project extends D-NeRF to improve the modelling of dynamic scenes. We thank the authors of NeRF-pytorch, Dense Depth Priors for NeRF, and Non-Rigid NeRF, from whom we borrow code.
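
Conceptually, the model follows the D-NeRF recipe: a deformation network warps a point at time t into a canonical volume, and a canonical NeRF is queried there for color and density. The snippet below is only a minimal sketch of that idea; module names, layer sizes, and the omission of positional encoding and view directions are simplifications, not the repository's actual implementation.

# Minimal sketch of a D-NeRF-style deformable radiance field (illustrative only;
# names and layer sizes are assumptions, not the repository's code).
import torch
import torch.nn as nn

class DeformationNet(nn.Module):
    """Maps a 3D point x and a time t (shape (..., 1)) to an offset into canonical space."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),          # delta_x
        )

    def forward(self, x, t):
        return self.mlp(torch.cat([x, t], dim=-1))

class CanonicalNeRF(nn.Module):
    """Predicts color and density at a point in the canonical (t = 0) volume."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),          # RGB + sigma
        )

    def forward(self, x_canonical):
        out = self.mlp(x_canonical)
        rgb, sigma = torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])
        return rgb, sigma

def query(deform, canonical, x, t):
    # Warp the point observed at time t back into the canonical volume, then query it.
    x_canonical = x + deform(x, t)
    return canonical(x_canonical)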


Installation

git clone https://github.com/philippwulff/DGD-NeRF.git
cd DGD-NeRF
conda create -n dgdnerf python=3.7
conda activate dgdnerf
pip install -r requirements.txt
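
Optionally, you can check that PyTorch can see your GPU before moving on, since training and rendering assume a CUDA device (see CUDA_VISIBLE_DEVICES below). This is a generic sanity check, not part of the repository's scripts:

# Quick sanity check that the environment can use the GPU.
import torch
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))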

If you want to explore the models directly or use our training data, you can download the pre-trained models and the data:

Download Pre-trained Weights. You can download the pre-trained models as logs.zip from here. Unzip the downloaded archive into the project root directory so you can test the models later. The directory structure should look like this:

├── logs 
│   ├── human
│   ├── bottle 
│   ├── gobblet 
│   ├── ...

Dataset

DeepDeform. This is an RGB-D dataset of dynamic scenes with fixed camera poses. You can request access on the project's GitHub page.

Own Data. Download our own dataset as data.zip from here. With the pre-trained weights and the dataset in place, the directory structure looks like this:

├── data 
│   ├── human
│   ├── bottle 
│   ├── gobblet 
│   ├── ...
├── logs 
│   ├── human
│   ├── bottle 
│   ├── gobblet 
│   ├── ...

Generate Own Scenes

Own scenes can easily be generated and integrated. We used an iPad with a LiDAR sensor (app: Record3D; export videos as EXR + RGB). Convert the recording into the format our model expects by running load_owndataset.py (you will need to add a scene configuration in this file; a sketch of such a configuration is shown below).
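
The actual configuration keys are defined in load_owndataset.py; the entry below only illustrates the kind of information such a configuration typically carries. All field names (rgb_dir, exr_dir, out_dir, depth_scale, train_skip) are hypothetical and not the script's real schema.

# Hypothetical scene configuration for a Record3D export (field names are
# illustrative; check load_owndataset.py for the keys it actually expects).
OWN_SCENES = {
    "human": {
        "rgb_dir": "raw/human/rgb",     # RGB frames exported by Record3D
        "exr_dir": "raw/human/depth",   # per-frame EXR depth maps (metric depth)
        "out_dir": "data/human",        # where the converted scene is written
        "depth_scale": 1.0,             # scale factor to convert depth to meters
        "train_skip": 1,                # use every frame for training
    },
}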

How to Use It

If you have downloaded our data and the pre-trained weights, you can test our models without training them. Otherwise, you can train a model from scratch on our data or your own.

Demo

You can use these Jupyter notebooks to explore the model.

render.ipynb: Synthesize novel views at an arbitrary point in time (requires a trained model).
reconstruct.ipynb: Reconstruct the mesh at an arbitrary point in time (requires a trained model; see the sketch after this list).
eda_owndataset_train.ipynb: See the camera trajectory of the training frames.
eda_virtual_camera.ipynb: See the camera poses of novel views.
eda_ray_sampling.ipynb: Visualize the sampling along camera rays (requires training logs).
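
For context, mesh reconstruction from a trained radiance field usually works by evaluating the (deformed) density field on a dense 3D grid at the chosen time and running marching cubes on it. The sketch below shows that general recipe, not the notebook's actual code; query_density is a hypothetical stand-in that wraps the trained model, and the grid bounds and iso-level are assumptions.

# Sketch of NeRF-style mesh extraction: sample density on a grid, then run
# marching cubes. query_density(points, t) is a stand-in for the trained model.
import numpy as np
from skimage import measure

def extract_mesh(query_density, t=0.0, resolution=128, bound=1.0, threshold=25.0):
    xs = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)   # (R, R, R, 3)
    sigma = query_density(grid.reshape(-1, 3), t).reshape(resolution, resolution, resolution)
    # Marching cubes turns the density field into a triangle mesh at the iso-surface.
    verts, faces, normals, _ = measure.marching_cubes(sigma, level=threshold)
    # Map vertex indices back to world coordinates.
    verts = verts / (resolution - 1) * 2 * bound - bound
    return verts, faces, normals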

The following instructions use the human scene as an example; it can be replaced by any of the other scenes.

Train

First download the dataset. Then,

conda activate dgdnerf
export PYTHONPATH='path/to/DGD-NeRF'
export CUDA_VISIBLE_DEVICES=0
python run_dgdnerf.py --config configs/human.txt

This command will run the human experiment with the arguments specified in configs/human.txt. Our extensions can be enabled or disabled modularly in human.txt.

Test

First train the model or download the pre-trained weights and the dataset. Then,

python run_dgdnerf.py --config configs/human.txt --render_only --render_test

This command will render the test-set images of the human experiment. When finished, quantitative results (metrics.txt) and qualitative results (RGB and depth images/videos) are saved to ./logs/human/renderonly_test_400000.
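
The numbers in metrics.txt are standard image-quality metrics computed between rendered and ground-truth test frames. As a point of reference, the sketch below shows how PSNR and SSIM are typically computed with scikit-image; the repository's own implementation and metric set may differ.

# Sketch of typical novel-view-synthesis metrics (PSNR/SSIM) between a rendered
# frame and the ground-truth test frame. Requires a recent scikit-image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(rendered, ground_truth):
    """Both images as float arrays in [0, 1] with shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(ground_truth, rendered, data_range=1.0)
    ssim = structural_similarity(ground_truth, rendered, channel_axis=-1, data_range=1.0)
    return psnr, ssim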

Render novel view videos

To render novel view images, you can use the notebook render.ipynb. To render novel view videos, run

python run_dgdnerf.py --config configs/human.txt --render_only --render_pose_type spherical

Select the novel view camera trajectory by setting render_pose_type. Options depend on the scene.
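
For reference, a spherical trajectory of this kind is usually built in the style of NeRF-pytorch's pose_spherical helper: camera-to-world matrices that orbit the scene at a fixed radius and elevation while looking at the origin. The sketch below illustrates that construction under those assumptions; the repository's own trajectory code and parameters may differ.

# Sketch of a spherical render trajectory (camera-to-world poses orbiting the
# scene), in the style of NeRF-pytorch's pose_spherical. Angles in degrees.
import numpy as np

def trans_t(t):
    return np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, t], [0, 0, 0, 1]], dtype=np.float32)

def rot_phi(phi):
    return np.array([[1, 0, 0, 0],
                     [0, np.cos(phi), -np.sin(phi), 0],
                     [0, np.sin(phi), np.cos(phi), 0],
                     [0, 0, 0, 1]], dtype=np.float32)

def rot_theta(th):
    return np.array([[np.cos(th), 0, -np.sin(th), 0],
                     [0, 1, 0, 0],
                     [np.sin(th), 0, np.cos(th), 0],
                     [0, 0, 0, 1]], dtype=np.float32)

def pose_spherical(theta_deg, phi_deg, radius):
    c2w = trans_t(radius)
    c2w = rot_phi(np.deg2rad(phi_deg)) @ c2w
    c2w = rot_theta(np.deg2rad(theta_deg)) @ c2w
    # Flip axes to match the OpenGL-style camera convention used by NeRF.
    c2w = np.array([[-1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=np.float32) @ c2w
    return c2w

# 40 poses orbiting the scene at a fixed elevation of -30 degrees and radius 4.
render_poses = np.stack([pose_spherical(a, -30.0, 4.0)
                         for a in np.linspace(-180, 180, 40, endpoint=False)])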