[TPAMI 2022] BiFuse++: Self-supervised and Efficient Bi-projection Fusion for 360 Depth Estimation

This is the official implementation of our TPAMI paper "BiFuse++: Self-supervised and Efficient Bi-projection Fusion for 360 Depth Estimation".

Our implementation is based on PyTorch Lightning. The following features are included (a minimal trainer sketch follows the list):

  1. Multi-GPU training (DDP)
  2. Multi-node training (DDP)
  3. Supervised depth estimation
  4. Self-supervised depth estimation
  5. Logging with both TensorBoard and W&B
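
For reference, the sketch below shows how these features are typically wired together in PyTorch Lightning. DummyDepthModule, the device counts, and the logger settings are illustrative assumptions, not this repository's actual training entry point.

import torch
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger

class DummyDepthModule(pl.LightningModule):
    # Stand-in for the BiFuse++ LightningModule defined in this repo.
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def training_step(self, batch, batch_idx):
        rgb, depth = batch
        loss = torch.nn.functional.l1_loss(self.net(rgb), depth)
        self.log("train/loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-4)

trainer = pl.Trainer(
    accelerator="gpu",
    devices=4,       # GPUs per node (multi-GPU DDP)
    num_nodes=2,     # multi-node DDP
    strategy="ddp",
    logger=[TensorBoardLogger("logs/"),        # TensorBoard logging
            WandbLogger(project="bifusev2")],  # W&B logging
)
# trainer.fit(DummyDepthModule(), train_dataloaders=...)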

Dependency

Install the required packages with the following commands:

conda create -n bifusev2 python=3.9
conda activate bifusev2
pip install pip --upgrade
pip install -r requirements.txt
pip install "git+https://github.com/facebookresearch/pytorch3d.git"

The installation of pytorch3d will take some time, since it is compiled from source.
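
Once installation finishes, a short sanity check such as the following (a suggestion, not a script shipped with this repository) confirms that the core dependencies import correctly:

# sanity_check.py -- optional, not part of this repository.
# Confirms the key dependencies import and CUDA is visible.
import torch
import pytorch_lightning
import pytorch3d

print("torch:", torch.__version__)
print("pytorch_lightning:", pytorch_lightning.__version__)
print("pytorch3d:", pytorch3d.__version__)
print("CUDA available:", torch.cuda.is_available())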

Training

We provide training/testing code for both the supervised and self-supervised scenarios.

For the supervised scenario, our model is trained on Matterport3D. For the self-supervised scenario, we adopt PanoSUNCG for training.

  1. Although we do not provide the Matterport3D dataset, we provide a sample dataset that demonstrates the format expected by our SupervisedDataset.py. You can download the sample from here.
  2. For PanoSUNCG, please contact [email protected] for download links.

To train our approach, please refer to Experiments for more details.

Inference

You can download our pretrained model from here.

To run inference with the supervised model trained on Matterport3D, use the following command:

python run_inference.py --mode supervised --ckpt pretrain/supervised_pretrain.pkl  --img data/mp3d.jpg

To run inference with the self-supervised model trained on PanoSUNCG, use the following command:

python run_inference.py --mode selfsupervised --ckpt pretrain/selfsupervised_pretrain.pkl  --img data/panosuncg.jpg

Note that "--mode" must be specified, since the inference procedures for the supervised and self-supervised scenarios differ.
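
To run inference over a whole folder of panoramas, a small wrapper like the sketch below can loop the documented CLI. The folder path and glob pattern are assumptions; only the run_inference.py flags shown above are taken from this repository.

# batch_inference.py -- hypothetical wrapper, not part of this repository.
# Loops the documented run_inference.py CLI over every panorama in a folder.
import subprocess
from pathlib import Path

for img in sorted(Path("data").glob("*.jpg")):
    subprocess.run(
        ["python", "run_inference.py",
         "--mode", "supervised",
         "--ckpt", "pretrain/supervised_pretrain.pkl",
         "--img", str(img)],
        check=True,
    )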

Credits

Our BasePhotometric.py is modified from link.

License

This work is licensed under the MIT License. See LICENSE for details.

If you find our code/models useful, please consider citing our paper:

@article{9874253,
  author={Wang, Fu-En and Yeh, Yu-Hsuan and Tsai, Yi-Hsuan and Chiu, Wei-Chen and Sun, Min},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={BiFuse++: Self-Supervised and Efficient Bi-Projection Fusion for 360° Depth Estimation}, 
  year={2023},
  volume={45},
  number={5},
  pages={5448-5460},
  doi={10.1109/TPAMI.2022.3203516}
}
