
# DFNet: Enhance Absolute Pose Regression with Direct Feature Matching

Shuai Chen, Xinghui Li, Zirui Wang, and Victor Adrian Prisacariu (ECCV 2022)

Project Page | Paper

*(Figure: DFNet overview.)*

## Setup

### Installing Requirements

We tested our code with CUDA 11.3+, PyTorch 1.11.0+, and Python 3.7+ using Docker.

The remaining dependencies are listed in `requirement.txt`.
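For example, inside the container:

```sh
pip install -r requirement.txt
```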

### Data Preparation

- **7-Scenes**

We use a data preparation procedure similar to MapNet's. You can download the 7-Scenes dataset into the `data/deepslam_data` directory.

Alternatively, you can create a symlink:

```sh
cd data/deepslam_data && ln -s 7SCENES_DIR 7Scenes
```

Note that we additionally computed pose-averaging statistics (`pose_avg_stats.txt`) and manually tuned `world_setup.json` in `data/7Scenes` to align the 7-Scenes coordinate system with NeRF's coordinate system. You can generate your own re-alignment to a new `pose_avg_stats.txt` using the `--save_pose_avg_stats` flag.
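If you are curious what such a re-alignment computes, NeRF-style pipelines typically average the camera-to-world matrices of the training poses. The sketch below is illustrative only: the exact contents of `pose_avg_stats.txt` are defined by this repo, and `average_pose` is a hypothetical helper following the `poses_avg` convention from the original NeRF code.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def average_pose(c2ws):
    """c2ws: (N, 3, 4) camera-to-world matrices -> (3, 4) average pose."""
    center = c2ws[:, :, 3].mean(axis=0)       # mean camera centre
    z = normalize(c2ws[:, :, 2].sum(axis=0))  # mean viewing direction
    up = c2ws[:, :, 1].sum(axis=0)            # mean up vector
    x = normalize(np.cross(up, z))            # orthonormal basis from up and z
    y = np.cross(z, x)
    return np.stack([x, y, z, center], axis=1)
```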

- **Cambridge Landmarks**

You can download the Cambridge Landmarks dataset using the script here. Please also put `pose_avg_stats.txt` and `world_setup.json` into `data/Cambridge/CAMBRIDGE_SCENES`, as provided in the source code.
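For reference, assuming the layout described above, the `data/` directory should look roughly like this (scene names are placeholders, e.g. `ShopFacade`):

```
data/
├── deepslam_data/
│   └── 7Scenes/              # downloaded 7-Scenes data (or symlink)
├── 7Scenes/
│   ├── pose_avg_stats.txt
│   └── world_setup.json
└── Cambridge/
    └── CAMBRIDGE_SCENES/
        ├── pose_avg_stats.txt
        └── world_setup.json
```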

## Training

Our method relies on a pretrained Histogram-assisted NeRF model and a DFNet model, as stated in the paper. We provide example config files in the repo. The following commands train each model:

- NeRF model

  ```sh
  python run_nerf.py --config config_nerfh.txt
  ```

- DFNet model

  ```sh
  python run_feature.py --config config_dfnet.txt
  ```

- Direct Feature Matching (DFNetdm)

  ```sh
  python train.py --config config_dfnetdm.txt
  ```

## Evaluation

We provide methods to evaluate our models.

- To evaluate the NeRF model in PSNR, simply add the `--render_test` argument:

  ```sh
  python run_nerf.py --config config_nerfh.txt --render_test
  ```

- To evaluate the APR performance of the DFNet model, add `--eval --testskip=1 --pretrain_model_path=../logs/PATH_TO_CHECKPOINT`. For example:

  ```sh
  python run_feature.py --config config_dfnet.txt --eval --testskip=1 --pretrain_model_path=../logs/heads/dfnet/checkpoint.pt
  ```

- The same applies to evaluating the APR performance of the DFNetdm model:

  ```sh
  python train.py --config config_dfnetdm.txt --eval --testskip=1 --pretrain_model_path=../logs/heads/dfnetdm/checkpoint.pt
  ```

## Pre-trained Models

We provide the 7-Scenes and Cambridge pre-trained models here. Some models achieve slightly better results than reported in our paper. We suggest putting the models in a new directory (`./logs/`) at the project root.

Note that we additionally provide models for Cambridge's Great Court scene, although we did not include its results in the main paper, for fair comparison with other works.

Due to limited resources, our pre-trained models were trained on a 3080 Ti or 1080 Ti. We noticed that a model's performance may vary slightly (better or worse) when running inference on a different GPU type, even with the exact same weights. Therefore, all experiments in the paper are reported using the same GPUs the models were trained on.
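If you want to reduce run-to-run variance on a single machine, a minimal sketch of the standard PyTorch determinism settings is shown below. Note that these settings constrain nondeterminism on one GPU model but do not guarantee identical results across different GPU architectures.

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 0):
    # Fix all RNG seeds used by Python, NumPy, and PyTorch.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Prefer deterministic cuDNN kernels (may be slower).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

seed_everything(0)
```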

## Acknowledgement

We thank Michael Hobley, Theo Costain, Lixiong Chen, and Kejie Li for their generous discussion on this work.

Most of our code is built upon Direct-PoseNet. Part of our Histogram-assisted NeRF implementation is based on the reproduced NeRF-W code here. We thank @kwea123 for this excellent work!

## Citation

Please cite our paper and star this repo if you find our work helpful. Thanks!

```bibtex
@inproceedings{chen2022dfnet,
  title={DFNet: Enhance Absolute Pose Regression with Direct Feature Matching},
  author={Chen, Shuai and Li, Xinghui and Wang, Zirui and Prisacariu, Victor},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2022}
}
```

This code builds on previous camera relocalization pipelines, namely Direct-PoseNet. Please consider citing:

```bibtex
@inproceedings{chen2021direct,
  title={Direct-PoseNet: Absolute pose regression with photometric consistency},
  author={Chen, Shuai and Wang, Zirui and Prisacariu, Victor},
  booktitle={2021 International Conference on 3D Vision (3DV)},
  pages={1175--1185},
  year={2021},
  organization={IEEE}
}
```