
TP-NoDe

Code for ICCVW-2023 accepted paper TP-NoDe: Topology-aware Progressive Noising and Denoising of Point Clouds

Akash Kumbar, Tejas Anvekar, Tulasi Amitha Vikrama, Ramesh Ashok Tabib, Uma Mudenagudi

[Paper]

Abstract

In this paper, we propose TP-NoDe, a novel Topology-aware Progressive Noising and Denoising technique for 3D point cloud upsampling. TP-NoDe revisits the traditional method of upsampling of the point cloud by introducing a novel perspective of adding local topological noise by incorporating a novel algorithm, Density-Aware k-Nearest Neighbour (DA-kNN), followed by denoising to map noisy perturbations to the topology of the point cloud. Unlike previous methods, we progressively upsample the point cloud, starting at a 2× upsampling ratio and advancing to a desired ratio. TP-NoDe generates intermediate upsampling resolutions for free, obviating the need to train different models for varying upsampling ratios. TP-NoDe mitigates the need for task-specific training of upsampling networks for a specific upsampling ratio by reusing a point cloud denoising framework. We demonstrate the supremacy of our method TP-NoDe on the PU-GAN dataset and compare it with state-of-the-art upsampling methods.
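The progressive scheme above can be illustrated with a minimal sketch. Here the noising and denoising steps are simplified placeholders (Gaussian perturbation of a duplicated cloud, identity denoiser), not the paper's DA-kNN or the pre-trained score denoiser; only the 2×-per-step control flow reflects the description:

```python
import numpy as np

def add_topological_noise(points, sigma=0.01, rng=None):
    # Placeholder noising step: duplicate the cloud and perturb the
    # copy with Gaussian noise, doubling the point count.
    rng = rng or np.random.default_rng(0)
    noisy = points + rng.normal(scale=sigma, size=points.shape)
    return np.concatenate([points, noisy], axis=0)

def denoise(points):
    # Placeholder for the pre-trained score-based denoiser.
    return points

def progressive_upsample(points, target_ratio):
    # Repeat 2x noise-then-denoise steps until the target ratio is
    # reached (target_ratio assumed to be a power of two).
    ratio = 1
    while ratio < target_ratio:
        points = denoise(add_topological_noise(points))
        ratio *= 2
    return points

pc = np.random.default_rng(1).random((1024, 3))
up = progressive_upsample(pc, 4)
print(up.shape)  # (4096, 3)
```

Because each round only doubles the cloud, every intermediate ratio (2×, 4×, …) is produced along the way, which is what makes the intermediate resolutions "free".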

Installation

  • Install the following packages:

```
python==3.8.16
torch==1.13.1
CUDA==11.6
numpy==1.21.2
open3d==0.17.0
einops==0.3.2
scikit-learn==1.0.1
tqdm==4.62.3
h5py==3.6.0
torch-cluster
```

Install torch-cluster with `pip install --verbose --no-cache-dir torch-cluster` (see the [PyTorch Geometric installation notes](https://pytorch-geometric.readthedocs.io/en/1.3.2/notes/installation.html)).


For denoising we use score-based denoising; install its dependencies to run this code (please follow score-denoise).

  • Compile the evaluation_code for metric calculation (optional)

To calculate the CD, HD, and P2F metrics, first install the CGAL library (please follow the PU-GAN repo) and set up the virtual environment of PU-GCN (please follow the PU-GCN repo). Then compile the evaluation_code folder:

```bash
cd evaluation_code
bash compile.sh
```
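For orientation, CD and HD reduce to simple nearest-neighbour statistics over two point sets. This NumPy sketch shows the idea only; it is not the compiled evaluation_code, and the official metrics (including P2F, which needs the mesh surface) should be computed with the CGAL-based tools above:

```python
import numpy as np

def chamfer_distance(p, q):
    # Symmetric Chamfer distance: mean nearest-neighbour distance
    # from p to q plus mean nearest-neighbour distance from q to p.
    d = np.linalg.norm(p[:, None] - q[None, :], axis=-1)  # pairwise (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def hausdorff_distance(p, q):
    # Symmetric Hausdorff distance: worst-case nearest-neighbour distance.
    d = np.linalg.norm(p[:, None] - q[None, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = a + np.array([0.0, 1.0, 0.0])   # same set shifted by 1 along y
print(chamfer_distance(a, a), hausdorff_distance(a, b))  # 0.0 1.0
```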

Data Preparation

The code takes mesh files as input and randomly samples them to the number of points specified in the code, so no extra pre-processing is required.

For benchmarking we use the PU-GAN dataset (train set, test meshes).

To run the code as is, prepare a `data` folder like this:

```
data
└── test
```
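Randomly sampling a mesh down to a fixed point count is typically done by area-weighted face selection plus barycentric sampling. The sketch below is a standalone NumPy illustration of that idea under assumed inputs, not the repo's loader (which works on the mesh files directly, e.g. via open3d):

```python
import numpy as np

def sample_mesh(vertices, faces, n_points, rng=None):
    # Uniformly sample points on a triangle mesh: pick faces with
    # probability proportional to their area, then sample a uniform
    # barycentric point inside each chosen triangle.
    rng = rng or np.random.default_rng(0)
    tris = vertices[faces]                      # (F, 3, 3)
    a, b, c = tris[:, 0], tris[:, 1], tris[:, 2]
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    u, v = rng.random((2, n_points))
    flip = u + v > 1                            # fold into the triangle
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    t = tris[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) \
                   + v[:, None] * (t[:, 2] - t[:, 0])

# Two triangles forming a unit square in the z = 0 plane.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
pts = sample_mesh(verts, faces, 2048)
print(pts.shape)  # (2048, 3)
```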

Running the code

To run the code:

```bash
# The noise hyper-parameters can be changed accordingly (refer to the bash scripts)
python upSampleWithNoise.py --noising global --upsampling_factor 4 --patch_size 64 --seed_k 3 --noise_type Laplacian --save_path data/Final/Global/Laplacian/PS64/
```
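The `--noise_type` flag selects the distribution of the perturbations applied before denoising. As an illustration only (the scale value and exact usage here are assumptions, not values taken from the repo), Gaussian and Laplacian perturbations differ in tail behaviour:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((1024, 3))          # stand-in point cloud

# Both perturbations are zero-mean; at the same scale the Laplacian
# has heavier tails, so it produces more outlying perturbed points.
gaussian = points + rng.normal(scale=0.01, size=points.shape)
laplacian = points + rng.laplace(scale=0.01, size=points.shape)
print(gaussian.shape, laplacian.shape)  # (1024, 3) (1024, 3)
```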

Re-create Results and Ablations

We have broken this down into two shell scripts:

```bash
sh run_all.sh
sh run_allExps.sh
```

Acknowledgment

Our methodology depends wholly on the score-based denoising network, and we use its pre-trained weights:

Score-based denoising of Point Clouds.

BibTeX

Please cite our paper if it is helpful to your research:

```bibtex
@inproceedings{kumbar2023tp,
    title={TP-NoDe: Topology-Aware Progressive Noising and Denoising of Point Clouds Towards Upsampling},
    author={Kumbar, Akash and Anvekar, Tejas and Vikrama, Tulasi Amitha and Tabib, Ramesh Ashok and Mudenagudi, Uma},
    booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
    pages={2272--2282},
    year={2023}
}
```
