
Difficulty in extracting an accurate mesh in a case study using SuGaR #194

Open
jjldo21 opened this issue May 22, 2024 · 1 comment
@jjldo21

jjldo21 commented May 22, 2024

Hi @Anttwo,

First of all, thank you for the great work done with your research, it is very inspiring!

I am trying to reconstruct a room as accurately as possible using SuGaR, but the results I obtain are not as good as I would like. In particular, the specifications of the experiments carried out are the following:

  • My own dataset, which can be downloaded from here, consists of 245 images at 3785 × 2122 resolution. These images were extracted from a video in which a careful sweep of the entire room was made.
  • An NVIDIA A100-SXM4-40GB GPU is used for training.
  • Camera poses are estimated with COLMAP.
  • A vanilla Gaussian Splatting model is optimized for 7000 iterations by running the gaussian_splatting/train.py script, as indicated in the Quick Start section of the README. Note that in this step I pass the --resolution 1 argument to keep the original resolution of the images.
  • Next, as also indicated in the Quick Start section of the README file, the train.py script in the root directory is run to optimize the SuGaR model with the parameters -r "sdf" and --square_size 10 (default).
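For clarity, the two optimization steps above were run roughly as follows. This is only a sketch, not the exact command history: `<dataset>` and `<checkpoint>` are placeholders, and the flag names are taken from the 3DGS and SuGaR READMEs as I understand them, so they may differ slightly between versions.

```shell
# Step 1 (assumed flags): optimize a vanilla 3DGS model for 7000 iterations
# at the original image resolution (--resolution 1).
python gaussian_splatting/train.py -s <dataset> --iterations 7000 --resolution 1

# Step 2 (assumed flags): optimize SuGaR from the 3DGS checkpoint, using the
# SDF regularization and the default square_size of 10.
python train.py -s <dataset> -c <checkpoint> -r "sdf" --square_size 10
```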

Following these steps, I obtain both the trained Gaussian Splatting model and the mesh of the reconstructed scene. The results can be observed in the following images:

Gaussian Splatting point_cloud.ply (WebGL 3D Gaussian Splat Viewer):

3D_GaussianSplat_viewer

Reconstructed mesh by SuGaR (Real-Time Viewer):

SuGaR_viewer

Reconstructed mesh by SuGaR (Blender):

blender

As can be seen in the three previous images, the results are promising but not as clean as I would like. In particular, in the point cloud of the Gaussian Splatting output I already observe many floating Gaussians in the middle of the room, generating a kind of visible fog that degrades the result. Furthermore, the two images of the reconstructed mesh also reveal clear errors (holes and ellipsoidal bumps) in the final result.

Given this case study, and with your expertise in this regard, could you suggest how to improve the final reconstruction (in particular, which parameters you consider important to modify) so that the resulting mesh is much closer to reality?

Thank you for your time and efforts!

@interqhq

I got the same result. When I import the extracted mesh into Blender, there are many bumps and holes...
I checked the point_cloud.ply generated by 3DGS, and it looks good.
Does anyone else have the same problem?
