Hi @Anttwo,

First of all, thank you for the great work done in your research; it is very inspiring!
I am trying to reconstruct a room as accurately as possible using SuGaR, but the results I obtain fall short of what I would like to achieve. The specifications of my experiments are the following:
My own dataset, which can be downloaded from here, consists of 245 images at 3785 x 2122 resolution. These images are extracted from a video in which I made a careful sweep of the entire room.
An NVIDIA A100-SXM4-40GB GPU is used for training.
Camera poses are estimated with COLMAP.
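For reference, the pose estimation follows the standard COLMAP pipeline. A minimal sketch of what I run (the paths are placeholders for my dataset; I assume here the convert.py helper shipped with the 3DGS codebase, which wraps the COLMAP CLI):

```shell
# Sketch of the COLMAP step. /data/room_scene is a placeholder path;
# images are assumed to live in /data/room_scene/input.
python gaussian_splatting/convert.py -s /data/room_scene

# Roughly equivalent manual pipeline with the standard COLMAP CLI:
# colmap feature_extractor  --database_path /data/room_scene/distorted/database.db \
#                           --image_path /data/room_scene/input
# colmap exhaustive_matcher --database_path /data/room_scene/distorted/database.db
# colmap mapper             --database_path /data/room_scene/distorted/database.db \
#                           --image_path /data/room_scene/input \
#                           --output_path /data/room_scene/distorted/sparse
```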
A vanilla Gaussian Splatting model is optimized for 7000 iterations by running the gaussian_splatting/train.py script, as indicated in the Quick Start section of the README file. Please note that in this process I pass the --resolution 1 argument to keep the original resolution of the images.
Next, as also indicated in the Quick Start section of the README file, the train.py script in the root directory is run to optimize the SuGaR model with the parameters -r "sdf" and --square_size 10 (default).
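To summarize, the two training commands I use look roughly like this (the paths are placeholders for my setup; the flags follow the Quick Start section of the README):

```shell
# Step 1: optimize a vanilla 3DGS model for 7000 iterations at full image
# resolution. -s is the COLMAP dataset, -m the output model directory.
python gaussian_splatting/train.py -s /data/room_scene \
    -m /data/room_scene/output/vanilla_gs \
    --iterations 7000 --resolution 1

# Step 2: optimize the SuGaR model and extract the mesh with the SDF
# regularization. -c points at the checkpoint from step 1;
# --square_size 10 is its default value.
python train.py -s /data/room_scene \
    -c /data/room_scene/output/vanilla_gs \
    -r "sdf" --square_size 10
```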
Following the previous indications and considerations, I obtain both the output of the trained Gaussian Splatting model and the mesh of the reconstructed scene. The results can be observed visually in the following images:
Gaussian Splatting point_cloud.ply (WebGL 3D Gaussian Splat Viewer):
Reconstructed mesh by SuGaR (Real-Time Viewer):
Reconstructed mesh by SuGaR (Blender):
As can be seen in the three previous images, the results are promising but not as faultless as I would like. In particular, in the point cloud of the Gaussian Splatting output I already observe too many floating Gaussians in the middle of the room, generating a kind of visible fog that degrades the result. Furthermore, the two images showing the reconstructed mesh also reveal clear errors (holes and ellipsoidal bumps) in the final result.
Given this case study, and with your knowledge in this regard, which parameters would you consider important to modify in order to improve the final reconstruction and obtain a mesh much closer to reality?
Thank you for your time and efforts!
I got the same result as yours. When importing the extracted mesh into Blender, there are many bumps and holes...
I checked the point_cloud.ply generated by 3DGS, and it looks good.
Has anyone else had the same problem?