Killed when the data may be too big #134
Comments
We'd welcome a pull request that fixes or optimizes any memory issue that might be present in OpenSplat. 🙏
I don't know if this relates to your problem, but I was trying to use a lidar point cloud as the initial point cloud for splatting and would get a runtime error. A bit of sleuthing suggested that the issue was nanoflann, and it appeared to be hitting the system's recursion limit. An older version of nanoflann (1.3.0) was able to handle my point cloud without problems, but it required a small amount of modification to OpenSplat to use.
We are also having this issue
Same issue here. I have a COLMAP dataset with 3564 images. OpenSplat seems to attempt to load all images into memory at once. After committing ~160 GB of memory, the process shuts down with a cv::OutOfMemoryError. Switching to CPU instead of CUDA makes no difference, nor does setting a downscale factor. Is this an expected limitation of OpenSplat or an unexpected error?
You need more RAM, currently. We'd welcome a PR that improves memory management.
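One direction such a PR could take is bounded, on-demand image loading instead of decoding the whole dataset up front. A minimal sketch, with a stand-in decoder (the `decodeImage` stub and `ImageCache` class below are hypothetical illustrations, not OpenSplat code; in practice `decodeImage` would wrap the real image reader):

```cpp
#include <deque>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for an image decoder: loads one image on demand.
static std::vector<unsigned char> decodeImage(const std::string &path) {
    return std::vector<unsigned char>(16, 0); // placeholder pixel buffer
}

// Minimal FIFO-eviction cache: keeps at most `capacity` decoded images
// in memory instead of loading the whole dataset at once.
class ImageCache {
public:
    explicit ImageCache(std::size_t capacity) : capacity_(capacity) {}

    const std::vector<unsigned char> &get(const std::string &path) {
        auto it = cache_.find(path);
        if (it != cache_.end()) return it->second;
        if (order_.size() >= capacity_) { // evict the oldest entry
            cache_.erase(order_.front());
            order_.pop_front();
        }
        order_.push_back(path);
        return cache_[path] = decodeImage(path);
    }

    std::size_t size() const { return cache_.size(); }

private:
    std::size_t capacity_;
    std::deque<std::string> order_;
    std::unordered_map<std::string, std::vector<unsigned char>> cache_;
};
```

A training loop would call `get()` once per iteration; peak memory is then bounded by `capacity` times the decoded image size rather than by the dataset size, at the cost of re-decoding evicted images.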
I don't know what I may be missing, but in my tests I took a large-scale COLMAP dataset and fed it to OpenSplat, nerfstudio's nerfacto, and 3d_gaussian_splatting respectively; only OpenSplat was killed while importing the data, while nerfstudio trained normally. What factor causes such a difference? Will a fix be attempted in the future?