I have successfully trained ConvONet on my own object datasets before with nice results!
I have used the `sample_mesh.py` script from ONet to generate `pointcloud.npz` and `points.npz` (occupancy points) files.
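For context, this is roughly how I inspect the generated files (a minimal sketch; the key names and the packed-bit `occupancies` are assumptions based on my object-level data, so yours may differ):

```python
import numpy as np

# Sketch of how I inspect the sampled data; key names follow my ONet-style files
# ('points' in pointcloud.npz, 'points'/'occupancies' in points.npz).
pointcloud = np.load("pointcloud.npz")  # surface samples used as network input
points = np.load("points.npz")          # uniform query points + occupancy labels

print("surface points:", pointcloud["points"].shape)

query = points["points"].astype(np.float32)
# my occupancies were stored with np.packbits, so unpack and trim to the point count
occ = np.unpackbits(points["occupancies"])[: query.shape[0]]
print("query points:", query.shape, "fraction occupied:", occ.mean())
```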
I would now like to train the network to reconstruct very large scenes, for some of which I have ground-truth meshes. How do I go about this?
Can I simply provide one `points.npz` and `pointcloud.npz` file per scene? How do I make sure there are enough occupancy samples per crop? Should I simply make sure to have 100k occupancy points per crop, where the crop size is defined by `voxel_size * resolution`?
Or do I need to do the cropping myself?
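To make the per-crop question concrete, this is the kind of density check I have in mind (a minimal sketch; `voxel_size`, `resolution`, and the `points.npz` key names are placeholder assumptions from my own setup, not values from the repo config):

```python
import numpy as np

# Count how many occupancy samples land in each crop of side voxel_size * resolution.
voxel_size, resolution = 0.02, 64        # placeholder values, not from the repo config
crop_size = voxel_size * resolution

data = np.load("points.npz")
pts = data["points"].astype(np.float32)  # (N, 3) query points in scene coordinates

# Bucket every point into an integer crop index per axis, then count points per crop.
idx = np.floor((pts - pts.min(axis=0)) / crop_size).astype(np.int64)
_, counts = np.unique(idx, axis=0, return_counts=True)

print("crops:", len(counts),
      "min / median samples per crop:", counts.min(), int(np.median(counts)))
```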
Kind regards