About Training Parallelism #10
The model was trained on an RTX 2080Ti with 11 GB of memory. A few things you can check:
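Since the 11 GB budget is tight for a 256³ grid, a quick back-of-the-envelope memory check can help narrow down where the OOM comes from. The sketch below is a hypothetical helper (not from the repository) that estimates the size of one dense float32 activation volume; sparse representations like MinkowskiEngine's use far less, but any dense intermediate at this resolution is already large.

```python
def dense_volume_bytes(resolution, channels, batch, dtype_bytes=4):
    """Bytes needed to store one dense activation tensor of shape
    (batch, channels, resolution, resolution, resolution) in float32."""
    return batch * channels * resolution ** 3 * dtype_bytes


# A single 1-channel float32 occupancy grid at 256^3 is already 64 MiB,
# so a few multi-channel feature volumes per sample quickly dominate 11 GB.
one_grid = dense_volume_bytes(resolution=256, channels=1, batch=1)
print(one_grid / 2 ** 20, "MiB")
```

With, say, 32 feature channels and a batch of 4, the same formula gives 8 GiB for a single activation tensor, which is why batch size and channel width are the first knobs to turn when the 256-level stage runs out of memory.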
Thanks for the reply! However, I still cannot get the level-256 training to run. I am using the latest version of BlenderProc, so I suspect the format of my generated data differs from yours (which may also affect performance at levels 64 and 128). I am still investigating. I inspected the 3D-FRONT dataset and read the code of your forked BlenderProc, and there is something I am still wondering about:
Hi. I wanted to train the model on my own dataset, but I found that CUDA memory runs out during the occupancy_256 prediction. I tried nn.DataParallel to run the model on multiple GPUs, but it raises the following error:
I searched for this error and found that it is an unresolved issue in MinkowskiEngine (link here). I wonder how you trained the model on your machine, and could you suggest any other possible solutions to make it work? Thank you!
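For what it's worth, MinkowskiEngine's own multi-GPU examples avoid nn.DataParallel in favor of DistributedDataParallel, where each process owns one GPU and builds its own sparse tensors. The sketch below is a minimal, hypothetical illustration of that pattern (not code from this repository): it runs single-process on CPU with the "gloo" backend so the structure is visible without GPUs, and uses a plain nn.Linear as a stand-in for the occupancy network.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def run_one_step():
    # Single-process setup for illustration; a real run would launch one
    # process per GPU (e.g. via torchrun) with matching rank/world_size.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group(backend="gloo", rank=0, world_size=1)

    # Stand-in for the occupancy network; with MinkowskiEngine each rank
    # would construct its own sparse tensor batch before the forward pass.
    model = DDP(nn.Linear(8, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(4, 8)
    loss = model(x).pow(2).mean()
    opt.zero_grad()
    loss.backward()  # DDP all-reduces gradients across ranks here
    opt.step()

    dist.destroy_process_group()
    return loss.item()


print(run_one_step())
```

Unlike nn.DataParallel, this never replicates the module inside one process, which sidesteps the replication issue reported for MinkowskiEngine, and it also tends to scale better in PyTorch generally.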