About memory use #2
Hello, I created a small dataset to try the model, but PyTorch reports an error that the GPU has run out of memory. My graphics card is an NVIDIA 1080 Ti.
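When debugging this kind of failure, it can help to first confirm how much GPU memory is available and in use. Below is a minimal sketch using PyTorch's CUDA utilities (assuming a reasonably recent PyTorch; older releases named `memory_reserved` as `memory_cached`):

```python
import torch

# Quick check of GPU memory; run before and after building the model/batch.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Device: {props.name}, total: {props.total_memory / 1024**3:.1f} GiB")
    print(f"Allocated: {torch.cuda.memory_allocated(0) / 1024**3:.2f} GiB")
    print(f"Reserved:  {torch.cuda.memory_reserved(0) / 1024**3:.2f} GiB")
else:
    print("CUDA is not available")
```

Comments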
Hi @xin-xinhanggao, thank you for your query. I also used an NVIDIA 1080 Ti to train the model, so I am not sure why the error appears for you. PS: Did you change the batch size in train.py, line 109? dataset = DataLoader(data_loader, batch_size=1, num_workers=0, shuffle=True) If so, change it back to 1. Since each sequence already contains 7 images, increasing the batch size can cause a memory error.
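For reference, a minimal sketch of why batch_size must stay at 1 here. The SequenceDataset below is a hypothetical stand-in for the repository's dataset (the frame count of 7 comes from the comment above; the image dimensions are made up), but it shows how a single "batch" already carries 7 frames:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SequenceDataset(Dataset):
    """Hypothetical stand-in: each item is a sequence of 7 frames,
    so even batch_size=1 moves 7 images to the GPU per step."""
    def __init__(self, num_sequences=10, frames=7, channels=3, h=256, w=256):
        self.data = torch.randn(num_sequences, frames, channels, h, w)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

data_loader = SequenceDataset()
# train.py, line 109, as quoted above: keep batch_size=1.
dataset = DataLoader(data_loader, batch_size=1, num_workers=0, shuffle=True)

batch = next(iter(dataset))
print(batch.shape)  # torch.Size([1, 7, 3, 256, 256]): 7 frames per "batch"
# With batch_size=16 this would be [16, 7, 3, 256, 256]: 112 frames at once.
```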
Thank you for your timely reply. As you said, I changed the batch size to 16 to train the model more effectively; that must be why it failed. But is there any problem with using a batch size of 1? It seems that it may affect the convergence of the model.
Hi, in the paper they used a batch size of 4 sequences, so 16 is far too large in any case. (They trained on an NVIDIA DGX-1!)
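If memory only allows batch_size=1 but you want the effective batch of 4 sequences used in the paper, gradient accumulation is a common workaround. This is not code from this repository, just a generic sketch; the model, optimizer, and loss below are placeholders:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholders standing in for the real model/optimizer from train.py.
model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loader = [torch.randn(1, 10) for _ in range(8)]  # batch_size=1 mini-batches

accum_steps = 4  # effective batch of 4 sequences, matching the paper
optimizer.zero_grad()
for step, batch in enumerate(loader):
    loss = model(batch.to(device)).pow(2).mean()  # placeholder loss
    (loss / accum_steps).backward()  # scale so accumulated gradients average
    if (step + 1) % accum_steps == 0:
        optimizer.step()             # one weight update per 4 sequences
        optimizer.zero_grad()
```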
Thanks for your reply :)