
RuntimeError: CUDA out of memory. Tried to allocate 756.00 MiB (GPU 0; 6.00 GiB total capacity; 3.41 GiB already allocated; 0 bytes free; 4.59 GiB reserved in total by PyTorch) #5

Open
909982211 opened this issue Oct 23, 2022 · 2 comments


909982211 commented Oct 23, 2022

The GPU I used is an RTX 2060 6 GB... is that really not enough?

OK, it's my fault: I should use --gpu1 (because device 0 is not the Nvidia GPU). But another problem appears when the program is about to finish:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

I am sure that my CUDA version is correct and that it matches my PyTorch install.
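For what it's worth, the fix the traceback itself suggests is to pass map_location to torch.load. A minimal sketch, assuming the script loads a checkpoint somewhere (the file name below is a placeholder, not the project's actual path):

```python
import torch

# Pick whichever device is actually usable; falls back to CPU when
# torch.cuda.is_available() is False.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# "checkpoint.pth" is a placeholder for the project's checkpoint file.
state = torch.load("checkpoint.pth", map_location=device)
```

Note that torch.cuda.is_available() returning False despite a correct driver usually means a CPU-only PyTorch wheel is installed.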

@monero778176

I used a GTX 1650 4 GB on the Windows platform; it works and 4 GB is enough. Maybe check the environment settings.


909982211 commented Oct 29, 2022

> I used a GTX 1650 4 GB on the Windows platform; it works and 4 GB is enough. Maybe check the environment settings.

Now it says:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

(screenshot of the traceback)
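A quick way to check the environment setting is to print what PyTorch itself reports, with no project code involved:

```python
import torch

# Show the installed build and whether PyTorch can see a CUDA device.
print("torch version:", torch.__version__)
print("built with CUDA:", torch.version.cuda)  # prints None on a CPU-only wheel
print("cuda available:", torch.cuda.is_available())

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print("device", i, ":", torch.cuda.get_device_name(i))
```

If torch.version.cuda prints None, the installed wheel is the CPU-only build and needs to be replaced with a CUDA-enabled one.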
