Evaluate and test using RTX 4070 12GB #15
Comments
You don't necessarily have to use an RTX 3090 GPU. However, keep in mind that the 3090 comes with 24 GB of VRAM, twice as much as your GPU. You can try to (i) reduce the batch size, or (ii) reduce the image size. Let me know how it goes!
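As a rough back-of-the-envelope check of why either option helps (the function name and tensor shapes below are illustrative, not taken from the repository), halving the batch size halves a feature map's memory footprint, and shrinking the image side length reduces it quadratically:

```python
def feature_map_mib(batch, channels, height, width, bytes_per_el=4):
    """Memory of a single float32 feature map in MiB."""
    return batch * channels * height * width * bytes_per_el / 2**20

full = feature_map_mib(8, 64, 512, 512)        # baseline shape (illustrative)
half_batch = feature_map_mib(4, 64, 512, 512)  # option (i): smaller batch
half_image = feature_map_mib(8, 64, 256, 256)  # option (ii): smaller image

print(full, half_batch, half_image)  # 512.0 256.0 128.0
```

Real training memory also includes weights, gradients, and optimizer state, so the savings on a full run will be smaller than this per-tensor estimate suggests.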
Hi,
Thanks for sharing your results. It is very likely that the image size will affect the performance of the method. For a fair comparison, you could compare to baselines using the same image size.
Hello,
I would like to ask: can I use an RTX 4070 12GB for evaluation and testing? When I try to run the mean-teacher adaptation provided in this repository, I get the following error:
RuntimeError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 11.72 GiB total capacity; 9.59 GiB already allocated; 47.88 MiB free; 9.81 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.
Can i do something about it or should i use RTX 3090 GPU?
Thank you for your time!
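As the error message itself suggests, one low-effort mitigation to try alongside a smaller batch or image size is setting `max_split_size_mb` through `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation. A minimal sketch (the value 128 is illustrative; the variable must be set before the first CUDA allocation, i.e. before any tensor touches the GPU):

```python
import os

# Must be in the environment before PyTorch initializes its CUDA allocator,
# so set it at the very top of the script or in the shell before launching.
# 128 MiB is an illustrative starting value; tune it for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Equivalently, `export PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:128"` in the shell before launching the training script avoids editing the code at all.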