PyTorch CUDA memory issue #35

Open
sminy67 opened this issue Nov 17, 2022 · 0 comments

sminy67 (Contributor) commented Nov 17, 2022

When I use memory-intensive layers such as the embedding tables of DLRM, I run into a CUDA out-of-memory error. Model parallelism could be a workaround, but it doesn't look like a fundamental solution to this issue. Could I instead use CPU-side DRAM as backing storage for the embedding tables, and keep only the most frequently used embedding vectors in GPU memory?
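A minimal sketch of what I have in mind is below. Everything here is hypothetical (the `CachedEmbedding` class, the `hot_ids` argument, the slot bookkeeping), and it is inference-only: gradients do not flow back to the CPU-resident table.

```python
import torch
import torch.nn as nn


class CachedEmbedding(nn.Module):
    """Inference-only sketch: the full table lives in pinned CPU memory,
    and the most frequently accessed rows are mirrored in GPU memory."""

    def __init__(self, num_embeddings, embedding_dim, hot_ids, device="cuda"):
        super().__init__()
        # Full table in pinned (page-locked) host memory so host-to-device
        # copies of cache misses can be made asynchronous.
        self.weight = torch.randn(num_embeddings, embedding_dim).pin_memory()
        self.device = device
        # hot_ids: 1-D LongTensor of the most frequently used row indices,
        # e.g. collected from access statistics over the training data.
        hot_ids = hot_ids.cpu()
        # Map global row id -> slot in the GPU cache (-1 means not cached).
        slot = torch.full((num_embeddings,), -1, dtype=torch.long)
        slot[hot_ids] = torch.arange(hot_ids.numel())
        self.register_buffer("slot_of", slot.to(device))
        # GPU-resident copy of the hot rows only.
        self.register_buffer("hot_weight", self.weight[hot_ids].to(device))

    def forward(self, ids):
        flat = ids.reshape(-1).to(self.device)
        slots = self.slot_of[flat]
        hit = slots >= 0
        out = torch.empty(flat.numel(), self.hot_weight.size(1),
                          device=self.device)
        # Cache hits are served directly from GPU memory.
        out[hit] = self.hot_weight[slots[hit]]
        # Cache misses fall back to a copy from the pinned CPU table.
        # (A production version would stage through a pinned buffer to make
        # this copy truly asynchronous, and would support gradients.)
        miss = flat[~hit].cpu()
        out[~hit] = self.weight[miss].to(self.device)
        return out.view(*ids.shape, -1)
```

As far as I know, existing DLRM systems (e.g. NVIDIA's HugeCTR embedding cache, and the UVM-managed cached embedding tables in FBGEMM/TorchRec) implement variants of this hot/cold split with gradient support and dynamic eviction, so maybe something similar is feasible here.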
