
GPU Memory Size Needed about EquiformerV2 #960

Open
Ramblekiss opened this issue Jan 8, 2025 · 1 comment
Labels
question Further information is requested

Comments

@Ramblekiss

What would you like to report?

Hi Fair-chemers!

I'm trying to use a pre-trained model for inference, but my GPUs don't seem to have enough memory. How much memory is needed for inference with EquiformerV2 on structures of up to a few hundred atoms? I noticed in the EquiformerV2 paper that multiple V100s with 32 GB of memory each were used for training. Is that much memory also necessary for inference?

Looking forward to your reply!

@rayg1234
Collaborator

rayg1234 commented Jan 9, 2025

Hi @Ramblekiss, I believe the 31M-parameter EquiformerV2 should be able to run inference (in FP32) on a single 32 GB GPU with a few hundred atoms, depending of course on the size of the radius graph you choose. In the EQv2 paper, training did not use any model-parallelism techniques, so the number of GPUs has no bearing on the model size or the number of atoms you can fit on one device. I would try it and report back!
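Since memory use is dominated by the radius graph rather than by parameter count, a quick back-of-envelope estimate can help before trying a GPU. The sketch below is purely illustrative: the atomic density, per-edge feature width, and cutoff are hypothetical placeholder values, not numbers from the EquiformerV2 paper or the fairchem code.

```python
import math

def estimate_radius_graph_edges(num_atoms: int, cutoff: float = 12.0,
                                density: float = 0.05) -> int:
    """Rough upper bound on directed edges in a radius graph.

    Assumes a hypothetical uniform atomic density (atoms per cubic
    angstrom); real structures vary widely, so treat this as an
    order-of-magnitude estimate only.
    """
    neighbors_per_atom = density * (4.0 / 3.0) * math.pi * cutoff ** 3
    return int(num_atoms * neighbors_per_atom)

def estimate_activation_mib(num_edges: int, feats_per_edge: int = 4096,
                            bytes_per_float: int = 4) -> float:
    """Back-of-envelope FP32 memory for one layer of edge features, in MiB.

    feats_per_edge is a made-up stand-in for the model's per-edge
    activation width; it is not taken from the actual architecture.
    """
    return num_edges * feats_per_edge * bytes_per_float / 2 ** 20

# A few hundred atoms, as in the question above.
edges = estimate_radius_graph_edges(300)
print(edges, round(estimate_activation_mib(edges)))
```

Even with these rough assumptions, the estimate makes the collaborator's point concrete: per-layer edge activations for a few hundred atoms land in the low gigabytes, well within a 32 GB card, but the total grows with the cutoff radius cubed, so shrinking the radius graph is the first knob to turn if you run out of memory.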

@lbluque lbluque added the question Further information is requested label Jan 22, 2025