I'm trying to use a pre-trained model for inference, but my GPUs don't seem to have enough memory. How much memory is needed for inference with EquiformerV2 (up to a few hundred atoms in a single structure)? I noticed in the EquiformerV2 paper that multiple V100s with 32GB of memory each were used for training. Is that much memory also necessary for inference?
Looking forward to your reply!
Hi @Ramblekiss, I believe a 31M-parameter EquiformerV2 should be able to run inference (in FP32) on a 32GB GPU with a few hundred atoms (depending, of course, on the size of the radius graph you choose). In the EqV2 paper, training did not use any model-parallelism techniques, so the number of GPUs had no impact on the model size or the number of atoms you can fit. I would try it and report back!
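As a rough sanity check (not an official figure), you can estimate the memory the weights alone need: a 31M-parameter model in FP32 takes about 120 MiB, so during inference the peak memory is dominated by activations, which grow with the number of atoms and edges in the radius graph, not by the weights. A minimal sketch of that arithmetic:

```python
# Back-of-envelope estimate of the GPU memory needed just to hold the
# model weights. Activation memory (which scales with the number of
# atoms/edges in the radius graph) usually dominates during inference,
# so this is a lower bound, not a total.

def param_memory_mib(num_params: int, bytes_per_param: int = 4) -> float:
    """Memory in MiB to store the parameters (4 bytes each in FP32)."""
    return num_params * bytes_per_param / (1024 ** 2)

# 31M-parameter EquiformerV2 in FP32:
weights_mib = param_memory_mib(31_000_000)
print(f"FP32 weights: ~{weights_mib:.0f} MiB")  # ~118 MiB
```

In practice you can confirm the actual peak with `torch.cuda.max_memory_allocated()` after a forward pass on a representative structure.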