OPM FLOW - GPU ACCELERATION #5452
It is possible both for Nvidia and AMD cards, but note that only parts of the simulator are currently GPU-accelerated. In addition, there is no guarantee that running a simulation on your GPU will be faster. As of now you typically need a fairly large simulation for GPUs to be faster, and hard simulation cases usually also run better on the CPU, where more sophisticated numerical algorithms have been implemented. The main part of OPM currently supported on GPUs is the linear solver. So to make sure my simulation runs on a GPU, I provide a JSON file that describes the linear solver.
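Such a JSON file might look like the sketch below. The key names follow the general shape of OPM's linear-solver JSON format, but the exact keys, values, and the preconditioner type name vary between versions, so treat them as assumptions and check the opm-simulators sources or documentation for your release:

```json
{
  "tol": 0.01,
  "maxiter": 200,
  "verbosity": 0,
  "solver": "gpubicgstab",
  "preconditioner": {
    "type": "GPUDILU"
  }
}
```

The file can then be passed to flow, for instance through the `--linear-solver` parameter pointing at the JSON file (parameter name assumed; consult `flow --help` for the exact spelling in your version).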
Here the important part is to set the solver to "gpubicgstab", which is a biconjugate gradient stabilized (BiCGSTAB) method. I know there are other ways to run OPM with GPUs, for instance through the BDA bridge, but I am not very familiar with how to do that. Note that for the preconditioners to be found, OPM must be built with CUDA available if you want to run on an Nvidia GPU, and with HIP available if you want to run on an AMD GPU. Feel free to ask more questions if this is still unclear. :) Edit: if your version of OPM is older than September 2024, the solver goes by a different name.
Hello, I have tried to run the Norne model on an NVIDIA A10 GPU, both with the binaries and with the modules built from source, and in both cases the log says the solver is not known.
@multitalentloes: When you say
does that mean we must tell CMake how to find CUDA? Thank you for your guidance.
Thank you @yanncaniouoracle for your interest in the GPU support in OPM. The distributed binaries indeed do not support the GPU solver. After recompiling the source code with the proper CMake arguments this will hopefully work. If it does not, please reach out again here and I will help you further.
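As a rough sketch, reconfiguring an existing build with CUDA support could look like the following. `CMAKE_CUDA_COMPILER` is a standard CMake variable, but the toolkit path below assumes a default install location, and any OPM-specific option names should be verified with ccmake or the CMakeLists of opm-simulators before relying on them:

```shell
# Hypothetical reconfigure of an existing opm-simulators build tree.
# The CUDA install path assumes a default toolkit location on Linux
# and may need adjusting on your system.
cd opm-simulators/build
cmake -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc ..
make -j"$(nproc)" flow
```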
@multitalentloes Thanks for your reply. Unfortunately the same error persists. I am using Ubuntu 22.04, NVIDIA driver version 565.57.01, CUDA version 12.7, and flow is the latest version from the repo (2025.04-pre). CMake says for both options that the manually added variables are not used.
Any other ideas?
@yanncaniouoracle, can you provide the commit hashes you are using?
@multitalentloes here are the hashes:
I guess most of the work happens in one of these repositories.
@yanncaniouoracle I have checked those specific commits on both my machine (AMD GPU) and a coworker's machine (Nvidia GPU) without encountering this issue. I wrote a short script to ensure I was starting from a blank slate.

The warning that the variable was not used looks suspicious. It could be that CUDA is not automatically found on your system and that you need to add some paths directly in ccmake, but primarily I think you should not have gotten that warning if the argument was provided correctly. Check whether you provided it in a similar way to the script. It could also be that other packages and libraries found on your system lead our two machines down separate execution paths in the CMake script, though I cannot immediately tell what those would be.
Since I have been using a fresh VM, CUDA was not found by default. However, I have seen many deprecation warnings during the build. Which version of CUDA would you recommend?
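For reference, on a fresh VM one common way to make CUDA discoverable is to put the toolkit's directories on the usual search paths before configuring. The `/usr/local/cuda` prefix is the default install location on Ubuntu and is an assumption here; adjust it to wherever your toolkit lives:

```shell
# Make a default CUDA install visible to the shell and to CMake's
# compiler detection (the prefix is an assumption; adjust as needed).
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$PATH"
```

After this, a clean CMake run should locate nvcc automatically if the toolkit is actually installed under that prefix.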
Great to hear that you got it working! Being on the latest CUDA version (12.x) is good for now. If you want to read a bit more about the linear solver, and how to adjust the parameters of the GPU preconditioners for extra performance, you may read this blog post; all of our GPU preconditioners are described there. The warnings relate to the CUDA-related parts of the build.
I am a user of OPM Flow and am interested in optimizing its performance by utilizing the GPU in my system. Could you please confirm if it's possible to activate GPU acceleration for calculations in OPM Flow, and how I can do so? Are there specific settings within the software that I need to adjust?
I appreciate any guidance or additional documentation you can provide on this matter.
Best regards,
Enrique Ramon