
OPM FLOW - GPU ACCELERATION #5452

Open
EnriqueRamonV opened this issue Jun 27, 2024 · 9 comments

Comments

@EnriqueRamonV

I am a user of OPM Flow and am interested in optimizing its performance by utilizing the GPU in my system. Could you please confirm if it's possible to activate GPU acceleration for calculations in OPM Flow, and how I can do so? Are there specific settings within the software that I need to adjust?
I appreciate any guidance or additional documentation you can provide on this matter.

Best regards,
Enrique Ramon

@multitalentloes
Contributor

multitalentloes commented Jul 9, 2024

It is possible for both Nvidia and AMD cards, but note that only parts of the simulator are currently GPU-accelerated. In addition, there is no guarantee that running a simulation on your GPU will be faster. As of now you typically need quite a large simulation for GPUs to pay off, and hard simulation cases are usually also better on the CPU because more advanced numerical algorithms have been implemented there.

The main part of OPM currently supported on GPUs is the linear solver. To make sure a simulation runs on the GPU, I provide a JSON file that describes the linear solver via the --linear-solver=/path/to/file.json option. The JSON may look something like this:

{
   "tol": "0.01",
   "maxiter": "200",
   "verbosity": "0",
   "solver": "gpubicgstab",
   "preconditioner": {
       "type": "GPUDILU"
   }
}

Here the important part is to set the solver to "gpubicgstab", which is a biconjugate gradient stabilized (BiCGSTAB) method.
You should also select a preconditioner; right now probably "GPUDILU" or "OPMCUILU0".
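
For illustration, with that JSON saved to disk the run command might look something like the following (the deck name CASE.DATA is a placeholder for your own case; the option itself is the one described above):

  flow CASE.DATA --linear-solver=/path/to/file.json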

I know there are other ways to run OPM with GPUs, for instance through the BDA bridge, but I am not very familiar with how to do that.

Note that for the preconditioners to be found, OPM must be built with CUDA available if you want to run on an Nvidia GPU, or with HIP available plus CONVERT_CUDA_TO_HIP=ON so that cmake generates the equivalent HIP code that can run on AMD GPUs.
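
As a rough sketch, the configure step could look something like this (only CONVERT_CUDA_TO_HIP comes from the note above; the build type and directory layout are just an example):

  # Nvidia: CUDA is typically found automatically when nvcc is on the PATH
  cmake -DCMAKE_BUILD_TYPE=Release ..

  # AMD: let cmake generate the HIP equivalents of the CUDA code
  cmake -DCMAKE_BUILD_TYPE=Release -DCONVERT_CUDA_TO_HIP=ON ..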

Feel free to ask more questions if this is still unclear :)

Edit: if your version of OPM is older than September 2024, the solver should be called "cubicgstab" and the preconditioner names typically start with "CU" instead of "GPU".
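
For such older versions the JSON above would presumably look like this instead (names inferred from the renaming described here, so double-check them against your own build):

{
   "tol": "0.01",
   "maxiter": "200",
   "verbosity": "0",
   "solver": "cubicgstab",
   "preconditioner": {
       "type": "CUDILU"
   }
}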

@yanncaniouoracle

yanncaniouoracle commented Jan 23, 2025

Hello,

I have tried to run the Norne model on an NVIDIA A10 GPU, both with the binaries and with the modules built from source, and in both cases the log says the solver is not known.

Error: [/home/ubuntu/opm-simulators/opm/simulators/linalg/FlexibleSolver_impl.hpp:220] Properties: Solver gpubicgstab not known.
Simulation aborted as program threw an unexpected exception: [/home/ubuntu/opm-simulators/opm/simulators/linalg/FlexibleSolver_impl.hpp:220] Properties: Solver gpubicgstab not known.

@multitalentloes: When you say

Note that for the preconditioners to be found OPM must be built with CUDA available if you want to run on an Nvidia GPU

does that mean we must tell cmake how to find CUDA?

Thank you for your guidance.

@multitalentloes
Contributor

Thank you @yanncaniouoracle for your interest in the GPU support of OPM.

The distributed binaries indeed do not support the gpubicgstab linear solver; I think this is the cause of the error you are seeing. CUDA will typically be found automatically by cmake, but there are other default options that can interfere with compiling this particular linear solver. You can try recompiling after running cmake with -DUSE_GPU_BRIDGE=OFF (or -DUSE_BDA_BRIDGE=OFF, depending on how new your OPM source code is).

After recompiling the source code with the proper cmake arguments, this will hopefully work. If it does not, please reach out again here and I will help you further.
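
Concretely, a reconfigure and rebuild might look like this (the build directory name is just an example):

  cd opm-simulators/build
  cmake -DUSE_GPU_BRIDGE=OFF ..      # or -DUSE_BDA_BRIDGE=OFF on older source trees
  make -j$(nproc)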

@yanncaniouoracle

@multitalentloes Thanks for your reply.

Unfortunately the same error persists. I am using Ubuntu 22.04, NVIDIA driver version 565.57.01, CUDA version 12.7, and flow is the latest version from the repo (2025.04-pre). For both options, CMake says the manually specified variables were not used.

CMake Warning:
  Manually-specified variables were not used by the project:

    USE_GPU_BRIDGE

Any other ideas?

@multitalentloes
Contributor

@yanncaniouoracle, can you provide the commit hashes you are using for opm-common, opm-grid, and opm-simulators? There have been many changes to this part of the code during 2024, so with the hashes I can see what options are needed to get it working on precisely the version you are on.

@yanncaniouoracle

yanncaniouoracle commented Jan 29, 2025

@multitalentloes here are the hashes:

  • opm-common efbebfe159d41477aa2acffce9c6648a6b587294
  • opm-grid 3bc81d20df6fb0c5c9e31d728eac558c9d37a753
  • opm-simulators d2b272b5f54779a72923d090b0a6282ae56e241a

I guess most of the work is related to the opm-simulators build, where I can see a few GPU-related options in the CMakeLists.txt.

@multitalentloes
Contributor

@yanncaniouoracle I have checked those specific commits on both my machine (AMD GPU) and that of a coworker (Nvidia GPU) without encountering this issue. I wrote this short script to ensure I started from a blank slate.

The warning that the variable was not used looks suspicious. It could also be that CUDA is not automatically found on your system and that you may need to add some paths directly in ccmake, but primarily I think you should not have gotten that warning if the argument was provided correctly. Check whether you provided it in a similar way to the script.

It could also be that other packages and libraries found on your system lead our two machines down separate execution paths in the cmake script, though I cannot immediately tell what that would be.
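
A couple of quick checks along those lines (the build directory path is just an example): grep the CMake cache to see whether the option actually reached the project, and if CUDA is not picked up automatically, point cmake at the compiler explicitly.

  grep -E "USE_GPU_BRIDGE|USE_BDA_BRIDGE|CUDA" opm-simulators/build/CMakeCache.txt
  cmake -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc ..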

@yanncaniouoracle

Since I have been using a fresh VM, CUDA was not in the PATH environment variable. It is working now.

However, I have seen many deprecation warnings during the build. Which version of CUDA would you recommend?
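
For anyone else landing here with a fresh machine, the fix typically amounts to putting the CUDA toolkit on the PATH before configuring (assuming the default /usr/local/cuda install location):

  export PATH=/usr/local/cuda/bin:$PATH
  export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH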

@multitalentloes
Contributor

Great to hear that you got it working!

Being on the latest CUDA version (12.x) is good for now.

If you want to read a bit more about the linear solver and how to adjust the parameters of the GPU preconditioners for extra performance, you may read this blog post; all of our GPU preconditioners are described there. The warnings relate to the GPUILU0 preconditioner, which we might deprecate/remove in the future as CUDA updates come along. I do not recommend using it anyway, as we have seen from a wide range of benchmarks that it is slower than OPMGPUILU0 and GPUDILU.
