GPU Solver Enabled Via Env Variable #207

Open · wants to merge 3 commits into base: DKMQ
Conversation

SoundsSerious · Contributor

Created a PyNite Solvers module that handles importing the numpy/scipy solvers, or creating torch-based GPU solvers with the same names.

The torch GPU solver is only used when the env var PYNITE_GPU is "True". If it isn't set, the user is alerted with the message "GPU not available: PYNITE_GPU environmental variable not set to True".

If torch isn't installed or has a configuration error, the user will get a corresponding error message: "GPU not available: " followed by the underlying error.
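A minimal sketch of what this env-var-gated dispatch could look like (function names and messages here are illustrative, not the exact code in the PR):

```python
import os

import numpy as np
from scipy.sparse.linalg import spsolve


def _gpu_available():
    """Return (ok, reason) for the torch-based GPU path.

    Hypothetical helper, not PyNite's actual API.
    """
    if os.environ.get('PYNITE_GPU') != 'True':
        return False, 'PYNITE_GPU environmental variable not set to True'
    try:
        import torch
        if not torch.cuda.is_available():
            return False, 'torch reports no CUDA device'
        return True, ''
    except Exception as exc:  # ImportError or a torch configuration error
        return False, str(exc)


def solve(K, F):
    """Solve K @ x = F, preferring the GPU when enabled and available."""
    ok, reason = _gpu_available()
    if not ok:
        print(f'GPU not available: {reason}')
        return spsolve(K, F)  # fall back to the existing sparse CPU path
    import torch
    # torch.linalg.solve operates on dense tensors, so densify first
    K_d = torch.as_tensor(K.toarray(), device='cuda', dtype=torch.float64)
    F_d = torch.as_tensor(np.asarray(F), device='cuda', dtype=torch.float64)
    return torch.linalg.solve(K_d, F_d).cpu().numpy()
```

With PYNITE_GPU unset (or torch missing), the call transparently falls back to scipy's sparse solver, which matches the error-message behavior described above.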

The only other thing I would add to this is a gpu entry in the extras_require section of setup.py for torch.
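That setup.py addition could look something like this (a hypothetical fragment; the package metadata shown is not taken from PyNite's actual setup.py), letting users opt in with `pip install PyNiteFEA[gpu]`:

```python
from setuptools import setup

setup(
    name='PyNiteFEA',
    # ...existing metadata...
    extras_require={
        # optional GPU support; torch is only needed when PYNITE_GPU is used
        'gpu': ['torch'],
    },
)
```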

The test suite passes in ~40 s (vs. ~60 s previously) on an NVIDIA 4070 on my machine.

@JWock82 · Owner

JWock82 commented Sep 7, 2024

This is an interesting thought. From what I've gathered, GPU calculations are much faster for dense matrices, but don't handle sparse matrices. I see your code converts the sparse matrices back to dense matrices prior to the GPU solution. That is a bit misleading to the user if they request a sparse solution and get a dense GPU solution instead.

Is GPU solution of a dense matrix faster than a CPU solution of a sparse matrix? I'm not sure. Most stiffness matrices are sparse (lots of zero terms except around the diagonal of the matrix). I think your code is going to be faster for small models, and possibly slower for large models.
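The densification concern can be made concrete: a banded, stiffness-like matrix stays small in sparse storage but grows quadratically when converted to dense form before the GPU solve. A rough sketch (not PyNite code):

```python
import numpy as np
from scipy.sparse import diags

# Banded "stiffness-like" matrix: nonzeros only near the diagonal
n = 10_000
K = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='csr')

# CSR storage is proportional to the number of nonzeros (~3n here)
sparse_bytes = K.data.nbytes + K.indices.nbytes + K.indptr.nbytes
# Dense float64 storage is n * n * 8 bytes regardless of sparsity
dense_bytes = n * n * 8

print(f'sparse: {sparse_bytes / 1e6:.1f} MB, dense: {dense_bytes / 1e6:.1f} MB')
```

For this 10,000-DOF example the dense copy is hundreds of megabytes while the sparse form is well under a megabyte, which is why large models may hit memory limits or slow down on the dense GPU path.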

It seems that your code is making the GPU solution the default. I think it would be better to leave the existing solvers as the default, with the GPU solver as a third option. If we play with it and feel it is faster all-around we could later change it to be the default.
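One way to keep the existing solvers as defaults with the GPU path as an explicit third option could be a small backend-selection function like this (a hypothetical sketch; the names are illustrative, not PyNite's actual API):

```python
def get_solver(name='sparse'):
    """Return a solve(K, F) callable for the requested backend."""
    if name == 'sparse':
        # default: scipy's sparse direct solver
        from scipy.sparse.linalg import spsolve
        return spsolve
    elif name == 'dense':
        # dense CPU solve via numpy
        import numpy as np
        return np.linalg.solve
    elif name == 'gpu':
        # opt-in dense GPU solve via torch
        import torch

        def gpu_solve(K, F):
            K_d = torch.as_tensor(K, device='cuda', dtype=torch.float64)
            F_d = torch.as_tensor(F, device='cuda', dtype=torch.float64)
            return torch.linalg.solve(K_d, F_d).cpu().numpy()

        return gpu_solve
    raise ValueError(f'unknown solver: {name}')
```

Under this arrangement nothing changes for existing users, and the GPU backend is only imported when explicitly requested.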

I don't want to complain about code you shared freely, but it'd be nice to have more comments in the code. I'm not familiar with pytorch, so I had to use ChatGPT to help me dissect what was going on here. I appreciate you sharing this idea.
