Sparsity support in pytensor #1127

Open
Ch0ronomato opened this issue Dec 15, 2024 · 5 comments
Labels
enhancement (New feature or request), gradients, linalg (Linear algebra)

Comments

@Ch0ronomato
Contributor

Description

I'm investigating implementing ALS in pytensor, which is usually implemented with sparsity constructs (see implicit for reference). I quickly looked around and saw this older thread where someone asked for sparsity support. @jessegrabowski gave a first-pass answer, but mentioned that the support is subpar. Opening this issue to track any enhancements we could bring.
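
For concreteness, the access pattern ALS needs looks roughly like this (a minimal sketch in plain numpy/scipy, not pytensor; `als_user_step` is just an illustrative name, and this is vanilla ALS rather than implicit's confidence-weighted variant):

```python
import numpy as np
import scipy.sparse as sp

def als_user_step(R, V, lam=0.1):
    """One ALS half-step: update all user factors given item factors V.

    R : (n_users, n_items) sparse CSR ratings matrix
    V : (n_items, k) dense item-factor matrix
    """
    k = V.shape[1]
    # (k, k) regularized Gram matrix, shared across all users in plain ALS
    G = V.T @ V + lam * np.eye(k)
    U = np.empty((R.shape[0], k))
    for u in range(R.shape[0]):
        # sparse row slice and sparse-dense dot: the ops that need to be fast
        b = np.asarray(R.getrow(u) @ V).ravel()
        U[u] = np.linalg.solve(G, b)
    return U

R = sp.random(50, 40, density=0.05, format="csr", random_state=0)
V = np.random.default_rng(0).normal(size=(40, 8))
U = als_user_step(R, V)
```

The item-factor half-step is symmetric (same thing against `R.T`), so every iteration leans on sparse row slicing and sparse-dense products.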

Ch0ronomato added the enhancement, gradients, and linalg labels on Dec 15, 2024
@jessegrabowski
Member

Is this specifically about implementing solve with sparse inputs, or do you have other Ops in mind?

@Ch0ronomato
Contributor Author

The original issue seems to be just solve; I imagine that's good enough to start.

@jessegrabowski
Member

jessegrabowski commented Dec 15, 2024

So for solve, it's easy enough to wrap scipy.sparse.linalg.spsolve for the C backend. We need gradients, but I found a JAX implementation here that we can copy, so that should be easy enough.
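
Very roughly, something like this (a hedged sketch: `SpSolve` is a placeholder name, the `L_op` follows the sparsity-masking trick from the JAX implementation, and I haven't double-checked every `pytensor.sparse` helper's exact signature):

```python
import numpy as np
from scipy.sparse.linalg import spsolve

import pytensor.tensor as pt
from pytensor import sparse
from pytensor.graph.basic import Apply
from pytensor.graph.op import Op

class SpSolve(Op):
    __props__ = ()

    def make_node(self, A, b):
        A = sparse.as_sparse_variable(A)  # sparse (n, n) system matrix
        b = pt.as_tensor_variable(b)      # dense (n,) right-hand side
        return Apply(self, [A, b], [b.type()])

    def perform(self, node, inputs, outputs):
        A, b = inputs
        outputs[0][0] = np.asarray(spsolve(A.tocsc(), b))

    def L_op(self, inputs, outputs, output_grads):
        # With x = A^-1 b: grad_b = A^-T g, grad_A = -grad_b x^T,
        # restricted to A's sparsity pattern so the dense outer
        # product never has to materialize off-pattern (the JAX trick).
        A, b = inputs
        (x,) = outputs
        (g,) = output_grads
        gb = SpSolve()(sparse.transpose(A), g)
        # structured multiply to mask to A's pattern; I haven't verified
        # sparse.mul's exact dispatch for the sparse-dense case
        gA = sparse.neg(sparse.mul(sparse.sp_ones_like(A), pt.outer(gb, x)))
        return [gA, gb]
```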

For the numba backend we need to write our own overrides, as I did for solve_discrete_are, for example. spsolve calls out to SuperLU, which I think we can hook into the same way we do for the other cLAPACK functions. There's also an optional package, umfpack, which scipy appears to prefer when it's available, but that would be a new dependency.
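
For a sense of shape, a stop-gap override could drop to object mode (heavily hedged: `SpSolve` is the hypothetical Op from the sketch above, scipy sparse matrices aren't native numba types so this pretends the CSR components arrive as separate arrays, and a real implementation would bind SuperLU directly like the cLAPACK-backed Ops):

```python
import numba
import scipy.sparse
from scipy.sparse.linalg import spsolve
from pytensor.link.numba.dispatch import numba_funcify

@numba_funcify.register(SpSolve)
def numba_funcify_SpSolve(op, node, **kwargs):
    out_dtype = node.outputs[0].dtype

    @numba.njit
    def sp_solve_csr(data, indices, indptr, n, b):
        # objmode escape hatch: rebuild the scipy matrix and solve there
        with numba.objmode(x=f"{out_dtype}[:]"):
            A = scipy.sparse.csr_matrix((data, indices, indptr), shape=(n, n))
            x = spsolve(A.tocsc(), b).astype(out_dtype)
        return x

    return sp_solve_csr
```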

Finally, for the Torch backend I honestly have no idea. It looks like torch has sparse support as well as an spsolve implementation, so it might be straightforward?
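
A smoke test for that path would be something like the following (hedged: torch.sparse.spsolve support is narrow and version/device-dependent, and recent builds route it through cuDSS, so it may insist on a CUDA CSR tensor):

```python
import torch

# diagonal system with known solution [1., 2., 2.]
i = torch.tensor([[0, 1, 2], [0, 1, 2]])
v = torch.tensor([2.0, 3.0, 4.0])
A = torch.sparse_coo_tensor(i, v, (3, 3)).to_sparse_csr()
b = torch.tensor([2.0, 6.0, 8.0])
x = torch.sparse.spsolve(A, b)
```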

@Ch0ronomato
Contributor Author

Great, thanks for the plan!

Out of curiosity, what else could we do? I'm not aware of what general sparsity support would look like beyond making sure things can use CSR and CSC tensors, which I think pytensor's sparse module already covers.
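
e.g. this already works today, if I remember the constructors right:

```python
import numpy as np
import scipy.sparse

import pytensor
import pytensor.sparse as ps
import pytensor.tensor as pt

x = ps.csr_matrix(name="x", dtype="float64")  # symbolic CSR matrix
y = pt.matrix("y")
z = ps.dot(x, y)                              # sparse-dense dot; output is dense

f = pytensor.function([x, y], z)
A = scipy.sparse.random(4, 4, density=0.5, format="csr", random_state=0)
print(f(A, np.eye(4)))
```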

@jessegrabowski
Member

jessegrabowski commented Dec 16, 2024

Some thoughts:

  1. Agreement/harmonization on the status of the SparseMatrix primitive. I thought there was an issue or discussion about this, but for the life of me I can't find it, @ricardoV94.
  2. Support for as full a suite of linear algebra operations on sparse matrices as possible. This page has the list of what I would consider to be possible. Interesting ones would be spsolve, spsolve_triangular, eigs, svds, and kron (see the scipy sketch after this list).
  3. Better rewrites/detection for when sparse matrices are involved in Ops, for example BUG: .dot() not producing the same as sparse.basic.dot() #321
  4. Support for batch dimensions, see Allow sparse variables to have dummy dimensions to the left? #839
  5. Support for symbolic sparse jacobians from pytensor.gradient.jacobian, see https://github.com/mfschubert/sparsejac
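
For item 2, these are the scipy calls the perform methods would wrap (plain scipy here; the pytensor-level Ops are the missing piece):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs, spsolve, spsolve_triangular, svds

rng = np.random.default_rng(0)
# identity added so the diagonal is nonzero and the system is well-posed
A = sp.random(20, 20, density=0.2, format="csc", random_state=0) + sp.eye(20)
b = rng.normal(size=20)

x = spsolve(A.tocsc(), b)                 # general sparse solve
L = sp.tril(A, format="csr")
y = spsolve_triangular(L, b, lower=True)  # triangular sparse solve
w, v = eigs(A.tocsc(), k=3)               # a few eigenpairs
u, s, vt = svds(A.tocsc(), k=3)           # truncated sparse SVD
K = sp.kron(A, sp.eye(2))                 # sparse Kronecker product
```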
