Interested in learning more about how Calyx can work with generalized sparse matrix multiplication over semirings. #1959
Replies: 2 comments
-
Hi @michelp! Thank you for reaching out! GraphBLAS is super exciting, and we have had preliminary conversations with our Cornell colleague Giulia Guidi about potential collaborations around building GraphBLAS accelerators using Calyx. There are a couple of projects in the pipeline that are particularly well aligned, so it might be worth chatting. Please feel free to reach out to me at [email protected] if you're interested in chatting more about this, and let us know how the "hello world" goes! CC @sampsyo
-
Yes, super exciting, @michelp! Thanks for reaching out. This kind of work would be extremely interesting to us. On where to start:
We have attempted a few graph algorithms in Calyx before, but it has been a while since we seriously worked on this. It would probably be a good idea to start with an extremely naive, intentionally slow implementation just to get familiar with the IR and infrastructure; such a version of "hello world" might not be too terrible! You probably already found these, but the language tutorial is a good place to start for the very basic stuff. You might also be interested in the Python library for emitting Calyx, which can really cut down on verbosity and make it easier to parameterize things. Please do let us know if any questions come up!
-
Hello!
I'm one of the core contributors to the GraphBLAS API and its Python and PostgreSQL bindings. The GraphBLAS is a sparse graph analytics API that uses Linear Algebra to work with sparse matrices.
Before the GraphBLAS, graph algorithms were often described theoretically using linear algebra, but then encoded into a machine algorithm with a handwritten implementation tuned for a specific architecture. Moving to a new architecture or language required rewriting and re-tuning the algorithm. By expressing graph algorithms as sequences of matrix operations in linear algebra, the task of optimizing memory access, parallelization, and hardware targeting is handled by the library, not the programmer. In a sense, the GraphBLAS can be thought of as a high-level language based on linear algebra for graph algorithms that "compiles" to specific memory and hardware architectures without changing the code.
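To make the semiring idea concrete, here is a minimal pure-Python sketch (not the GraphBLAS API itself; `semiring_matmul` and its parameters are hypothetical names for illustration) of a matrix multiply parameterized by an arbitrary semiring. Swapping the ordinary (plus, times) pair for, say, (min, plus) turns the same kernel into a shortest-path step:

```python
def semiring_matmul(A, B, add, mul, zero):
    """Multiply dense matrices A (m x k) and B (k x n) over the
    semiring given by `add`, `mul`, and the additive identity `zero`."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[zero] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = zero
            for p in range(k):
                # The only semiring-specific parts are add/mul/zero.
                acc = add(acc, mul(A[i][p], B[p][j]))
            C[i][j] = acc
    return C

# Ordinary arithmetic: the (plus, times) semiring.
A = [[1, 2], [0, 1]]
B = [[1, 0], [3, 1]]
print(semiring_matmul(A, B, lambda x, y: x + y, lambda x, y: x * y, 0))
# -> [[7, 2], [3, 1]]

# Shortest paths use the (min, plus) "tropical" semiring instead,
# with infinity as the additive identity.
INF = float("inf")
D = [[0, 5], [2, 0]]
print(semiring_matmul(D, D, min, lambda x, y: x + y, INF))
# -> [[0, 5], [2, 0]]
```

A real GraphBLAS implementation works on sparse matrices and fuses masks and accumulators into the kernel, but the semiring parameterization is the same idea.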
Graphs are traversed using sparse matrix multiplication, and operations over edges are defined by semirings. The current state-of-the-art implementation, SuiteSparse:GraphBLAS, uses a JIT compiler to target different architectures (currently various CPUs and CUDA GPUs), but the JIT is flexible and extensible, and I think it could be used to synthesize FPGA kernels for graph linear algebra.
FPGAs could be a big win in this space: they excel at sparse problems that require high bandwidth across problem spaces that defeat traditional von Neumann cache hierarchies. So I look forward to investigating Calyx more and seeing where to start to try and compile a simple "hello world"-like example that does a simple BFS across a graph. Any pointers would be appreciated! I did search around a bit and saw this interesting discussion on pipelining, which I'm digging deeper into.
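For reference, the BFS "hello world" mentioned above can be sketched in a few lines of plain Python. This is a level-synchronous BFS written so that each iteration corresponds to one sparse matrix-vector product over the boolean (or, and) semiring, masked by the set of unvisited vertices; the function name and adjacency-list representation are my own choices, not anything from Calyx or GraphBLAS:

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS. `adj` is an adjacency list:
    adj[u] is an iterable of neighbors of u. Returns the BFS
    level of each vertex, or -1 if unreachable."""
    n = len(adj)
    level = [-1] * n
    level[source] = 0
    frontier = {source}
    depth = 0
    while frontier:
        depth += 1
        # One "matvec" over the (or, and) semiring: expand the
        # frontier along edges, masked by the unvisited vertices.
        next_frontier = set()
        for u in frontier:
            for v in adj[u]:
                if level[v] == -1:
                    level[v] = depth
                    next_frontier.add(v)
        frontier = next_frontier
    return level

# Edges: 0 -> 1, 0 -> 3, 1 -> 2
adj = [[1, 3], [2], [], []]
print(bfs_levels(adj, 0))  # -> [0, 1, 2, 1]
```

In GraphBLAS terms, the inner double loop is `v = A^T (x) f` with a complement mask over visited vertices, which is the kind of kernel one would want an FPGA backend to accelerate.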
Thanks,
-Michel