PyTorch to Calyx Lowering Path #1857
-
Thanks @jiahanxie353 for summarizing the key ideas. Torch-MLIR was a good target in the PyTorch 1.0 era, but it may be less attractive now that PyTorch 2.0 has been released. Most of the facilities of torch-mlir are based on TorchScript, which has lots of limitations (as pointed out in this paper) and is no longer actively maintained by the PyTorch team. PyTorch 2.0 greatly enhances compiler support by leveraging TorchDynamo as the frontend and torch.fx as the IR, and users can then plug in different backends. As a result, in Allo, we chose to embrace torch.fx and minimize the dependency on external libraries like torch-mlir. Basically, we take in a PyTorch module, lower it to torch.fx, generate the corresponding Allo representation, and finally generate MLIR programs represented in linalg/affine. We can already support lowering some basic models from PyTorch, for example, lowering a BERT layer to MLIR. I think we can collaborate more on this lowering path, and we may have more flexibility in backend selection. Either going through the AMC dialect or directly lowering from linalg/affine to Calyx would be fine.
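To make the "take in a PyTorch module, lower it to torch.fx" step above concrete, here is a minimal sketch of torch.fx symbolic tracing. The `TinyMLP` module is a hypothetical example (not from this thread); the point is just that `symbolic_trace` yields a graph IR with one node per op, which a frontend like Allo can then walk to emit its own representation.

```python
import torch
import torch.fx

class TinyMLP(torch.nn.Module):
    """A toy module, purely for illustration."""
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

# symbolic_trace produces a GraphModule; its graph holds placeholder,
# call_module / call_function, and output nodes in execution order.
gm = torch.fx.symbolic_trace(TinyMLP())
for node in gm.graph.nodes:
    print(node.op, node.target)
```

A backend would iterate over `gm.graph.nodes` exactly like this loop does, mapping each `call_module`/`call_function` node to its own ops.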
-
Just had a meeting with @evanmwilliams, he's also interested in working on the PyTorch-Allo side.
-
FYI, PyTorch 2.0 got accepted by this year's ASPLOS, and here is the paper:
-
Hi folks,
I'd like to bring up the conversation regarding lowering PyTorch to Calyx in the CIRCT compiler, as mentioned in Theme 1 of this discussion.
After some research, there are two options here: Torch-MLIR, or Allo (which is not open source for now).
Option 1: Using Torch-MLIR
If we were to take this route, I believe we would be involved in both:
Option 2: Using Allo
Alternatively, we can use Allo as our frontend, which takes machine learning models in Python and can directly lower them to MLIR dialects, such as Linalg, Tensor, etc. Specifically, it can be lowered to the AMC dialect (which is also not open source), which can in turn be lowered to Calyx if we want.
My Thoughts on Different Options
Based on my current understanding of the big picture, I would suggest using Allo as our frontend while supporting different dialect lowerings in CIRCT in the meantime.
Work involving the PyTorch frontend could be complicated and a headache, since PyTorch is in the middle of a big shift (PyTorch 2.0 was just released), and since PyTorch has a complete compilation flow of its own, I don't think they are putting a lot of effort into Torch-MLIR. Using Allo has a great advantage over that route.
After talking with @chhzh123, I think we can add our support for the PyTorch-to-MLIR path in the Allo frontend. Although they have very comprehensive support for the Python-to-MLIR path, the PyTorch-to-MLIR path is still under development.
Once the PyTorch-to-MLIR path is complete in the Allo frontend, we can work on lowering from different dialects to Calyx in CIRCT. I mean, although they have passes to lower from different dialects to AMC and we could simply create a `lower-amc-to-calyx` pass, that might constrain us by relying on AMC too much. On the other hand, we can add comprehensive and reusable support for lowering from different dialects (like `loopschedule`, `scf`) to `calyx` in CIRCT, as we've been working on.
Please let me know what you think! @sampsyo @rachitnigam @andrewb1999
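For context on the CIRCT side of this plan, here is a tiny, hand-written sketch of the kind of `scf`-level IR that an existing CIRCT pass like `--lower-scf-to-calyx` (in `circt-opt`) consumes. The `@accum` function is hypothetical, just to illustrate the dialect level we'd be lowering from:

```mlir
// Sum the elements of an 8-element buffer with an scf.for loop.
func.func @accum(%buf: memref<8xi32>) -> i32 {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  %c8 = arith.constant 8 : index
  %zero = arith.constant 0 : i32
  %sum = scf.for %i = %c0 to %c8 step %c1 iter_args(%acc = %zero) -> i32 {
    %v = memref.load %buf[%i] : memref<8xi32>
    %next = arith.addi %acc, %v : i32
    scf.yield %next : i32
  }
  return %sum : i32
}
```

Code at this level (structured control flow over memrefs) is exactly what the scf-to-calyx path turns into Calyx components, groups, and control, so supporting more dialects at this layer stays reusable regardless of whether the frontend is Allo or Torch-MLIR.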