Enable end2end affine-to-neura lowering (#53)
Conversation
Sorry, some errors occurred when I tried to run the transform. Will fix it soon.
The problem arises from the
Thanks @ShangkunLi, I didn't handle such a case, as GPT/Gemini told me MLIR has a rule that basic blocks only receive their live-ins as block arguments (rather than directly using values defined in other blocks), but it seems that is not always correct. Do you want to fix this in this PR or later? I didn't see the
Filed issue #54. I may try to fix this in the next PR. For this PR, I just tested these lowering patterns.
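For reference, MLIR's SSACFG regions only require that a definition dominates its uses, so a block may refer to a value defined in a dominating block directly instead of receiving it as a block argument. A minimal sketch (the function and value names are ours, not from the PR):

```mlir
func.func @example(%cond: i1) -> i32 {
  %c = arith.constant 42 : i32
  cf.cond_br %cond, ^bb1, ^bb2
^bb1:
  // %c is used here directly, without being passed as a block argument:
  // MLIR only requires that the definition dominate the use.
  %r = arith.addi %c, %c : i32
  cf.br ^bb3(%r : i32)
^bb2:
  cf.br ^bb3(%c : i32)
^bb3(%v: i32):
  return %v : i32
}
```

So a lowering that assumes all cross-block live-ins arrive as block arguments will miss the direct use of `%c` in `^bb1`.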
tancheng left a comment
Thanks for the PR, let's wait for GitHub Actions to pass before merging :-)
In this PR:

- `memref` and `builtin` dialects
- `neura.load_indexed`/`neura.store_indexed` operations for `memref`-like memory access

Now we can write code in C++ and lower it to the `affine` dialect using Polygeist. More high-level transforms can then be implemented at the `affine` level, like polyhedral-based optimization and loop unroll/fusion/fission/interchange/tiling/vectorization, etc.
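As an illustration of the kind of input this enables, a plain C++ affine loop nest like the one below is what Polygeist can raise to the `affine` dialect, after which `neura.load_indexed`/`neura.store_indexed` can express its memory accesses. The kernel is a hypothetical example of ours, not code from this PR:

```cpp
#include <cassert>

// Hypothetical kernel: a simple affine loop whose array subscripts are
// affine functions of the loop induction variable, so Polygeist can raise
// it to affine.for with affine.load/affine.store.
void vec_add(int n, const float *a, const float *b, float *c) {
  for (int i = 0; i < n; ++i)
    c[i] = a[i] + b[i];
}
```

Each `a[i]`/`b[i]` read and `c[i]` write becomes an indexed memory access on a `memref`, which is exactly the shape the new `neura` load/store ops are meant to model.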