HLO Canonicalizations Todo list #54
Is this beneficial? Pad will operate on a larger buffer as a result.
I expect this to have little practical effect: reshape should be a metadata operation; I'm not sure it even affects the generated code.
Yeah, I think the question right now is that a bunch of these sit unnecessarily in the way of other ops (which hopefully do have practical impact when canonicalized together), and hopefully these small rewrites would enable those (while in isolation they're indeed probably not perf-improving individually). I decided to start by writing up the immediate ones I saw in the mlir file rather than the bigger ones to do down the line (like batching dots), since I had mini IRs for them.
I started doing pad/mul, as that one is clearly beneficial: it makes the mul smaller.
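For concreteness, here is a minimal sketch of that pad/mul rewrite as I understand it (StableHLO syntax; the shapes and the zero padding value are illustrative, not taken from the real IR). Since the padded lanes are zero and 0 * y = 0, the multiply can run on the smaller unpadded tensor and the result can be re-padded:

```mlir
// %x : tensor<4xf32> and %y : tensor<6xf32> are given values.
%zero = stablehlo.constant dense<0.0> : tensor<f32>

// Before: the multiply runs over the full padded 6-element buffer.
%p = stablehlo.pad %x, %zero, low = [1], high = [1], interior = [0]
       : (tensor<4xf32>, tensor<f32>) -> tensor<6xf32>
%m = stablehlo.multiply %p, %y : tensor<6xf32>

// After: multiply only the 4 valid elements, then pad the product.
// The padded lanes were 0, and 0 * y == 0, so the results agree.
%ys = stablehlo.slice %y [1:5] : (tensor<6xf32>) -> tensor<4xf32>
%m2 = stablehlo.multiply %x, %ys : tensor<4xf32>
%r  = stablehlo.pad %m2, %zero, low = [1], high = [1], interior = [0]
       : (tensor<4xf32>, tensor<f32>) -> tensor<6xf32>
```

(This only holds because the padding value is the multiplicative annihilator; a nonzero pad value would need the corresponding constant folded into the padded region.)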
So in the code I saw, the actual pattern was mul(reshape(pad(...))), which I split out into the reshape(pad) and mul(reshape) pieces. I think we'll also need to get that composition to work end-to-end (I can start on it, assuming you haven't). Ironically, you can see I was lazy and didn't even rename the result ops (e.g. reshape(pad) was 175/176 and mul(pad) was 175/177; in the real code it was using the reshape result).
I hypothesize so. In principle a pad of a broadcast should be fusable (a pad is just doing out[idx] = in_bounds ? in[idx2] : pad_value, and in[idx2] being a broadcast may get simplified). I suppose the reverse also holds, but if they don't get fused, doing the memset(x, 0) for the bigger buffer all at once seems wise. Also, as above, moving the pads out may help downstream opts.
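One concrete special case of that, purely as an illustration (the shapes and the scalar are made up): when a broadcast of a scalar feeds a pad whose padding value is that same scalar, the pair collapses into a single larger broadcast:

```mlir
// %c : tensor<f32> is some given scalar.
// Before: broadcast the scalar, then pad with the same scalar.
%b = stablehlo.broadcast_in_dim %c, dims = [] : (tensor<f32>) -> tensor<4xf32>
%p = stablehlo.pad %b, %c, low = [1], high = [1], interior = [0]
       : (tensor<4xf32>, tensor<f32>) -> tensor<6xf32>

// After: every element is %c anyway, so one broadcast to the padded
// shape is equivalent (interior padding would also be %c, so it's covered).
%r = stablehlo.broadcast_in_dim %c, dims = [] : (tensor<f32>) -> tensor<6xf32>
```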
|
actually jk the outer one is hard
Do you have an example of this? There are a bunch of cases that are already supported: https://github.com/EnzymeAD/Enzyme-JAX/blob/main/test/lit_tests/broadcastreduce.mlir
|
Unfortunately not presently, but I'll post when it comes up again. |
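For anyone else reading the thread, here is a sketch of the flavor of pattern those lit tests exercise (mine, not copied from the file; shapes and values are hypothetical): an add-reduce over a dimension that a broadcast introduced sums the same value dim-size times, so it can become a multiply by that size:

```mlir
// %x : tensor<4xf32> is a given value.
%zero = stablehlo.constant dense<0.0> : tensor<f32>

// Before: broadcast 4 -> 8x4, then sum-reduce away the new dimension 0.
%b = stablehlo.broadcast_in_dim %x, dims = [1] : (tensor<4xf32>) -> tensor<8x4xf32>
%r = stablehlo.reduce(%b init: %zero) applies stablehlo.add across dimensions = [0]
       : (tensor<8x4xf32>, tensor<f32>) -> tensor<4xf32>

// After: each output element is x[i] added 8 times, i.e. 8 * x[i].
%c8 = stablehlo.constant dense<8.0> : tensor<4xf32>
%r2 = stablehlo.multiply %x, %c8 : tensor<4xf32>
```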
|
|
|
Generalize pad propagations to work with an interstitial reshape
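A sketch of what the interstitial case might look like (hypothetical shapes; here the reshape only inserts a unit dim, so each padded dim maps one-to-one through it): the reshape can hop over the pad, after which the existing pad propagation patterns see the pad directly:

```mlir
// %x : tensor<4xf32> is a given value.
%zero = stablehlo.constant dense<0.0> : tensor<f32>

// Before: a reshape sits between the pad and its consumer.
%p = stablehlo.pad %x, %zero, low = [1], high = [1], interior = [0]
       : (tensor<4xf32>, tensor<f32>) -> tensor<6xf32>
%r = stablehlo.reshape %p : (tensor<6xf32>) -> tensor<1x6xf32>

// After: reshape the small operand first, then pad. The new unit dim
// gets zero low/high/interior padding, so the result is identical.
%x2 = stablehlo.reshape %x : (tensor<4xf32>) -> tensor<1x4xf32>
%p2 = stablehlo.pad %x2, %zero, low = [0, 1], high = [0, 1], interior = [0, 0]
       : (tensor<1x4xf32>, tensor<f32>) -> tensor<1x6xf32>
```

Reshapes that shuffle elements across padded dims (e.g. merging a padded dim with another) wouldn't commute this way, so the generalization presumably needs a legality check on the dim mapping.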
|
|
To track which of these we consider worth doing, are doing, or still need to do.
cc @ivanradanov @ftynse