
Question about "krnl.global"() #3036

Open
yuucyf opened this issue Dec 23, 2024 · 2 comments
Comments


yuucyf commented Dec 23, 2024

I used onnx-mlir with the --EmitMLIR option to lower a model file to MLIR built-in dialect ops. However, the generated .mlir file still contains operations such as "krnl.global"(). Isn't that a krnl op rather than a built-in MLIR dialect op? Shouldn't this be considered a bug?

For example:
%0 = "krnl.global"() {name = "constant_0", shape = [], value = dense<8.000000e+00> : tensor<f32>} : () -> memref<f32>

AlexandreEichenberger (Collaborator) commented Dec 23, 2024

No, it is not a bug; as a matter of fact, we deliberately lower a few krnl operations late. If you wanted a purely built-in MLIR dialect output, you would have to add a pass that lowers these remaining krnl ops to their MLIR equivalents.

You can search for the rules that lower these specific ops and add a pass. We would have no issue if you wanted to upstream that code, and perhaps create a new target (e.g. -EmitOnlyMLIR) or an option (e.g. -only-mlir) that would activate that pass when the -EmitMLIR target is on.

Hope this helps. We can probably assist with specific issues you run into if you start this effort and want to upstream it. I suspect other folks would be interested in it too, and we are happy to help more people get on board with onnx-mlir.
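For reference, such a pass would mostly perform rewrites like the one sketched below. This is a hand-written illustration of one plausible target (the upstream memref dialect's global/get_global pair), not the output of any existing onnx-mlir pass; the f32 element type is assumed from the example above.

```mlir
// Before: onnx-mlir's late-lowered krnl op holding a scalar constant.
%0 = "krnl.global"() {name = "constant_0", shape = [], value = dense<8.000000e+00> : tensor<f32>} : () -> memref<f32>

// After: a plausible lowering using only upstream MLIR dialects.
// memref.global declares the constant buffer at module scope;
// memref.get_global retrieves it where the value is used.
memref.global "private" constant @constant_0 : memref<f32> = dense<8.000000e+00>
%0 = memref.get_global @constant_0 : memref<f32>
```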

yuucyf (Author) commented Dec 25, 2024


However, the -EmitMLIR option name could easily lead others to believe it emits only built-in MLIR dialects. Also, I'm not yet sure how many krnl operations remain in the MLIR generated by the -EmitMLIR option.
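One quick way to answer the "how many krnl ops remain" question is to tally op names in the emitted .mlir file. The sketch below fabricates a tiny sample file so the pipeline is demonstrable without running onnx-mlir itself; in practice you would point the grep at your own --EmitMLIR output.

```shell
# Create a small stand-in for an --EmitMLIR output file (hypothetical contents).
cat > sample.mlir <<'EOF'
%0 = "krnl.global"() {name = "constant_0"} : () -> memref<f32>
%1 = "krnl.global"() {name = "constant_1"} : () -> memref<2xf32>
%2 = memref.alloc() : memref<2xf32>
EOF

# Extract each quoted krnl.* op name and count occurrences per op.
grep -o '"krnl\.[a-z_]*"' sample.mlir | sort | uniq -c
```

On the sample above this reports two occurrences of "krnl.global"; run the same grep on a real output file to see which krnl ops survive the -EmitMLIR pipeline.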
