Question about "krnl.global"() #3036
Original question:
I used onnx-mlir with the --EmitMLIR option to generate MLIR built-in dialect ops from a model file. However, the generated .mlir file still contains operations such as "krnl.global"(). Isn't this a krnl op rather than a built-in MLIR dialect op? Shouldn't this be considered a bug?
e.g.:
%0 = "krnl.global"() {name = "constant_0", shape = [], value = dense<8.000000e+00> : tensor<f32>} : () -> memref<f32>

Maintainer reply:
No, it is not a bug; as a matter of fact, we lower a few krnl operations late. If you wanted a pure MLIR dialect, you would have to add a pass that lowers these krnl ops to their MLIR equivalents. You can search for the rules that lower these specific ops and add a pass. We would have no issue if you wanted to upstream that code, and maybe create a new target (e.g. …). Hope this helps. We can probably help you with specific issues you run into if you start this effort and want to upstream it. I suspect other folks might be interested in that too, and we are happy to help more folks get on board with onnx-mlir.

Follow-up:
However, the -EmitMLIR option might easily lead others to believe that it generates pure MLIR dialects. Also, I'm not yet sure how many krnl operations remain in the MLIR generated by the -EmitMLIR option.
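One way to gauge how many krnl operations remain in -EmitMLIR output is to scan the emitted .mlir text for op names in the krnl dialect; ops printed in MLIR's generic form appear as "dialect.op". The sketch below is illustrative only: the embedded MLIR fragment is a hypothetical sample, not actual onnx-mlir output (real output would come from running onnx-mlir --EmitMLIR on a model, which writes a .mlir file next to it).

```python
import re
from collections import Counter

# Hypothetical -EmitMLIR output fragment (illustrative, not real tool output).
mlir_text = '''
%0 = "krnl.global"() {name = "constant_0", shape = [], value = dense<8.000000e+00> : tensor<f32>} : () -> memref<f32>
%1 = memref.alloc() : memref<10xf32>
"krnl.memcpy"(%1, %0, %c10) : (memref<10xf32>, memref<f32>, i64) -> ()
'''

# Ops printed in generic form look like "krnl.global"; count those in the
# krnl dialect to see what has not yet been lowered to built-in dialects.
krnl_ops = Counter(re.findall(r'"(krnl\.[A-Za-z_]+)"', mlir_text))
for name, count in sorted(krnl_ops.items()):
    print(f"{name}: {count}")
```

This only catches ops printed in generic (quoted) form; ops with a custom assembly format would need a second pattern, but late-lowered krnl ops such as krnl.global typically print in the generic form shown above.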