How to define a custom operator for a SHAP-explainable decision tree model #21776
tadhgpearson started this conversation in Ideas / Feature Requests
Thanks for creating such a great, portable ML runtime. We're making heavy use of it in our product, but we'd love to offer our customers better explainability of the model output.
We have a DecisionTreeModel that we export from SparkML and run at inference time with ONNX. We'd like to add explainability to this model. Given #1176, it seems that code supporting SHAP explainability of models should not be added to the core ONNX Runtime.
Would the most suitable way to do this be to define a custom operator, for example an ExplainableDecisionTreeModel operator?
If we did this, I guess that when we convert this model with ONNXMLTools, the ExplainableDecisionTreeModel operator would carry the same attributes as the existing DecisionTreeModel operator, but with a different output shape to incorporate the SHAP explanation vector. At runtime, it would then be executed by custom operator code that explains the model as well as generating the prediction value.

Looking at tree_ensemble_common.h, explaining the tree seems to require access to the tree attributes held in OpKernelInfo. Looking at the custom operators documentation, it's not clear how a custom operator can get access to this. If this is the suggested approach to take, how can my custom operator access the OpKernelInfo?