- Create an OrtCustomOpDomain with the domain name used by the custom ops
- Create an OrtCustomOp structure for each op and add it to the OrtCustomOpDomain with OrtCustomOpDomain_Add
- Call OrtAddCustomOpDomain to add the custom domain of ops to the session options
See this for an example called MyCustomOp that uses the C++ helper API (onnxruntime_cxx_api.h).
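For orientation, here is a minimal sketch of those three steps using the C++ helper classes from onnxruntime_cxx_api.h (Ort::CustomOpBase, Ort::CustomOpDomain, Ort::SessionOptions). The op name `Foo`, the domain name `test`, the element-wise add kernel, and the model path are placeholders rather than anything taken from the referenced example, and the helper signatures (Ort::CustomOpApi in particular) have changed across ONNX Runtime versions, so check the header shipped with your build:

```cpp
#include <vector>
#include "onnxruntime_cxx_api.h"

// Placeholder kernel: element-wise add of two float tensors.
struct MyCustomKernel {
  MyCustomKernel(Ort::CustomOpApi ort, const OrtKernelInfo* /*info*/) : ort_(ort) {}

  void Compute(OrtKernelContext* context) {
    const OrtValue* input_X = ort_.KernelContext_GetInput(context, 0);
    const OrtValue* input_Y = ort_.KernelContext_GetInput(context, 1);
    const float* X = ort_.GetTensorData<float>(input_X);
    const float* Y = ort_.GetTensorData<float>(input_Y);

    // Output has the same shape as the first input.
    OrtTensorTypeAndShapeInfo* info = ort_.GetTensorTypeAndShape(input_X);
    std::vector<int64_t> shape = ort_.GetTensorShape(info);
    size_t size = ort_.GetTensorShapeElementCount(info);
    ort_.ReleaseTensorTypeAndShapeInfo(info);

    OrtValue* output = ort_.KernelContext_GetOutput(context, 0, shape.data(), shape.size());
    float* out = ort_.GetTensorMutableData<float>(output);
    for (size_t i = 0; i < size; ++i) out[i] = X[i] + Y[i];
  }

 private:
  Ort::CustomOpApi ort_;
};

// OrtCustomOp structure describing the op's name, inputs, and outputs.
struct MyCustomOp : Ort::CustomOpBase<MyCustomOp, MyCustomKernel> {
  void* CreateKernel(Ort::CustomOpApi api, const OrtKernelInfo* info) const { return new MyCustomKernel(api, info); }
  const char* GetName() const { return "Foo"; }
  size_t GetInputTypeCount() const { return 2; }
  ONNXTensorElementDataType GetInputType(size_t /*index*/) const { return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; }
  size_t GetOutputTypeCount() const { return 1; }
  ONNXTensorElementDataType GetOutputType(size_t /*index*/) const { return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT; }
};

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "custom_op_sample");

  MyCustomOp custom_op;
  Ort::CustomOpDomain custom_op_domain("test");  // 1. create the domain used by the custom ops
  custom_op_domain.Add(&custom_op);              // 2. add each OrtCustomOp to the domain
  Ort::SessionOptions session_options;
  session_options.Add(custom_op_domain);         // 3. add the custom op domain to the session options

  Ort::Session session(env, "model_with_custom_op.onnx", session_options);  // placeholder model path
  return 0;
}
```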
You can also compile the custom ops into a shared library and use that to run a model via the C++ API. The same test file contains an example.
The source code for a sample custom op shared library containing two custom kernels is here.
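As a rough sketch of the shared-library route, the snippet below attaches a pre-built custom op library to the session options through the C API's RegisterCustomOpsLibrary entry point before the session is created. The library path and model path are placeholders, and the exact wrappers available differ between ONNX Runtime versions, so check the API headers for your build:

```cpp
#include "onnxruntime_cxx_api.h"

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "custom_op_library_sample");
  Ort::SessionOptions session_options;

  // Load the shared library that registers its custom op domains and kernels.
  // The returned handle is owned by the caller and should be released
  // (dlclose / FreeLibrary) only after all sessions using it are destroyed.
  void* library_handle = nullptr;
  Ort::ThrowOnError(Ort::GetApi().RegisterCustomOpsLibrary(
      static_cast<OrtSessionOptions*>(session_options),
      "libcustom_op_library.so",  // placeholder path to the built library
      &library_handle));

  Ort::Session session(env, "model_with_custom_op.onnx", session_options);  // placeholder model path
  // ... run the session ...
  return 0;
}
```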
See this for an example called testRegisterCustomOpsLibrary that uses the Python API
to register a shared library that contains custom op kernels.
Currently, the only Execution Providers (EPs) that support custom ops registered via this approach are the CUDA and CPU EPs.
- Implement your kernel and schema (if required) using the OpKernel and OpSchema APIs (headers are in the include folder).
- Create a CustomRegistry object and register your kernel and schema with this registry.
- Register the custom registry with ONNX Runtime using the RegisterCustomRegistry API (see the sketch below).
See this for an example.
This approach is mostly meant for ops that are in the process of being proposed to ONNX. This way, you don't have to wait for approval from the ONNX team if the op is required in production today.
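To make that flow concrete, here is a rough sketch against the internal C++ API. It assumes the older CustomRegistry::RegisterOpSet / RegisterCustomKernel and InferenceSession::RegisterCustomRegistry signatures; these internal interfaces have changed over time, so verify them against the headers in your source tree. The `Foo` op, the `test` domain, and the copy kernel are placeholders:

```cpp
#include "core/common/common.h"
#include "core/framework/customregistry.h"
#include "core/framework/op_kernel.h"
#include "core/graph/constants.h"
#include "core/session/inference_session.h"
#include "onnx/defs/schema.h"

namespace onnxruntime {

// Placeholder kernel: copies its single float input to its output.
class FooKernel : public OpKernel {
 public:
  explicit FooKernel(const OpKernelInfo& info) : OpKernel(info) {}

  Status Compute(OpKernelContext* ctx) const override {
    const Tensor* X = ctx->Input<Tensor>(0);
    Tensor* Y = ctx->Output(0, X->Shape());
    const float* x = X->Data<float>();
    float* y = Y->MutableData<float>();
    for (int64_t i = 0; i < X->Shape().Size(); ++i) y[i] = x[i];
    return Status::OK();
  }
};

// Build a schema and kernel for a hypothetical "Foo" op in the "test" domain,
// add both to a CustomRegistry, and hand the registry to an existing session.
Status RegisterFooOp(InferenceSession& session) {
  auto registry = std::make_shared<CustomRegistry>();

  // Schema (only needed if the op is not already defined in ONNX).
  ONNX_NAMESPACE::OpSchema schema;
  schema.SetName("Foo")
      .SetDomain("test")
      .SinceVersion(1)
      .Input(0, "X", "input tensor", "T")
      .Output(0, "Y", "output tensor", "T")
      .TypeConstraint("T", {"tensor(float)"}, "float tensors only");
  std::vector<ONNX_NAMESPACE::OpSchema> schemas{schema};
  ORT_RETURN_IF_ERROR(registry->RegisterOpSet(schemas, "test", 1, 2));

  // Kernel definition and creation function for the CPU EP.
  KernelDefBuilder def_builder;
  def_builder.SetName("Foo")
      .SetDomain("test")
      .SinceVersion(1)
      .Provider(kCpuExecutionProvider)
      .TypeConstraint("T", DataTypeImpl::GetTensorType<float>());
  ORT_RETURN_IF_ERROR(registry->RegisterCustomKernel(
      def_builder, [](const OpKernelInfo& info) -> OpKernel* { return new FooKernel(info); }));

  // Make the registry visible to the session (call before loading the model).
  return session.RegisterCustomRegistry(registry);
}

}  // namespace onnxruntime
```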