[WIP] Out-Tree EP feature #21450

Draft: wants to merge 79 commits into base: main
Showing changes from 13 of 79 commits.

Commits
0e6a80c
opaque pointer for graph
jslhcl Jul 17, 2024
c30a639
ORT C API RegisterOrtExecutionProviderLibrary work
jslhcl Jul 23, 2024
7bfe57e
ORT C-API SessionOptionsAppendOrtExecutionProvider work
jslhcl Jul 23, 2024
8e7d28d
Test Relu with compile based EP, build work, runtime error of loading…
jslhcl Jul 26, 2024
808bfc3
prototype works with hardcode node_compute_info's index in ExecutionP…
jslhcl Jul 29, 2024
49e396c
prototype works without hardcode
jslhcl Jul 29, 2024
e790105
fix comments for Compile function
jslhcl Jul 31, 2024
92f529d
add provider_factory_adapter.h
jslhcl Aug 1, 2024
3d83ed1
fix crash after introducing kernel based EP
jslhcl Aug 5, 2024
e29499a
kernel based EP work with type constraint check commented out
jslhcl Aug 6, 2024
f3678c4
add kernel type constraints from out tree EP
jslhcl Aug 7, 2024
ac5ae0a
add API ReleaseOrtTypeConstraints
jslhcl Aug 7, 2024
0cc78e8
introduce qnn ep
jslhcl Aug 12, 2024
740a687
more graph/node C API
jslhcl Aug 13, 2024
dad6397
stream support
jslhcl Aug 15, 2024
94e9cf7
support data transfer and OrtDevice in out tree EP API
jslhcl Aug 16, 2024
8698517
change compile return type from void to OrtStatusPtr
jslhcl Aug 20, 2024
3d5d2bf
add TensorRT dependency in tensorRT EP's CMakeLists.txt
jslhcl Aug 20, 2024
1f10c28
Add extra parameters in OrtExecutionProvider to avoid capture variabl…
jslhcl Aug 22, 2024
5e46d0f
add OrtGraph_SerializeToArray
jslhcl Aug 23, 2024
85c168d
finish Compile function
jslhcl Aug 24, 2024
7bdb36a
add override function implementation and cudart dependency for tensorrt
jslhcl Aug 26, 2024
7d915b7
add outOfTree tensorrt ep.1 (#21830)
guyang3532 Aug 27, 2024
4aea94b
GetSupportedList
jslhcl Aug 28, 2024
865a17f
GetSubGraph and TensorrtExecutionProviderInfo
jslhcl Aug 29, 2024
2811541
Add simple CUDA allocators for TRT EP (#21901)
chilo-ms Aug 29, 2024
c97b19f
add constructor for tensorrt ep and refine GetCapability (#21914)
guyang3532 Aug 29, 2024
36f97b5
relu can work on out tree TRT now
jslhcl Aug 29, 2024
2fc7aac
rebuild graph proto from scratch with the information needed from gra…
jslhcl Aug 31, 2024
4ad6993
complete the GetCapability (#21956)
guyang3532 Sep 2, 2024
53c736f
Chi's fix and reorder ep for registering shared resource
jslhcl Sep 4, 2024
5fcb972
complete the GetSubGraph (#21998)
guyang3532 Sep 5, 2024
c3bb437
run resnet18v1_7, crash on GetSubGraph()
jslhcl Sep 6, 2024
d1c657c
Merge branch 'leca/outOfTreeEP' of https://github.com/microsoft/onnxr…
jslhcl Sep 6, 2024
3efac97
resnet18-v1-7 works for TRT EP, with next_nodes_list assignment comme…
jslhcl Sep 6, 2024
766fec9
test cases for decoder and fast_rcnn, delete dynamic_cast in ShouldPo…
jslhcl Sep 9, 2024
ea2465c
add tensorrt home in CMakeLists, add trt and CUDA ep for test, change…
jslhcl Sep 11, 2024
76a9305
[WIP, DONT REVIEW] add initializer to graph proto (#22085)
jslhcl Sep 18, 2024
330cdb6
use parameter ExecutionOrder::PRIORITY_BASED for GraphViewerToProto()…
jslhcl Sep 19, 2024
6fd50f0
can create session with out tree trt ep now. Error:Name:'tensorrtEp_T…
jslhcl Sep 23, 2024
681585f
make trt_node_name_with_precision_ from string to map, to capture the…
jslhcl Sep 23, 2024
7db20cb
fix redundant inputs and outputs in GetSubgraph (#22201)
guyang3532 Sep 24, 2024
ff782e0
RunTinyYolov3()
jslhcl Sep 25, 2024
1d7b2df
fix bugs for run tinyYolo (#22233)
guyang3532 Sep 26, 2024
a407944
sample code to separate graph C API to different files
jslhcl Sep 26, 2024
f871b25
new test control_flow, error: ErrorMessage:Failed to find kernel for …
jslhcl Oct 2, 2024
e84f00c
control flow model works
jslhcl Oct 3, 2024
5b2de22
API refactor
jslhcl Oct 7, 2024
b1f8e2a
Python API
jslhcl Oct 14, 2024
7acaaab
fix memory leak (#22444)
guyang3532 Oct 15, 2024
d150a03
refactor all functions in onnxruntime_c_api_ep with status as return …
guyang3532 Oct 17, 2024
da5b6eb
resolve comments
jslhcl Oct 18, 2024
d280e59
add documents for all functions in c_api_ep (#22502)
guyang3532 Oct 18, 2024
cbe98e7
fix comments
jslhcl Oct 19, 2024
1529059
fix memory leak (#22522)
guyang3532 Oct 21, 2024
fa549f8
add mutex to plugin trt ep (#22581)
guyang3532 Oct 24, 2024
a28ad38
use std::mutex instead of OrtMutex and fix build error in Windows
jslhcl Oct 24, 2024
aa49805
openvino
jslhcl Oct 26, 2024
bc65613
openvino, GetCapability almost ready
jslhcl Oct 31, 2024
a1a3eea
openvino GetCapacity() is done. UnregisterPluginExecutionProviderLibrary
jslhcl Nov 1, 2024
0fe5f01
refine compile of openvino ep (#22689)
guyang3532 Nov 1, 2024
6bae1b9
Add utility files (#22650)
chilo-ms Nov 1, 2024
ab75d98
OpenVino, compile() is done
jslhcl Nov 2, 2024
c5510f2
Merge branch 'leca/outOfTreeEP' of https://github.com/microsoft/onnxr…
jslhcl Nov 2, 2024
08e3f20
Add unit test for TRT EP plugin (#22548)
chilo-ms Nov 2, 2024
b0b3123
add test for openvino plugin ep and fix bugs (#22734)
guyang3532 Nov 5, 2024
9dbb0b1
add missing mutex to plugin trt ep
chilo-ms Nov 6, 2024
5a59803
merge code
jslhcl Nov 6, 2024
999e7fd
Merge branch 'leca/outOfTreeEP' of https://github.com/microsoft/onnxr…
jslhcl Nov 6, 2024
084f735
fix bugs (#22744)
guyang3532 Nov 6, 2024
2b1cfdf
relu and resnet works in OpenVINO plugin
jslhcl Nov 7, 2024
e337d8f
Add OrtGraphApis::OrtNode_GetAttributeStrWithSize to handle case wher…
chilo-ms Nov 13, 2024
afe92e1
Make EP plugin be able to create and update EP Context graph (#22740)
chilo-ms Nov 13, 2024
63f8774
[TensorRT EP Plugin] use new graph api for ep context model generation
chilo-ms Nov 14, 2024
bf359a1
use cuda's preferred allocator for plugin trt and builtin cuda combin…
jslhcl Nov 16, 2024
c267ea5
[TensorRT EP Plugin] Add cuda::Impl_Cast (#22908)
chilo-ms Nov 20, 2024
72afdc4
fix build/compiler error for nvcc 11.8
chilo-ms Nov 22, 2024
6822206
Do not expose OrtGraph
jslhcl Dec 3, 2024
c8ddc73
initial commit for Graph C++ API
jslhcl Dec 3, 2024
15 changes: 15 additions & 0 deletions include/onnxruntime/core/framework/ort_type_constraints.h
@@ -0,0 +1,15 @@
// Copyright (c) Microsoft Corporation. All rights reserved.

// Licensed under the MIT License.

#pragma once
#include "core/session/onnxruntime_c_api.h"
#include <unordered_map>

#include <string>

#include <set>


struct OrtTypeConstraints {
bool AddTypeConstraint(const char* type_symbol, ONNXTensorElementDataType type);
inline const std::unordered_map<std::string, std::set<ONNXTensorElementDataType>>& GetTypeConstraints() const { return type_constraints_; };

private:

std::unordered_map<std::string, std::set<ONNXTensorElementDataType>> type_constraints_;
};
5 changes: 5 additions & 0 deletions include/onnxruntime/core/session/environment.h
@@ -88,6 +88,10 @@
*/
Status CreateAndRegisterAllocatorV2(const std::string& provider_type, const OrtMemoryInfo& mem_info, const std::unordered_map<std::string, std::string>& options, const OrtArenaCfg* arena_cfg = nullptr);

void InsertCustomEp(const char* ep_name, OrtExecutionProviderFactory* ep_factory);
skottmckay (Contributor), Oct 11, 2024:
Given SessionOptionsAppendOrtExecutionProvider allows the user to register the instance of the EP, when do we need this factory? #Resolved

jslhcl (Author):
There is another C API, RegisterOrtExecutionProviderLibrary, which loads the shared library, creates the plugin EP factory, and saves it in the Environment. Please see the implementation of RegisterOrtExecutionProviderLibrary and its usage in test.cpp as examples.

jslhcl (Author):
Changed to a new name. Hope it is clearer now.

OrtExecutionProviderFactory* GetOrtExecutionProviderFactory(const std::string& ep_name);

private:
ORT_DISALLOW_COPY_ASSIGNMENT_AND_MOVE(Environment);
Status Initialize(std::unique_ptr<logging::LoggingManager> logging_manager,
@@ -99,5 +103,6 @@
std::unique_ptr<onnxruntime::concurrency::ThreadPool> inter_op_thread_pool_;
bool create_global_thread_pools_{false};
std::vector<AllocatorPtr> shared_allocators_;
std::unordered_map<std::string, std::unique_ptr<OrtExecutionProviderFactory>> custom_ep_factories_;

};
} // namespace onnxruntime
99 changes: 98 additions & 1 deletion include/onnxruntime/core/session/onnxruntime_c_api.h
@@ -304,6 +304,13 @@
ORT_RUNTIME_CLASS(OpAttr);
ORT_RUNTIME_CLASS(Logger);
ORT_RUNTIME_CLASS(ShapeInferContext);
ORT_RUNTIME_CLASS(ExecutionProvider);
ORT_RUNTIME_CLASS(ExecutionProviderFactory);
ORT_RUNTIME_CLASS(Node);
ORT_RUNTIME_CLASS(GraphViewer);
ORT_RUNTIME_CLASS(KernelRegistry);
ORT_RUNTIME_CLASS(TypeConstraints);
ORT_RUNTIME_CLASS(NodeUnit);

#ifdef _WIN32
typedef _Return_type_success_(return == 0) OrtStatus* OrtStatusPtr;
@@ -689,6 +696,67 @@
*/
ORT_EXPORT const OrtApiBase* ORT_API_CALL OrtGetApiBase(void) NO_EXCEPTION;

typedef struct OrtMetaDef {
const char* name;
const char* domain;
int since_version;

const char** inputs;
size_t input_len;
const char** outputs;
size_t output_len;
const char** constant_initializers;
size_t initializer_len;

const char* doc_string;
} OrtMetaDef;

typedef struct OrtIndexedSubGraph {
OrtMetaDef* meta_def; // TODO(leca): how to define a nested structure pointer?

adrianlizarraga (Contributor), Jul 30, 2024:
Does this have to be a pointer to an OrtMetaDef? It may be simpler if this meta_def is contained by value instead. #Resolved

jslhcl (Author):
It looks like we will check whether the pointer is null to distinguish between single-node mode and fused-node mode (see the base class IExecutionProvider::GetCapability(), which does not set this pointer, and TryAssignSingleNode(), which checks it).
size_t* node_index;
size_t node_index_len;
} OrtIndexedSubGraph;

typedef struct OrtComputeContext {
void*(ORT_API_CALL* AllocateFunc)(void*, size_t, size_t);
void(ORT_API_CALL* DestroyFunc)(void*, void*);
void* allocator_handle;
const char* node_name;
} OrtComputeContext;

typedef struct OrtNodeComputeInfo {
int(ORT_API_CALL* CreateFunctionStateFunc)(OrtComputeContext*, void**);

OrtStatusPtr(ORT_API_CALL* ComputeFunc)(void*, const OrtApi*, OrtKernelContext*);
void(ORT_API_CALL* DestroyFunctionStateFunc)(void*);
} OrtNodeComputeInfo;

typedef struct OrtExecutionProvider {
#ifdef __cplusplus
OrtExecutionProvider() : GetCapability{nullptr}, Compile{nullptr}, RegisterKernels{nullptr} {}
#endif
void(ORT_API_CALL* GetCapability)(const OrtExecutionProvider* this_, const OrtGraphViewer* graph, size_t* cnt, OrtIndexedSubGraph***);
void(ORT_API_CALL* Compile)(OrtExecutionProvider* this_, const OrtGraphViewer** graph, const OrtNode** node, size_t cnt, OrtNodeComputeInfo** node_compute_info);
void(ORT_API_CALL* RegisterKernels)(OrtKernelRegistry* kernel_registry);
const char* type;
} OrtExecutionProvider;

typedef struct OrtExecutionProviderFactory {
OrtExecutionProvider*(ORT_API_CALL* CreateExecutionProvider)(OrtExecutionProviderFactory* this_, const char* const* ep_option_keys, const char* const* ep_option_values, size_t option_size);
} OrtExecutionProviderFactory;

typedef struct OrtNodeUnit {
enum Type {
SingleNode,
QDQGroup,
} type;
OrtNode** dq_nodes;
size_t dq_nodes_len;
OrtNode** q_nodes;
size_t q_nodes_len;
OrtNode* target_node;
size_t input_edge_count;
} OrtNodeUnit;
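Taken together, these structs define the contract a plugin EP shared library implements: a factory that creates an OrtExecutionProvider whose function pointers ORT's adapter calls. Below is a rough, self-contained sketch of a do-nothing plugin. The struct definitions are local stand-ins mirroring this header (ORT_API_CALL is dropped for brevity), and the exported symbol name `RegisterCustomEp` is an illustrative assumption, not taken from this diff.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Local stand-ins mirroring the structs this PR adds to onnxruntime_c_api.h,
// so the sketch compiles on its own; a real plugin would include the ORT header.
typedef struct OrtGraphViewer OrtGraphViewer;
typedef struct OrtNode OrtNode;
typedef struct OrtKernelRegistry OrtKernelRegistry;
typedef struct OrtIndexedSubGraph OrtIndexedSubGraph;
typedef struct OrtNodeComputeInfo OrtNodeComputeInfo;

struct OrtExecutionProvider {
  void (*GetCapability)(const OrtExecutionProvider* this_, const OrtGraphViewer* graph,
                        size_t* cnt, OrtIndexedSubGraph*** indexed_subgraphs);
  void (*Compile)(OrtExecutionProvider* this_, const OrtGraphViewer** graphs,
                  const OrtNode** nodes, size_t cnt, OrtNodeComputeInfo** node_compute_info);
  void (*RegisterKernels)(OrtKernelRegistry* kernel_registry);
  const char* type;
};

struct OrtExecutionProviderFactory {
  OrtExecutionProvider* (*CreateExecutionProvider)(OrtExecutionProviderFactory* this_,
                                                   const char* const* ep_option_keys,
                                                   const char* const* ep_option_values,
                                                   size_t option_size);
};

// A hypothetical do-nothing EP: claiming zero subgraphs makes the adapter fall
// back to IExecutionProvider::GetCapability (default per-kernel assignment).
struct MyEp : OrtExecutionProvider {
  MyEp() {
    type = "myEp";
    GetCapability = [](const OrtExecutionProvider*, const OrtGraphViewer*,
                       size_t* cnt, OrtIndexedSubGraph***) { *cnt = 0; };
    Compile = nullptr;
    RegisterKernels = nullptr;  // the adapter only creates a KernelRegistry when non-null
  }
};

struct MyEpFactory : OrtExecutionProviderFactory {
  MyEpFactory() {
    CreateExecutionProvider = [](OrtExecutionProviderFactory*, const char* const*,
                                 const char* const*, size_t) -> OrtExecutionProvider* {
      return new MyEp();  // illustrative; ownership rules are still being settled in this PR
    };
  }
};

// Illustrative export: the symbol name ORT resolves after loading the library
// is an assumption here, not taken from this diff.
extern "C" OrtExecutionProviderFactory* RegisterCustomEp() {
  static MyEpFactory factory;
  return &factory;
}
```

A compile-based EP would additionally fill `OrtIndexedSubGraph` entries in GetCapability and return `OrtNodeComputeInfo` entries from Compile, as the TensorRT and OpenVINO plugins in this PR do.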

/** \brief Thread work loop function
*
* Onnxruntime will provide the working loop on custom thread creation
@@ -4665,7 +4733,36 @@
_In_reads_(num_external_initializer_files) char* const* external_initializer_file_buffer_array,
_In_reads_(num_external_initializer_files) const size_t* external_initializer_file_lengths,
size_t num_external_initializer_files);
};

ORT_API2_STATUS(RegisterOrtExecutionProviderLibrary, _In_ const ORTCHAR_T* lib_path, _In_ OrtEnv* env, _In_ const char* ep_name);

ORT_API2_STATUS(SessionOptionsAppendOrtExecutionProvider, _In_ OrtSessionOptions* options, _In_ const char* ep_name, _In_ OrtEnv* env,
_In_reads_(num_keys) const char* const* provider_options_keys, _In_reads_(num_keys) const char* const* provider_options_values, _In_ size_t num_keys);

ORT_API2_STATUS(OrtGraph_IsConstantInitializer, const OrtGraphViewer* graph, const char* name, bool check_outer_scope, _Out_ bool* ret);

ORT_API2_STATUS(OrtGraph_GetNodesIndexInTopologicalOrder, const OrtGraphViewer* graph, _Out_ size_t* len, _Out_ const size_t** nodes_index_in_topological_order);

ORT_API2_STATUS(OrtGraph_GetOrtNode, const OrtGraphViewer* graph, size_t node_index, _Outptr_ const OrtNode** node);

ORT_API2_STATUS(OrtNode_GetOpType, const OrtNode* node, _Out_ const char** op_type);

ORT_API2_STATUS(OrtNode_GetInputSize, const OrtNode* node, _Out_ size_t* input_size);

ORT_API2_STATUS(OrtNode_GetIthInputName, const OrtNode* node, size_t i, _Out_ const char** ith_input_name);

ORT_API2_STATUS(OrtNode_GetOutputSize, const OrtNode* node, _Out_ size_t* output_size);

ORT_API2_STATUS(OrtNode_GetIthOutputName, const OrtNode* node, size_t i, _Out_ const char** ith_output_name);

ORT_API2_STATUS(OrtKernelRegistry_RegisterKernel, OrtKernelRegistry* kernel_registry, OrtCustomOp* custom_op, OrtTypeConstraints* type_constraints);
skottmckay (Contributor), Oct 11, 2024:
nit: slightly more readable if this is after the type-constraint functions, given it takes OrtTypeConstraints as an input. #Resolved

ORT_API2_STATUS(CreateOrtTypeConstraints, _Outptr_ OrtTypeConstraints** type_constraints);

ORT_API2_STATUS(AddTypeConstraint, _In_ OrtTypeConstraints* type_constraints, _In_ const char* type_symbol, ONNXTensorElementDataType type);

ORT_API2_STATUS(ReleaseOrtTypeConstraints, _In_ OrtTypeConstraints* type_constraints);
}; // struct OrtApi
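For the application side, the intended call order is: register the plugin library with the environment once, then append the EP by name to each session's options. The sketch below uses stubbed stand-ins for the two new OrtApi entries so it is self-contained; real code would obtain them through OrtGetApiBase()->GetApi(), and the library path, EP name, and option key are illustrative assumptions.

```cpp
#include <cassert>
#include <cstddef>

// Stand-in declarations; nullptr OrtStatus* means success, matching the C API convention.
struct OrtEnv;
struct OrtSessionOptions;
typedef struct OrtStatus OrtStatus;

// Stub standing in for the PR's RegisterOrtExecutionProviderLibrary: the real
// implementation loads lib_path, creates the plugin EP factory, and stores it
// in the Environment under ep_name (see InsertCustomEp in environment.h).
OrtStatus* RegisterOrtExecutionProviderLibrary(const char* lib_path, OrtEnv* env, const char* ep_name) {
  return nullptr;
}

// Stub standing in for SessionOptionsAppendOrtExecutionProvider: the real
// implementation looks up the factory registered under ep_name and appends an
// ExecutionProviderFactoryAdapter to the session options.
OrtStatus* SessionOptionsAppendOrtExecutionProvider(OrtSessionOptions* options, const char* ep_name,
                                                    OrtEnv* env, const char* const* keys,
                                                    const char* const* values, size_t num_keys) {
  return nullptr;
}

// Illustrative call order for using an out-of-tree EP.
OrtStatus* UseOutOfTreeEp(OrtEnv* env, OrtSessionOptions* so) {
  if (OrtStatus* s = RegisterOrtExecutionProviderLibrary("./libmy_ep.so", env, "myEp")) return s;
  const char* keys[] = {"device_id"};
  const char* values[] = {"0"};
  return SessionOptionsAppendOrtExecutionProvider(so, "myEp", env, keys, values, 1);
}
```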

/*
* Steps to use a custom op:
14 changes: 14 additions & 0 deletions onnxruntime/core/framework/ort_type_constraints.cc
@@ -0,0 +1,14 @@
// Copyright (c) Microsoft Corporation. All rights reserved.

// Licensed under the MIT License.

#include "core/framework/ort_type_constraints.h"

bool OrtTypeConstraints::AddTypeConstraint(const char* type_symbol, ONNXTensorElementDataType type) {
std::unordered_map<std::string, std::set<ONNXTensorElementDataType>>::iterator iter = type_constraints_.find(type_symbol);
if (iter == type_constraints_.end()) {
std::set<ONNXTensorElementDataType> types{type};
type_constraints_[type_symbol] = types;
return true;
}
return (iter->second).insert(type).second;
}
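The return-value contract here is easy to miss: true means the (symbol, type) pair was newly inserted; false means that exact pair was already registered. A self-contained sketch of the same logic (with the ONNXTensorElementDataType enum replaced by int for brevity):

```cpp
#include <cassert>
#include <set>
#include <string>
#include <unordered_map>

// Mirrors OrtTypeConstraints::AddTypeConstraint: each type symbol (e.g. "T")
// maps to the set of element types it accepts; inserting a duplicate
// (symbol, type) pair returns false.
struct TypeConstraints {
  bool Add(const std::string& type_symbol, int type) {
    auto iter = constraints_.find(type_symbol);
    if (iter == constraints_.end()) {
      constraints_[type_symbol] = {type};
      return true;
    }
    return iter->second.insert(type).second;  // std::set reports whether insertion happened
  }
  std::unordered_map<std::string, std::set<int>> constraints_;
};
```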
93 changes: 93 additions & 0 deletions onnxruntime/core/framework/provider_adapter.h
@@ -0,0 +1,93 @@
// Copyright (c) Microsoft Corporation. All rights reserved.

// Licensed under the MIT License.

#pragma once
#include "core/session/onnxruntime_c_api.h"
#include "core/framework/compute_capability.h"

namespace onnxruntime {
class ExecutionProviderAdapter : public IExecutionProvider {
public:
ExecutionProviderAdapter(OrtExecutionProvider* ep) : IExecutionProvider(ep->type), ep_impl_(ep) {
if (ep_impl_->RegisterKernels) {
kernel_registry_ = std::make_shared<KernelRegistry>();
ep_impl_->RegisterKernels(reinterpret_cast<OrtKernelRegistry*>(kernel_registry_.get()));
}
}
virtual std::vector<std::unique_ptr<ComputeCapability>> GetCapability(const GraphViewer& graph_viewer, const IKernelLookup& kernel_lookup) const override {
size_t cnt = 0;
OrtIndexedSubGraph** indexed_subgraph = nullptr;
if (ep_impl_->GetCapability) ep_impl_->GetCapability(ep_impl_, reinterpret_cast<const OrtGraphViewer*>(&graph_viewer), &cnt, &indexed_subgraph);

if (cnt == 0) return IExecutionProvider::GetCapability(graph_viewer, kernel_lookup);

std::vector<std::unique_ptr<ComputeCapability>> ret;
for (size_t i = 0; i < cnt; i++) {
std::unique_ptr<IndexedSubGraph> sb = std::make_unique<IndexedSubGraph>();
sb->nodes.reserve(indexed_subgraph[i]->node_index_len);
for (size_t j = 0; j < indexed_subgraph[i]->node_index_len; j++) sb->nodes.push_back((indexed_subgraph[i]->node_index)[j]);
if (indexed_subgraph[i]->meta_def != nullptr) {
std::unique_ptr<IndexedSubGraph::MetaDef> meta_def = std::make_unique<IndexedSubGraph::MetaDef>();
meta_def->name = indexed_subgraph[i]->meta_def->name ? indexed_subgraph[i]->meta_def->name : "";
meta_def->doc_string = indexed_subgraph[i]->meta_def->doc_string ? indexed_subgraph[i]->meta_def->doc_string : "";
meta_def->domain = indexed_subgraph[i]->meta_def->domain ? indexed_subgraph[i]->meta_def->domain : "";
meta_def->since_version = indexed_subgraph[i]->meta_def->since_version;

meta_def->inputs.reserve(indexed_subgraph[i]->meta_def->input_len);
for (size_t j = 0; j < indexed_subgraph[i]->meta_def->input_len; j++) meta_def->inputs.push_back(indexed_subgraph[i]->meta_def->inputs[j]);

meta_def->outputs.reserve(indexed_subgraph[i]->meta_def->output_len);
for (size_t j = 0; j < indexed_subgraph[i]->meta_def->output_len; j++) meta_def->outputs.push_back(indexed_subgraph[i]->meta_def->outputs[j]);

meta_def->constant_initializers.reserve(indexed_subgraph[i]->meta_def->initializer_len);
for (size_t j = 0; j < indexed_subgraph[i]->meta_def->initializer_len; j++) meta_def->constant_initializers.push_back(indexed_subgraph[i]->meta_def->constant_initializers[j]);

sb->SetMetaDef(std::move(meta_def));
}

ret.push_back(std::make_unique<ComputeCapability>(std::move(sb)));
}
return ret;
}

virtual common::Status Compile(const std::vector<FusedNodeAndGraph>& fused_nodes_and_graphs, std::vector<NodeComputeInfo>& node_compute_funcs) override {
std::vector<const OrtGraphViewer*> ortGraphs;
std::vector<const OrtNode*> ortNodes;
for (auto& fused_node_graph : fused_nodes_and_graphs) {
const GraphViewer& graph_viewer = fused_node_graph.filtered_graph;
const Node& fused_node = fused_node_graph.fused_node;
ortGraphs.push_back(reinterpret_cast<const OrtGraphViewer*>(&graph_viewer));
ortNodes.push_back(reinterpret_cast<const OrtNode*>(&fused_node));
}
size_t count = fused_nodes_and_graphs.size();
std::vector<OrtNodeComputeInfo> cache;
cache.resize(count);
OrtNodeComputeInfo* cache_data = cache.data();
ep_impl_->Compile(ep_impl_, ortGraphs.data(), ortNodes.data(), count, &cache_data);
node_compute_funcs.reserve(count);
for (size_t i = 0; i < count; i++) {
NodeComputeInfo compute_info;
compute_info.create_state_func = [&, cache, i](ComputeContext* context, void** state) {
if (cache[i].CreateFunctionStateFunc) return cache[i].CreateFunctionStateFunc(reinterpret_cast<OrtComputeContext*>(context), state);
return 0;
};
compute_info.compute_func = [&, cache, i](void* state, const OrtApi* api, OrtKernelContext* context) {
return ToStatus(cache[i].ComputeFunc(state, api, context));
};
compute_info.release_state_func = [&, cache, i](void* state) {
if (cache[i].DestroyFunctionStateFunc) {
cache[i].DestroyFunctionStateFunc(state);
}
};
node_compute_funcs.emplace_back(std::move(compute_info));
}

return Status::OK();
}

virtual std::shared_ptr<KernelRegistry> GetKernelRegistry() const override { return kernel_registry_; }
private:
OrtExecutionProvider* ep_impl_;
std::shared_ptr<KernelRegistry> kernel_registry_; // TODO(leca): should be static local
};
}
34 changes: 34 additions & 0 deletions onnxruntime/core/framework/provider_factory_adapter.h
@@ -0,0 +1,34 @@
// Copyright (c) Microsoft Corporation. All rights reserved.

// Licensed under the MIT License.

#pragma once
#include "core/providers/providers.h"
#include "provider_adapter.h"

namespace onnxruntime {
struct ExecutionProviderFactoryAdapter : IExecutionProviderFactory {
ExecutionProviderFactoryAdapter(OrtExecutionProviderFactory* ep_factory, const char* const* provider_option_keys, const char* const* provider_option_values, size_t provider_option_length)
: ep_factory_(ep_factory), provider_option_length_(provider_option_length) {
provider_option_keys_.reserve(provider_option_length);
provider_option_values_.reserve(provider_option_length);
keys_.reserve(provider_option_length);
values_.reserve(provider_option_length);
for (size_t i = 0; i < provider_option_length; i++) {
provider_option_keys_.push_back(provider_option_keys[i]);
provider_option_values_.push_back(provider_option_values[i]);
keys_.push_back(provider_option_keys_[i].c_str());
values_.push_back(provider_option_values_[i].c_str());
}
}

std::unique_ptr<IExecutionProvider> CreateProvider() override {
return std::make_unique<ExecutionProviderAdapter>(ep_factory_->CreateExecutionProvider(ep_factory_, keys_.data(), values_.data(), provider_option_length_));
}
OrtExecutionProviderFactory* ep_factory_;
//const char* const* provider_option_keys_;
//const char* const* provider_option_values_;
std::vector<std::string> provider_option_keys_, provider_option_values_;
std::vector<const char*> keys_, values_;
size_t provider_option_length_;
};
}
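The factory adapter above copies the incoming provider-option C strings into std::string storage and keeps parallel const char* vectors, so CreateProvider can hand the C factory callback stable arrays; reserving before the loop is what keeps the c_str() pointers valid, because the vector of strings never reallocates afterwards. A self-contained sketch of that ownership pattern:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Sketch of the string-ownership pattern in ExecutionProviderFactoryAdapter.
struct OptionStore {
  OptionStore(const char* const* keys, const char* const* values, size_t n) {
    // Reserve first: with capacity fixed, later emplace_backs never move the
    // std::strings, so the c_str() pointers captured below stay valid.
    keys_storage_.reserve(n);
    values_storage_.reserve(n);
    for (size_t i = 0; i < n; ++i) {
      keys_storage_.emplace_back(keys[i]);
      values_storage_.emplace_back(values[i]);
      key_ptrs_.push_back(keys_storage_.back().c_str());
      value_ptrs_.push_back(values_storage_.back().c_str());
    }
  }
  std::vector<std::string> keys_storage_, values_storage_;
  std::vector<const char*> key_ptrs_, value_ptrs_;  // stable views for the C callback
};
```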
78 changes: 76 additions & 2 deletions onnxruntime/core/session/custom_ops.cc
@@ -25,6 +25,7 @@
#include "core/session/inference_session.h"
#include "core/session/ort_apis.h"
#include "core/platform/threadpool.h"
#include "core/framework/ort_type_constraints.h"

// NOTE: OrtKernelContext is used by both custom ops and compiled kernels.
// In a minimal build, ORT_EXTENDED_MINIMAL_BUILD is used to enable EPs like CoreML/NNAPI which use compiled kernels,
@@ -49,9 +50,9 @@ static constexpr uint32_t min_ort_version_with_shape_inference = 17;
#endif

#if !defined(DISABLE_FLOAT8_TYPES)
#define SUPPORTED_TENSOR_TYPES DataTypeImpl::AllTensorTypesIRv9()
#define SUPPORTED_TENSOR_TYPES onnxruntime::DataTypeImpl::AllTensorTypesIRv9()
#else
#define SUPPORTED_TENSOR_TYPES DataTypeImpl::AllTensorTypesIRv4()
#define SUPPORTED_TENSOR_TYPES onnxruntime::DataTypeImpl::AllTensorTypesIRv4()
#endif

#if defined(ORT_MINIMAL_BUILD)
@@ -1331,3 +1332,76 @@ common::Status CreateCustomRegistry(gsl::span<OrtCustomOpDomain* const> op_domai

} // namespace onnxruntime
#endif // ENABLE_CUSTOM_OP_API

//namespace onnxruntime {
class FuncManager;
class OpKernelInfo;
onnxruntime::KernelCreateInfo CreateKernelCreateInfo2(const std::string& domain, const OrtCustomOp* op, OrtTypeConstraints* type_constraints) {
const size_t input_count = op->GetInputTypeCount(op);

onnxruntime::KernelDefBuilder def_builder;
def_builder.SetName(op->GetName(op))
.SetDomain(domain);

if (op->version >= min_ort_version_with_custom_version) {
if (op->GetStartVersion && op->GetEndVersion) {
def_builder.SinceVersion(op->GetStartVersion(op), op->GetEndVersion(op));
} else if (op->GetStartVersion) {
def_builder.SinceVersion(op->GetStartVersion(op));
} else {
def_builder.SinceVersion(1);
}
} else {
def_builder.SinceVersion(1);
}

// GetInputMemoryType was introduced in ver 13. This check allows custom ops compiled using older versions
// to work with newer versions (> 12) of the ORT binary.
if (op->version > 12) {
for (size_t i = 0; i < input_count; i++) {
def_builder.InputMemoryType(op->GetInputMemoryType(op, i), gsl::narrow_cast<int>(i));
}
}

const std::unordered_map<std::string, std::set<ONNXTensorElementDataType>>& tc = type_constraints->GetTypeConstraints();
for (const auto& [type_symbol, types] : tc) {
for (const auto& type : types) {
def_builder.TypeConstraint(type_symbol, onnxruntime::DataTypeImpl::TensorTypeFromONNXEnum(static_cast<int>(type))->AsTensorType());
}
}

if (const char* provider_type = op->GetExecutionProviderType(op)) {
def_builder.Provider(provider_type);
} else {
def_builder.Provider(onnxruntime::kCpuExecutionProvider);
}

if (op->version >= 18 && op->GetMayInplace != nullptr) {
int* input_index = nullptr;
int* output_index = nullptr;
size_t len = op->GetMayInplace(&input_index, &output_index);
if (len > 0) {
for (size_t i = 0; i < len; i++) def_builder.MayInplace(input_index[i], output_index[i]);
op->ReleaseMayInplace(input_index, output_index);
}
}

if (op->version >= 18 && op->GetAliasMap != nullptr) {
int* input_index = nullptr;
int* output_index = nullptr;
size_t len = op->GetAliasMap(&input_index, &output_index);
if (len > 0) {
for (size_t i = 0; i < len; i++) def_builder.Alias(input_index[i], output_index[i]);
op->ReleaseAliasMap(input_index, output_index);
}
}

onnxruntime::KernelCreateFn kernel_create_fn = [op](onnxruntime::FuncManager&, const onnxruntime::OpKernelInfo& info,
std::unique_ptr<onnxruntime::OpKernel>& out) -> onnxruntime::common::Status {
out = std::make_unique<onnxruntime::CustomOpKernel>(info, *op);
return onnxruntime::common::Status::OK();
};

return onnxruntime::KernelCreateInfo(def_builder.Build(), kernel_create_fn);
}
//} // namespace onnxruntime