Replies: 2 comments
-
See comment in bold below. This is just a restatement of the question in the prior post: is there a way to get the 'given tensor shape' from the graph, within the Compute() method (using context, I suppose)?

From include/onnxruntime/core/framework/op_kernel.h:

```cpp
// Fetch output (non-tensor) with specified index.
...
// In the case that memory allocation has not been done for an output tensor,
```
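For context, those comments sit next to the two Output accessors on OpKernelContext in that header. The declarations look roughly like this (paraphrased and abridged, not a verbatim copy of the header):

```cpp
// OpKernelContext (include/onnxruntime/core/framework/op_kernel.h), abridged:
class OpKernelContext {
 public:
  // Non-tensor outputs: no shape needed.
  template <typename T>
  T* Output(int index);

  // Tensor outputs: the caller supplies the shape, and the allocation happens
  // on the fly with that given shape -- which is why Compute() has to know the
  // output shape from somewhere.
  Tensor* Output(int index, const TensorShape& shape);
  // ...
};
```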
-
Currently there's no way to fetch the output shape from the graph using OpKernelContext.
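One workaround (a sketch under assumptions, not something stated in this thread): since the kernel cannot query the graph, the model can carry the expected output dims as a node attribute, which the kernel reads at construction time through OpKernelInfo and then passes to Output(index, shape) in Compute(). The `output_shape` attribute name and the op itself are hypothetical, and kernel registration (e.g. ONNX_OPERATOR_KERNEL_EX) is omitted.

```cpp
#include <vector>

#include "core/framework/op_kernel.h"

namespace onnxruntime {
namespace contrib {

// Hypothetical kernel: the "output_shape" attribute name is an assumption,
// not something defined by ONNX Runtime or by this thread.
class MyNewContribOp final : public OpKernel {
 public:
  explicit MyNewContribOp(const OpKernelInfo& info) : OpKernel(info) {
    // Read the expected output dims from a node attribute at construction time,
    // since OpKernelContext cannot provide the graph's output shape later on.
    ORT_ENFORCE(info.GetAttrs<int64_t>("output_shape", output_dims_).IsOK(),
                "output_shape attribute is required");
  }

  Status Compute(OpKernelContext* context) const override {
    TensorShape output_shape(output_dims_);
    Tensor* output = context->Output(1, output_shape);
    // ... run the op and write into `output` ...
    (void)output;  // suppress unused-variable warning in this sketch
    return Status::OK();
  }

 private:
  std::vector<int64_t> output_dims_;
};

}  // namespace contrib
}  // namespace onnxruntime
```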
-
I see that Input shapes are available in the context:
```cpp
Status MyNewContribOp::Compute(OpKernelContext* context) const {
  auto X = context->Input<Tensor>(0);
  auto& dims = X->Shape();
```
But I would like that to be true for Outputs as well.
I don't want to hard code it:
```cpp
TensorShape output_shape_nms({1, 15130, 4});
Tensor* output_to_nms = context->Output(1, output_shape_nms);
```
The shape is available when I visualize my model's graph, since my outputs are inputs to other nodes. See the screenshot of my model displayed with Netron.
Is there a way to parse the graph and find the expected shape of my output? I don't see Output shape in the context.
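For what it's worth, the usual way to avoid the hard-coded shape is to derive the output dims from the input dims at runtime inside Compute(). A minimal sketch, under my own assumption (not stated in the thread) that output 1 is {1, num_boxes, 4} with num_boxes taken from the input's second dimension:

```cpp
Status MyNewContribOp::Compute(OpKernelContext* context) const {
  const Tensor* X = context->Input<Tensor>(0);
  const TensorShape& input_shape = X->Shape();

  // Assumption for illustration: output 1 has shape {1, num_boxes, 4}, where
  // num_boxes comes from the input's second dimension instead of being
  // hard-coded as 15130.
  const int64_t num_boxes = input_shape[1];
  TensorShape output_shape_nms({1, num_boxes, 4});
  Tensor* output_to_nms = context->Output(1, output_shape_nms);

  // ... compute and write into output_to_nms ...
  (void)output_to_nms;  // suppress unused-variable warning in this sketch
  return Status::OK();
}
```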