Dynamic axes / why not dynamic everything? #5452
Replies: 2 comments
-
In early testing, I'm not seeing any performance difference between fixing the batch size and leaving it dynamic, so is there any reason not to choose the most flexible option?
-
A tensor can have only one dynamic dimension. I don't know whether the ONNX spec says that explicitly, but at the application level I can't implement anything else. A tensor is an n-dimensional array. If you have multiple tensors with the same shape, say three tensors each of shape (C, H, W), then I can pack those three into a single tensor of shape (3, C, H, W) without losing any information, and we usually call that first dimension the "batch size". However, if the shapes differ, I would need an extra array to store the shape of each tensor, and that is not how a tensor is supposed to work.
-
I understand that when calling `torch.onnx.export` I can specify dynamic axes for whichever dimensions of the input tensor I want to be able to vary during inference. Why not simply make all axes dynamic (so we can later change our minds about batch size or image size)? I assume there is some performance reason for this?