Dynamic Batch LightGlue ONNX
This release provides LightGlue ONNX models that support dynamic batch sizes. Depending on where you are running inference, use the corresponding models and signatures:
- ONNX-only: `*_lightglue_pipeline.onnx`
  $$(2B, 1, H, W)\rightarrow(2B, 1024, 2), (M, 3), (M,)$$
- ONNX Runtime CPU & CUDA: `*_lightglue_pipeline.ort.onnx`
  $$(2B, 1, H, W)\rightarrow(2B, 1024, 2), (M, 3), (M,)$$
- TensorRT: `*_lightglue_pipeline.trt.onnx`
  $$(2, 1, 1024, 1024)\rightarrow(2, 1024, 2), (M, 3), (M,)$$
All models were exported with `--num-keypoints 1024`. Note that the TensorRT model has a static input shape for easier usage and better performance (enabling TRT FP16 is highly recommended for the best results). The inputs should follow the interleaved batch convention described in the blog post above.
Essentially, if you have a batch of left images $L_0, L_1, \dots$ that you would like to match with a batch of right images $R_0, R_1, \dots$, interleave them along the batch dimension like this: $L_0, R_0, L_1, R_1, \dots$
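With NumPy, this interleaving can be done by stacking the two batches along a new axis and reshaping (a minimal sketch; the array names and the `(B, 1, H, W)` input layout here are assumptions based on the signatures above):

```python
import numpy as np

B, H, W = 4, 512, 512
left = np.random.rand(B, 1, H, W).astype(np.float32)   # batch of left images
right = np.random.rand(B, 1, H, W).astype(np.float32)  # batch of right images

# Stack to (B, 2, 1, H, W), then flatten the first two axes so the
# batch order becomes L0, R0, L1, R1, ...
images = np.stack([left, right], axis=1).reshape(2 * B, 1, H, W)

assert np.array_equal(images[0], left[0])
assert np.array_equal(images[1], right[0])
```

The resulting `(2B, 1, H, W)` array can be fed directly to the dynamic-batch pipeline models.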
The outputs are the keypoints, matches, and match scores tensors, where M is the total number of matches across all batches, `matches[:, 0]` is the batch index, and `matches[:, 1:3]` are the match indices (the left and right keypoint indices, respectively).
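To recover per-pair matches from the flattened outputs, the rows of `matches` can be grouped by their batch index (a minimal sketch using dummy arrays in place of a real inference run; the variable names are assumptions, and only the shapes follow the signatures above):

```python
import numpy as np

# Dummy outputs standing in for real model results (B = 2 image pairs).
# keypoints: (2B, 1024, 2), matches: (M, 3), scores: (M,)
keypoints = np.random.rand(4, 1024, 2).astype(np.float32)
matches = np.array([[0, 10, 20],
                    [0, 11, 21],
                    [1, 5, 6]], dtype=np.int64)
scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)

for b in range(2):
    mask = matches[:, 0] == b          # rows belonging to pair b
    kpts_left = keypoints[2 * b]       # interleaved layout: L_b at index 2b
    kpts_right = keypoints[2 * b + 1]  # R_b at index 2b + 1
    matched_left = kpts_left[matches[mask, 1]]    # (m_b, 2) left coordinates
    matched_right = kpts_right[matches[mask, 2]]  # (m_b, 2) right coordinates
    print(b, matched_left.shape, matched_right.shape, scores[mask].shape)
```

Each iteration yields the matched keypoint coordinates and scores for one left/right pair.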