
v2.0: Dynamic Batch LightGlue-ONNX

@fabio-sim fabio-sim released this 17 Jul 15:49
· 11 commits to main since this release
9ebf215

Dynamic Batch LightGlue ONNX

Blog Post

This release provides LightGlue ONNX models that support dynamic batch sizes. Depending on where you are running inference, use the corresponding models and signatures:

  • ONNX-only - *_lightglue_pipeline.onnx
    $$(2B, 1, H, W)\rightarrow(2B, 1024, 2), (M, 3), (M,)$$
  • ONNX Runtime CPU & CUDA - *_lightglue_pipeline.ort.onnx
    $$(2B, 1, H, W)\rightarrow(2B, 1024, 2), (M, 3), (M,)$$
  • TensorRT - *_lightglue_pipeline.trt.onnx
    $$(2, 1, 1024, 1024)\rightarrow(2, 1024, 2), (M, 3), (M,)$$

All models were exported with --num-keypoints 1024. Note that the TensorRT model has a static input shape for ease of use and better performance (enabling TRT FP16 is highly recommended for the best results). The inputs should follow the interleaved batch convention described in the blog post above.

Essentially, if you have a batch of left images
$$[L_0, L_1, L_2, \ldots]$$
that you would like to match with a batch of right images
$$[R_0, R_1, R_2, \ldots]$$
interleave them like this:
$$[L_0, R_0, L_1, R_1, L_2, R_2, \ldots]$$
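The interleaving above can be done with a stack-and-reshape in NumPy. A minimal sketch (small tensor dimensions stand in for real image sizes, and the random arrays stand in for actual images):

```python
import numpy as np

# Small dimensions for illustration; real inputs are grayscale images,
# e.g. (B, 1, 1024, 1024) for the TensorRT model.
B, H, W = 3, 32, 32
left = np.random.rand(B, 1, H, W).astype(np.float32)
right = np.random.rand(B, 1, H, W).astype(np.float32)

# Stack along a new pair axis -> (B, 2, 1, H, W), then flatten the first
# two axes to get the interleaved order [L0, R0, L1, R1, L2, R2, ...].
batch = np.stack([left, right], axis=1).reshape(2 * B, 1, H, W)
```

The resulting `batch` tensor is what gets fed to the pipeline model as its single `(2B, 1, H, W)` input.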

The outputs are the keypoints, matches, and match scores tensors, where M is the total number of matches across all pairs in the batch, matches[:, 0] is the batch (pair) index, and matches[:, 1:] holds the match indices between the left and right images' keypoints.
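As an illustration of consuming these outputs, the flat matches tensor can be split back into per-pair correspondences using the batch index in column 0. A sketch with synthetic arrays standing in for real model outputs:

```python
import numpy as np

# Synthetic stand-ins mimicking the pipeline's output signature:
# keypoints: (2B, K, 2), matches: (M, 3), scores: (M,)
B, K = 2, 1024
keypoints = np.random.rand(2 * B, K, 2).astype(np.float32)
matches = np.array([[0, 10, 12], [0, 55, 60], [1, 3, 7]], dtype=np.int64)
scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)

pairs = []
for b in range(B):
    mask = matches[:, 0] == b  # rows belonging to pair b
    m = matches[mask]
    # Under the interleaved convention, pair b's left image sits at batch
    # index 2b and its right image at 2b + 1.
    kpts_left = keypoints[2 * b][m[:, 1]]
    kpts_right = keypoints[2 * b + 1][m[:, 2]]
    pairs.append((kpts_left, kpts_right, scores[mask]))
```

Each entry of `pairs` then holds the matched left/right keypoint coordinates and their scores for one image pair.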