About Inference time #36
How many keypoints are there in the two images? If there are fewer than 1k, can you compare the runtimes with point pruning turned off? We indeed need to add some checks in the model.
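For later readers: a minimal sketch of how point pruning can be switched off through the Python API, where (per the LightGlue README) setting a confidence to `-1` disables the corresponding adaptive mechanism. The exact options may have differed at the time of this thread:

```python
from lightglue import LightGlue

# Default: adaptive depth (early stopping) and width (point pruning) enabled.
matcher = LightGlue(features='superpoint').eval().cuda()

# Disable both adaptive mechanisms to measure the full network cost.
# Setting a confidence to -1 turns the corresponding mechanism off.
matcher_full = LightGlue(
    features='superpoint',
    depth_confidence=-1,   # never stop early
    width_confidence=-1,   # never prune points
).eval().cuda()
```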
What inference times are you seeing?
@skydes The two images have 274 and 265 keypoints respectively. Whether point pruning is turned on or off, LightGlue takes more time.
@kvnptl
We realized that run-time overhead limited the inference speed of LightGlue with few keypoints. We pushed some improvements in PR #37. Could you maybe check out the corresponding branch and run the benchmark script?
To add SuperGlue to the benchmark you also need hloc. A rough sketch of such a timing loop follows below.
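Not the repository's actual script, but a minimal sketch of what such a benchmark typically does, assuming LightGlue's documented input format (a dict with `keypoints` and `descriptors` per image; names and shapes may differ across versions). The key points are warming up first, synchronizing the GPU before reading timers, and averaging over many runs:

```python
import time
import torch
from lightglue import LightGlue

torch.set_grad_enabled(False)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
matcher = LightGlue(features='superpoint').eval().to(device)

# Synthetic features with ~270 keypoints each, mimicking the reported case.
def fake_feats(num_kpts, dim=256):
    return {
        'keypoints': torch.rand(1, num_kpts, 2, device=device) * 640,
        'descriptors': torch.rand(1, num_kpts, dim, device=device),
        'image_size': torch.tensor([[640., 480.]], device=device),
    }

data = {'image0': fake_feats(274), 'image1': fake_feats(265)}

# Warm-up: the first calls include CUDA context and kernel setup costs.
for _ in range(10):
    matcher(data)

if device == 'cuda':
    torch.cuda.synchronize()
start = time.perf_counter()
n_runs = 100
for _ in range(n_runs):
    matcher(data)
if device == 'cuda':
    torch.cuda.synchronize()
print(f'{(time.perf_counter() - start) / n_runs * 1e3:.2f} ms / pair')
```

Without the `synchronize()` calls, CUDA's asynchronous execution makes wall-clock timings meaningless, which is a common source of contradictory benchmark numbers.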
I met a similar problem. For the images DSC_0410.JPG and DSC_0411.JPG, it seems that with flash attention disabled, LightGlue is only ~30% faster than SuperGlue. I ran this evaluation on an RTX 2070 (I have excluded the time consumed by SuperPoint).
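For anyone reproducing this: current LightGlue versions expose a `flash` config flag on the matcher; whether this option existed under this name at the time of the thread is an assumption. A minimal sketch:

```python
from lightglue import LightGlue

# Assumed config key: 'flash' toggles FlashAttention-style kernels.
# Without them (unsupported GPU or flash=False), attention falls back to a
# standard implementation and LightGlue's speed margin over SuperGlue shrinks.
matcher = LightGlue(features='superpoint', flash=False).eval().cuda()
```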
This looks good to me. Note that in the plot above we benchmark against SuperGlue-fast, which uses fewer Sinkhorn iterations. To compare against the original SuperGlue, you should uncomment the line here. The main run-time improvement of LightGlue comes from its adaptiveness, and this is clearly visible in your benchmark.
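For concreteness, the gap between the two baselines is just the Sinkhorn iteration count. One way to inspect this is through hloc's matcher configs; the conf names (`superglue`, `superglue-fast`) and their exact iteration counts are assumptions that may vary by hloc version:

```python
from hloc import match_features

# hloc ships SuperGlue matcher configs differing mainly in the number of
# Sinkhorn (optimal transport) iterations: more iterations is slower but
# closer to the original paper's setting.
conf_full = match_features.confs['superglue']       # assumed: full iterations
conf_fast = match_features.confs['superglue-fast']  # assumed: reduced iterations
print(conf_full['model'].get('sinkhorn_iterations'),
      conf_fast['model'].get('sinkhorn_iterations'))
```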
I tested LightGlue and SuperGlue on both CPU and GPU. In both cases SuperGlue takes less time, but in the paper LightGlue is faster. I want to know why there is such a contradiction. My CPU is an Intel(R) Xeon(R) Gold 6242 @ 2.80GHz and my GPU is an RTX 2080 Ti.