About IMC 2023 #47
Hi @Livesoso, in our experiments we observed that LightGlue is generally better than SuperGlue on all training scenes except for Dioscuri, where SuperGlue is slightly better (but by 2-4% max, not 20%). Our reasoning is that the relative positional encoding lets LightGlue learn the data distribution more effectively. In the training dataset we used (MegaDepth), in-plane rotations are non-existent, while they dominate on Dioscuri. However, these rotations can easily be fixed, e.g. from the EXIF data in the image, or with a deep network. Here are some results on Dioscuri with a very simple baseline (just hloc, NetVLAD top-50 and SuperPoint with 4K keypoints):
SP+SG: 0.525 mAA
There are also many other cool solutions to the in-plane rotation problem on Kaggle, so be sure to check them out!
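For anyone who wants to try the EXIF fix mentioned above, here is a minimal sketch (not part of the original reply) that uprights an image with Pillow before feature extraction; the function name is my own and the approach only works when the camera wrote the Orientation tag:

```python
from PIL import Image, ImageOps

def load_upright(path):
    """Open an image and apply its EXIF Orientation tag so that
    in-plane rotations from the camera are undone before feature
    extraction. exif_transpose is a no-op if the tag is missing."""
    img = Image.open(path)
    return ImageOps.exif_transpose(img)
```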
Thank you very much!
Hello, I want to confirm some parameters for SuperPoint and LightGlue. When using SP+LG, what are the best parameters for LightGlue?
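For reference, a minimal sketch of configuring LightGlue for maximum accuracy, assuming the cvg/LightGlue Python API; the specific values are illustrative, not an official recommendation from the authors:

```python
import torch
from lightglue import LightGlue

# Assumed cvg/LightGlue API: setting the adaptivity controls to -1
# disables the speed optimizations, trading runtime for accuracy.
matcher = LightGlue(
    features='superpoint',
    depth_confidence=-1,   # -1 disables adaptive depth (early exit)
    width_confidence=-1,   # -1 disables adaptive width (point pruning)
    filter_threshold=0.1,  # match confidence threshold (library default)
).eval().to('cuda' if torch.cuda.is_available() else 'cpu')
```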
Thank you very much for your work!
I have some questions about IMC 2023. I wanted to try my pipeline with your LightGlue. My feature matching was SuperPoint and SuperGlue, which got 0.65 on the heritage_dioscuri scene.
But when I switched to SuperPoint and LightGlue, the score was 0.48. I am very confused by this result because of the large decline.
It is strange, because on the other scenes the scores improved.
The two runs used the same settings: images resized to 1600 and 2048 SuperPoint keypoints. A sketch of this setup follows below.
Thank you
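To make the comparison concrete, here is a minimal sketch of the SP+LG run with the settings described above (longer image side resized to 1600 px, 2048 SuperPoint keypoints), assuming the cvg/LightGlue API; the image paths are placeholders, and depending on the library version the extractor may apply its own internal resize as well:

```python
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# 2048 keypoints, matching the settings described above
extractor = SuperPoint(max_num_keypoints=2048).eval().to(device)
matcher = LightGlue(features='superpoint').eval().to(device)

def resize_longest(img, size=1600):
    # Resize so the longer side is `size` px, keeping the aspect ratio.
    _, h, w = img.shape
    s = size / max(h, w)
    return torch.nn.functional.interpolate(
        img[None], (round(h * s), round(w * s)),
        mode='bilinear', align_corners=False)[0]

image0 = resize_longest(load_image('img0.jpg')).to(device)  # placeholder path
image1 = resize_longest(load_image('img1.jpg')).to(device)  # placeholder path

feats0 = extractor.extract(image0)
feats1 = extractor.extract(image1)
matches01 = matcher({'image0': feats0, 'image1': feats1})
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]
matches = matches01['matches']  # (K, 2) indices into keypoints0 / keypoints1
```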