Hi,
I trained a neural network that provides (x, y) coordinates of objects and descriptors for them (similar to SuperPoint).
When I use ground-truth points, LightGlue matching works excellently. But when I use the estimated (x, y) coordinates, LightGlue filters the matches heavily: out of 264 detected points, it returns only 110 matches.
It is worth highlighting that the objects can move slightly between frames.
Is it possible to extend the algorithm for this purpose? Could you suggest any changes/improvements for such a task?
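I set the parameters roughly as in this minimal sketch (the image paths and exact values are illustrative, and `SuperPoint` stands in here for my own network):

```python
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Detector/descriptor network; SuperPoint stands in for my own model here.
extractor = SuperPoint(max_num_keypoints=2048).eval().to(device)

# filter_threshold controls how aggressively low-confidence matches are
# discarded (default 0.1); depth_confidence and width_confidence control
# early stopping and point pruning (-1 disables them).
matcher = LightGlue(
    features='superpoint',
    filter_threshold=0.1,
    depth_confidence=-1,
    width_confidence=-1,
).eval().to(device)

image0 = load_image('frame_0.jpg').to(device)  # illustrative paths
image1 = load_image('frame_1.jpg').to(device)

feats0 = extractor.extract(image0)
feats1 = extractor.extract(image1)
matches01 = matcher({'image0': feats0, 'image1': feats1})

# Drop the batch dimension and read out matched keypoint coordinates.
feats0, feats1, matches01 = [rbd(x) for x in [feats0, feats1, matches01]]
matches = matches01['matches']                  # (K, 2) keypoint index pairs
points0 = feats0['keypoints'][matches[..., 0]]  # (K, 2) (x, y) in image0
points1 = feats1['keypoints'][matches[..., 1]]  # (K, 2) (x, y) in image1
```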
Each LightGlue model is trained for a single type of descriptors. Using LightGlue with your own descriptors requires retraining it. We do not provide the training code yet but will do so soon.
The models that we released are trained on static scenes (MegaDepth) and are thus likely to reject matches between dynamic objects, even if they move only slightly. You might need to retrain LightGlue with your own data for optimal performance.
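For instance, each released checkpoint is selected through the `features` argument and expects descriptors of exactly that type and dimensionality (a sketch with the released configurations):

```python
from lightglue import LightGlue

# Each released checkpoint is tied to one descriptor type:
# 'superpoint' expects 256-D SuperPoint descriptors,
# 'disk' expects 128-D DISK descriptors.
matcher = LightGlue(features='superpoint').eval()

# Feeding this matcher descriptors from a different network will match
# poorly; it would need to be retrained on that descriptor distribution.
```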