Hi, thank you for the work and for making the code available. For a use case, I would like to run the model on more than two images and get the camera poses. The naive approach was to run with the reference image fixed while the destination image changes, but that did not work. Could you comment on how I could run the model on more than two images and use the output poses?
We have only experimented with two images; multi-view is still something we need to explore. Having said that, I would have started the same way you did: using one image as the anchor and computing the relative poses of all the other images with respect to the anchor.
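The anchor strategy described above can be sketched as follows. Note that `predict_relative_pose` is a hypothetical stand-in for the repository's actual two-view inference call, and the convention assumed here (X_dst = R @ X_ref + t) is an assumption, not confirmed by the source:

```python
import numpy as np

def predict_relative_pose(img_ref, img_dst):
    """Placeholder for the two-view model. Assumed to return (R, t)
    mapping points from the reference camera frame into the destination
    camera frame: X_dst = R @ X_ref + t. Replace with the real model call."""
    # Dummy identity prediction so the sketch runs end to end.
    return np.eye(3), np.zeros(3)

def poses_wrt_anchor(images):
    """Fix the first image as the anchor and predict every other
    camera's pose relative to it (one two-view model call per pair)."""
    poses = [(np.eye(3), np.zeros(3))]  # anchor defines the world origin
    anchor = images[0]
    for img in images[1:]:
        R, t = predict_relative_pose(anchor, img)
        poses.append((R, t))
    return poses

def camera_center(R, t):
    """Camera center in the anchor frame: C = -R^T t."""
    return -R.T @ t
```

With this setup every predicted pose lives in the anchor's coordinate frame, so the recovered camera centers can be compared directly against GT trajectories such as the semi-circle below.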
Can you give some details about why that did not work? Is it an accuracy problem, or does the coordinate system seem completely wrong?
It could be an accuracy issue, looking at the GT and the predicted poses in the attached image. The GT poses form a semi-circle, so the expectation was that the predicted poses would follow the same shape to some degree. But even some of the images close to the anchor (the camera at the origin) had issues.
Any hints on what could be the reason for this?
(Edit)
This is a separate question: are the returned R, t the pose of cam1 with respect to cam2, or the other way around?
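Whichever convention the model uses, the two are related by a simple pose inversion, so one can test both against GT and keep whichever matches. A minimal sketch, assuming (R, t) maps cam1 coordinates into cam2 (X2 = R @ X1 + t):

```python
import numpy as np

def invert_pose(R, t):
    """If X2 = R @ X1 + t, then X1 = R.T @ X2 - R.T @ t.
    Applying this to a predicted (R, t) swaps which camera is
    the reference, i.e. flips between the two conventions."""
    return R.T, -R.T @ t

# Sanity check with a random rigid transform.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))  # proper rotation, det = +1
t = rng.normal(size=3)

Ri, ti = invert_pose(R, t)
X1 = rng.normal(size=3)
X2 = R @ X1 + t
assert np.allclose(Ri @ X2 + ti, X1)  # round trip recovers X1
```

Comparing both (R, t) and its inverse against a known GT pair is usually the quickest way to pin down a repository's convention when the documentation does not state it.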