Hey everyone,

I've been trying to get Kimera-VIO ROS up and running in Mono-IMU mode on a simulated synthetic dataset, and I'm having difficulty understanding why the optimized trajectory doesn't match the odometry.

Running Kimera in online mode on my dataset, it immediately starts printing an optimized trajectory that drifts severely away from the reference frame. The mono image output shows very good feature-point tracking, so I'm not sure which parameters I should focus on tuning, or whether this is a limitation of the current Kimera-VIO implementation. I've read about other people having similar trajectory-drift problems, and I'm curious whether anyone has resolved this.
Additional questions:
How do you specify that you're passing in a rectified image? From my understanding, the current implementation assumes that all cameras come with distortion.
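My best guess so far is to zero out the distortion coefficients in the camera params file. A minimal sketch of what I'm trying (field names follow the EuRoC-style example configs shipped with Kimera-VIO, so some may be off, and the intrinsics/resolution are just placeholders from my simulator):

```yaml
# LeftCameraParams.yaml (sketch) -- assumes a pinhole model with the image already
# rectified, so all distortion coefficients are zero. Field names follow the
# EuRoC-style example configs and may not match the current Kimera-VIO version exactly.
camera_id: left_cam
rate_hz: 30
resolution: [640, 480]                       # width, height of my synthetic images
camera_model: pinhole
intrinsics: [525.0, 525.0, 319.5, 239.5]     # fx, fy, cx, cy (placeholders from my sim)
distortion_model: radial-tangential
distortion_coefficients: [0.0, 0.0, 0.0, 0.0]  # zeros, since the image is pre-rectified
```

Is zeroing the coefficients enough, or is there a dedicated flag for rectified input?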
Given a monocular RGB-D input where the depth is a 32-bit image, is there a way to feed this into Kimera for mono mode, or is this currently limited to stereo mode?
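For context, this is roughly how the RGB-D pair comes off my simulator: a 32-bit float depth image (meters) published with cv_bridge. The topic names and conversion below are placeholders from my setup, not anything Kimera-specific:

```python
#!/usr/bin/env python
# Sketch of how my simulator publishes the RGB-D pair (placeholder topic names).
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
rospy.init_node("sim_rgbd_publisher")
rgb_pub = rospy.Publisher("/sim/rgb/image_raw", Image, queue_size=1)
depth_pub = rospy.Publisher("/sim/depth/image_raw", Image, queue_size=1)

def publish_frame(rgb, depth_m, stamp):
    """rgb: HxWx3 uint8 image, depth_m: HxW float32 depth in meters."""
    rgb_msg = bridge.cv2_to_imgmsg(rgb, encoding="rgb8")
    depth_msg = bridge.cv2_to_imgmsg(depth_m.astype(np.float32), encoding="32FC1")
    rgb_msg.header.stamp = stamp
    depth_msg.header.stamp = stamp
    rgb_pub.publish(rgb_msg)
    depth_pub.publish(depth_msg)
```

If Kimera can consume this in mono mode, I'm happy to remap topics or convert the depth encoding (e.g. to 16UC1 millimeters) if that's what it expects.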
Lastly, the RGB point cloud generated from tracking currently only publishes every half second or so. Is this due to too few features being tracked, or to alignment error?