# Results (Raw Trajectories) of Our Methods

For the convenience of the community, we release the results of our methods, including the estimated 6-DoF poses and the ground truth, in the form of rosbags. Since our source code is internal and not publicly available, we offer this solution for benchmark testing and performance comparison instead of re-running the source code. We strongly encourage peers to evaluate their proposed works on our dataset and to compare against the raw results of our methods using their own accuracy criteria.

All data sequences are evaluated in real time. Readers interested in the qualitative performance can refer to our bilibili channels: 👍 ⭐ 💰 Guan Weipeng or Chen Peiyu, where the evaluations of our works on different data sequences are recorded on video.

Tips: we recommend this Python package for the evaluation of odometry and SLAM. For example, to align the whole estimated trajectory to the ground truth in SE(3) and compute the Absolute Trajectory Error (ATE), you can run the following command:

```
evo_ape bag bag_name.bag /ground_truth_pose /estimated_pose -va -p
```

More details on trajectory evaluation can be found in Link.
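For peers who prefer to implement the evaluation themselves, the core of the ATE computation (SE(3) alignment of the estimated positions to the ground truth, then the RMSE of the residuals) can be sketched in a few lines of NumPy. This is an illustrative sketch, not our internal pipeline; it assumes both trajectories are already associated by timestamp, and it covers only the position error:

```python
import numpy as np

def align_se3(est, gt):
    """Umeyama alignment (rotation + translation, no scale) of est onto gt.

    est, gt: (N, 3) arrays of timestamp-associated positions.
    Returns R (3x3) and t (3,) such that gt ≈ est @ R.T + t.
    """
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    # Cross-covariance between the centered point sets
    cov = (gt - mu_gt).T @ (est - mu_est) / est.shape[0]
    U, _, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:  # enforce a proper rotation (det = +1)
        S[2, 2] = -1.0
    R = U @ S @ Vt
    t = mu_gt - R @ mu_est
    return R, t

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error after SE(3) alignment."""
    R, t = align_se3(est, gt)
    err = gt - (est @ R.T + t)
    return np.sqrt((err ** 2).sum(axis=1).mean())
```

evo performs this style of alignment internally (`-va` above aligns and prints verbose statistics), so the sketch is only useful if you want full control over the error definition.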

## Our Own Dataset

Evaluation results on our stereo event dataset, which is designed for stereo event-based VIO. Although this dataset uses stereo event cameras, it is also a good choice for evaluating monocular event-based VIO methods.

| Sequence Name | PL-EVIO | ESIO | ESVIO |
| :--- | :--- | :--- | :--- |
| Our results (raw trajectory) | link | link | link |
| hku_agg_translation | 0.048 | 0.35 | 0.063 |
| hku_agg_rotation | 0.15 | 0.51 | 0.11 |
| hku_agg_flip | 0.15 | 1.23 | 0.14 |
| hku_agg_walk | 0.37 | 1.14 | 0.27 |
| hku_hdr_circle | 0.068 | 0.23 | 0.081 |
| hku_hdr_slow | 0.069 | 0.17 | 0.059 |
| hku_hdr_tran_rota | 0.068 | 0.60 | 0.065 |
| hku_hdr_agg | 0.14 | 1.37 | 0.10 |
| hku_dark_normal | 1.25 | 0.32 | 0.39 |

Evaluation results of monocular purely event-based VIO using event cameras of different resolutions, DAVIS346 (346x260) and DVXplorer (640x480), on our monocular event dataset:

| Sequence Name | EIO in DAVIS346 | EIO in DVXplorer | PL-EIO in DAVIS346 | PL-EIO in DVXplorer |
| :--- | :--- | :--- | :--- | :--- |
| Our results (raw trajectory) | link | link | link | link |
| vicon_aggressive_hdr | 0.66 | 0.65 | 0.62 | 0.62 |
| vicon_dark1 | 1.02 | 0.35 | 0.64 | 0.51 |
| vicon_dark2 | 0.49 | 0.41 | 0.30 | 0.38 |
| vicon_darktolight1 | 0.81 | 0.78 | 0.66 | 0.71 |
| vicon_darktolight2 | 0.42 | 0.44 | 0.51 | 0.56 |
| vicon_hdr1 | 0.59 | 0.30 | 0.67 | 0.47 |
| vicon_hdr2 | 0.74 | 0.37 | 0.45 | 0.22 |
| vicon_hdr3 | 0.72 | 0.69 | 0.74 | 0.47 |
| vicon_hdr4 | 0.37 | 0.26 | 0.37 | 0.27 |
| vicon_lighttodark1 | 0.29 | 0.42 | 0.33 | 0.43 |
| vicon_lighttodark2 | 0.79 | 0.73 | 0.53 | 0.67 |

## Public Datasets

### VECtor

Evaluation results on the VECtor dataset.

| Sequence Name | PL-EVIO | ESVIO |
| :--- | :--- | :--- |
| Our results (raw trajectory) | link | link |
| board-slow | --- | --- |
| corner-slow | 0.017 | 0.012 |
| robot-normal | 0.027 | 0.043 |
| robot-fast | 0.037 | 0.042 |
| desk-normal | 0.31 | 0.052 |
| desk-fast | 0.043 | 0.042 |
| sofa-normal | 0.058 | 0.047 |
| sofa-fast | 0.050 | 0.052 |
| mountain-normal | 0.32 | 0.044 |
| mountain-fast | 0.031 | 0.039 |
| hdr-normal | 0.12 | 0.017 |
| hdr-fast | 0.036 | 0.039 |
| corridors-dolly | 1.23 | 0.88 |
| corridors-walk | 0.72 | 0.34 |
| school-dolly | 3.11 | 0.53 |
| school-scooter | 1.39 | 0.63 |
| units-dolly | 13.82 | 8.12 |
| units-scooter | 11.66 | 6.64 |

### MVSEC

Evaluation results on MVSEC. Note that we use the whole, raw sequences without any timestamp modification, rather than running only part of each rosbag as ESVO does.

| Sequence Name | PL-EVIO | ESVIO |
| :--- | :--- | :--- |
| Our results (raw trajectory) | link | link |
| Indoor Flying 1 | 0.36 | 0.25 |
| Indoor Flying 2 | 0.30 | 0.30 |
| Indoor Flying 3 | 0.34 | 0.25 |
| Indoor Flying 4 | 0.44 | 0.46 |

### DAVIS 240C Datasets

Evaluation results on the DAVIS 240C datasets. We also cite the raw results of other event-based VIO works (EIO: purely event-based VIO; EVIO: event + image VIO) as follows:

| Sequence Name | CVPR17 EIO | BMVC17 EIO | UltimateSLAM EIO | UltimateSLAM EVIO | 3DV19 EIO | RAL22 EVIO | IROS22 EIO | Our IROS22 EIO | PL-EVIO |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Results (raw trajectory) | --- | --- | --- | --- | --- | --- | --- | link | --- |
| boxes_translation | 2.69 | 0.57 | 0.76 | 0.27 | 2.55 | 0.48 | 1.0 | 0.34 | 0.06 |
| hdr_boxes | 1.23 | 0.92 | 0.67 | 0.37 | 1.75 | 0.46 | 1.8 | 0.40 | 0.10 |
| boxes_6dof | 3.61 | 0.69 | 0.44 | 0.30 | 2.03 | 0.84 | 1.5 | 0.61 | 0.21 |
| dynamic_translation | 1.90 | 0.47 | 0.59 | 0.18 | 1.32 | 0.40 | 0.9 | 0.26 | 0.24 |
| dynamic_6dof | 4.07 | 0.54 | 0.38 | 0.19 | 0.52 | 0.79 | 1.5 | 0.43 | 0.48 |
| poster_translation | 0.94 | 0.89 | 0.15 | 0.12 | 1.34 | 0.35 | 1.9 | 0.40 | 0.54 |
| hdr_poster | 2.63 | 0.59 | 0.49 | 0.31 | 0.57 | 0.65 | 2.8 | 0.40 | 0.12 |
| poster_6dof | 3.56 | 0.82 | 0.30 | 0.28 | 1.50 | 0.35 | 1.2 | 0.26 | 0.14 |

Tips: the estimated and ground-truth trajectories are aligned with a 6-DoF transformation (in SE(3)) computed from the first 5 seconds [0-5 s] of the trajectory. The result is the mean position error (Euclidean distance in meters) expressed as a percentage of the total traveled distance of the ground truth. Unit: %/m (e.g., 0.24 means an average error of 0.24 m over 100 m of motion). By the way, both BMVC17 and UltimateSLAM release their raw results on this dataset here; however, the released results seem worse than the ones reported in the papers.
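This relative-error metric can be sketched as below. The sketch is illustrative only: it assumes the trajectories are already timestamp-associated and that the SE(3) alignment estimated on the first 5 s has already been applied to the estimated positions.

```python
import numpy as np

def mean_position_error_percent(est, gt):
    """Mean position error as a percentage of the ground-truth travel distance.

    est, gt: (N, 3) position arrays, timestamp-associated and already aligned
    (e.g. with an SE(3) transform estimated on the first 5 s of the trajectory).
    A return value of 0.24 means 0.24 m of average error per 100 m traveled.
    """
    # Mean Euclidean position error, in meters
    mean_err = np.linalg.norm(est - gt, axis=1).mean()
    # Total ground-truth path length: sum of segment lengths, in meters
    traveled = np.linalg.norm(np.diff(gt, axis=0), axis=1).sum()
    return 100.0 * mean_err / traveled
```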

### DSEC

ESIO/ESVIO evaluation results on DSEC. The raw trajectories can be found here.

*(figure: qualitative feature-tracking results of (a) ESIO and (b) ESVIO on the DSEC sequences)*

Since the DSEC dataset does not provide ground-truth 6-DoF poses, we only show qualitative results, i.e., the tracking performance of event-based and image-based features, of (a) ESIO and (b) ESVIO on the DSEC sequences zurich_city_04_a to zurich_city_04_f.