- LOAMs (LiDAR odometry & mapping)
- LOAM 2014: Lidar Odometry and Mapping in Real-time
- LOAM 2016: Low-drift and real-time lidar odometry and mapping
- V-LOAM 2015: Visual-lidar Odometry and Mapping: Low-drift, Robust, and Fast
- LeGO-LOAM 2018: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain
- Simplified LOAM (for ground vehicles) + loop closing
- LIO (lidar-IMU odometry)
- LIO 2019: Tightly Coupled 3D Lidar Inertial Odometry and Mapping
- LIO-SAM 2020: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
- LeGO-LOAM + IMU + GPS
- LVI-SAM 2021: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping
- LIO-SAM + Vision
- DeepLIO 2021: Deep LIDAR Inertial Sensor Fusion for Odometry Estimation
-
Unsupervised Learning of Lidar Features for Use in a Probabilistic Trajectory Estimator
- ICRA 2021 Best Student Paper Award.
- Tim Barfoot - Autonomous Space Robotics Lab, UoT
- Unsupervised Learning of Lidar Features
* Detector Score: the higher the score, the more accurate the model is in its detections.
Radar Odometry Combining Probabilistic Estimation and Unsupervised Feature Learning
-
Self-supervised Learning of LiDAR Odometry for Robotic Applications
- Robotic Systems Lab, ETH Zurich
- Geometric Loss
-
Dynamic Object Aware LiDAR SLAM based on Automatic Generation of Training Data
-
Self-Supervised Learning of Lidar Segmentation for Autonomous Indoor Navigation
UWB UoT, Prof. Angela Schoellig, Dynamic Systems Lab
2021: Learning-based Bias Correction for Time Difference of Arrival Ultra-wideband Localization of Resource-constrained Mobile Robots
- Due to the model nonlinearity, an M-estimation-based extended Kalman filter (EKF) is used to estimate the states, replacing the quadratic cost function of a conventional Kalman filter with a robust cost function ρ(·), e.g. Geman-McClure (G-M), Huber, or Cauchy (sketched below).
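A minimal sketch of how such a robust cost can enter the EKF measurement update via iteratively-reweighted noise inflation; the weight functions, tuning constants, and scalar-range assumption are illustrative, not the authors' exact implementation.

```python
import numpy as np

# Robust weight functions w(e) = rho'(e) / e for a normalized residual e.
# (Standard forms; the tuning constants are common defaults, not the paper's values.)
def geman_mcclure_weight(e):
    # rho(e) = (e^2/2) / (1 + e^2)  ->  w(e) = 1 / (1 + e^2)^2
    return 1.0 / (1.0 + e**2)**2

def huber_weight(e, c=1.345):
    a = abs(e)
    return 1.0 if a <= c else c / a

def cauchy_weight(e, c=2.385):
    return 1.0 / (1.0 + (e / c)**2)

def robust_ekf_update(x, P, z, h, H, R, weight_fn=geman_mcclure_weight):
    """One M-estimation EKF update for a scalar measurement z = h(x) + noise.

    The robust cost enters as an IRLS-style reweighting of the measurement
    noise: a small weight (large normalized residual) inflates R, so an
    outlying UWB range barely moves the state estimate.
    x: (n,) state, P: (n,n) covariance, H: (1,n) Jacobian, R: (1,1) noise.
    """
    y = z - h(x)                          # innovation (scalar)
    S = (H @ P @ H.T + R)[0, 0]           # innovation variance
    e = y / np.sqrt(S)                    # normalized residual
    w = weight_fn(e)                      # robust weight in (0, 1]
    R_eff = R / max(w, 1e-6)              # downweight suspicious measurements
    S_eff = H @ P @ H.T + R_eff
    K = P @ H.T @ np.linalg.inv(S_eff)    # (n,1) Kalman gain
    x_new = x + (K * y).ravel()
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```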
2020: Learning-based Bias Correction for Ultra-wideband Localization of Resource-constrained Mobile Robots
- Benchmark Dataset of Ultra-Wideband Radio Based UAV Positioning
- Automated Tuning of End-to-end Neural Flight Controllers for Autonomous Nano-drones
- Fully Onboard AI-powered Human-Drone Pose Estimation on Ultra-low Power Autonomous Flying Nano-UAVs
- Heading Estimation Using Ultra-wideband Received Signal Strength and Gaussian Processes
- Sensor Information Sharing Using a Producer-Consumer Algorithm on Small Vehicles
-
Statistical outlier rejection
- Using the robot’s dynamics to filter inconsistent UWB range measurements: the maximum distance a quadcopter can cover during time ∆t is d_max = ‖v·∆t + ½·a_max·∆t²‖
- Statistical hypothesis test: innovation covariance S = G·P·Gᵀ + R (in the EKF, use the χ² hypothesis test to determine whether a measurement innovation is likely to come from this distribution; see the sketch below)
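A compact sketch of both rejection tests; the scalar a_max, the gate probability, and the use of SciPy's χ² quantile are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np
from scipy.stats import chi2

def passes_dynamics_check(d_prev, d_meas, v, a_max, dt):
    """Reject a UWB range to a fixed anchor if it implies the quadcopter moved
    farther than the dynamics allow between consecutive measurements.
    Conservative bound on d_max = ||v*dt + 0.5*a_max*dt^2||, with a_max a scalar
    maximum-acceleration magnitude (assumed here)."""
    d_max = np.linalg.norm(v) * dt + 0.5 * a_max * dt**2
    return abs(d_meas - d_prev) <= d_max

def passes_chi2_gate(y, G, P, R, alpha=0.99):
    """Chi-squared innovation gate: accept the measurement if the normalized
    innovation squared y^T S^-1 y, with S = G P G^T + R, is below the chi2
    quantile for the measurement dimension."""
    y = np.atleast_1d(y).ravel()
    S = G @ P @ G.T + R
    nis = float(y @ np.linalg.solve(S, y))
    return nis <= chi2.ppf(alpha, df=len(y))
```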
-
NN Bias compensation: spatially varying bias of TWR and TDoA measurements
- The NN is trained exclusively with measurements whose actual bias is within a threshold of 0.7 m
- Mitigation of UWB TWR measurement errors (most existing approaches leverage probabilistic methods):
- Systematic biases
- NLOS
- Multipath propagation
- Since multipath and NLOS propagation effects depend on the particular indoor environment, we only use the NN to explicitly model the pose-dependent bias (see the sketch below).
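A minimal sketch of the pose-dependent bias-correction idea, assuming a small PyTorch MLP whose input features (relative anchor-tag position plus tag orientation) and size are guesses rather than the paper's exact design; the predicted bias is subtracted from the raw range before the EKF update.

```python
import torch
import torch.nn as nn

class UWBBiasNet(nn.Module):
    """Small MLP predicting the pose-dependent bias of a UWB measurement.

    Assumed input: relative anchor-tag position and tag orientation, which
    capture the antenna-pose dependence of the bias. Multipath/NLOS effects
    are environment-specific and are left to the robust estimator rather
    than the network.
    """
    def __init__(self, in_dim=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats):
        return self.net(feats).squeeze(-1)

def corrected_range(raw_range, feats, model):
    """Subtract the learned bias before feeding the range to the EKF."""
    with torch.no_grad():
        return raw_range - model(feats)

# Training only on measurements with ground-truth bias below 0.7 m (per the notes):
# mask = bias_gt.abs() < 0.7
# loss = torch.nn.functional.mse_loss(model(feats[mask]), bias_gt[mask])
```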
Visual Localization UoT, Prof. Jonathan Kelly, STARS Lab
- 2020: Self-Supervised Deep Pose Corrections for Robust Visual Odometry
- 2018: DPC-Net: Deep Pose Correction for Visual Localization
They replace the supervised loss of DPC-Net with a photometric reconstruction loss that does not require any external ground-truth pose information (a generic sketch follows). Note on self-supervised vs. unsupervised: there is still a supervised training signal in these methods.
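A generic sketch of a photometric reconstruction loss of this kind, assuming a pinhole camera with intrinsics K, a per-pixel target depth map, and a corrected relative pose T_src_tgt; it illustrates the idea rather than reproducing the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def photometric_loss(img_tgt, img_src, depth_tgt, T_src_tgt, K):
    """Self-supervised photometric reconstruction loss (sketch).

    Back-project target pixels with the predicted depth, transform them into
    the source frame with the corrected pose T_src_tgt (B,4,4), project with
    the intrinsics K (3,3), sample the source image there, and penalize the
    L1 difference with the target image. No ground-truth poses are needed.
    """
    B, _, H, W = img_tgt.shape
    device = img_tgt.device

    # Pixel grid in homogeneous coordinates, shape (3, H*W)
    v, u = torch.meshgrid(torch.arange(H, device=device, dtype=torch.float32),
                          torch.arange(W, device=device, dtype=torch.float32),
                          indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(3, -1)

    # Back-project to 3D points in the target camera frame
    cam_pts = torch.linalg.inv(K) @ pix                     # (3, H*W)
    cam_pts = cam_pts.unsqueeze(0) * depth_tgt.reshape(B, 1, -1)

    # Transform into the source frame and project
    R, t = T_src_tgt[:, :3, :3], T_src_tgt[:, :3, 3:]
    src_pts = R @ cam_pts + t
    proj = K @ src_pts
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)          # (B, 2, H*W)

    # Normalize to [-1, 1] for grid_sample and warp the source image
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1).reshape(B, H, W, 2)
    img_warped = F.grid_sample(img_src, grid, align_corners=True)

    return (img_tgt - img_warped).abs().mean()
```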
Tcorr corrects a classical VO estimate: T̂ = T_corr · T_vo.
To parameterize this correction, the network predicts Lie-algebra coordinates ξ ∈ ℝ⁶, and T_corr = exp(ξ^).
- In contrast to other methods that completely replace a classical visual estimator with a deep network, we propose an approach that uses a convolutional neural network to learn difficult-to-model corrections to the estimator from ground-truth training data.
- A novel loss function for learning SE(3) corrections based on a matrix Lie groups approach, with a natural formulation for balancing translation and rotation errors
- They use this loss to train a Deep Pose Correction network (DPC-Net) that predicts corrections for a particular estimator, sensor and environment.
- Other loss functions: based on the SE(3) geodesic distance.
- Their loss naturally balances translation and rotation error without requiring a hand-tuned scalar hyper-parameter (a sketch of such a loss follows below).
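A sketch of an SE(3) pose-correction loss in this spirit: the residual is the SE(3) log of the relative error between predicted and target transforms, weighted by a 6×6 information matrix (e.g. the inverse empirical covariance of the training corrections). The specific weighting here is an assumption, not DPC-Net's exact loss.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def so3_hat(phi):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -phi[2], phi[1]],
                     [phi[2], 0.0, -phi[0]],
                     [-phi[1], phi[0], 0.0]])

def se3_log(T):
    """SE(3) logarithmic map: 4x4 transform -> 6-vector xi = [rho, phi]."""
    R, t = T[:3, :3], T[:3, 3]
    phi = Rotation.from_matrix(R).as_rotvec()
    theta = np.linalg.norm(phi)
    Phi = so3_hat(phi)
    if theta < 1e-8:
        V = np.eye(3) + 0.5 * Phi                     # small-angle approximation
    else:
        V = (np.eye(3)
             + (1 - np.cos(theta)) / theta**2 * Phi
             + (theta - np.sin(theta)) / theta**3 * Phi @ Phi)
    rho = np.linalg.solve(V, t)
    return np.concatenate([rho, phi])

def pose_correction_loss(T_pred, T_target, Sigma_inv):
    """Geodesic-style SE(3) loss: xi = log(T_target^-1 @ T_pred),
    loss = xi^T Sigma_inv xi. The 6x6 information matrix jointly weights
    translation and rotation, so no scalar trade-off has to be hand-tuned."""
    xi = se3_log(np.linalg.inv(T_target) @ T_pred)
    return float(xi @ Sigma_inv @ xi)
```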
Learning error models for graph SLAM
[Stereo Visual Odometry Pose Correction through Unsupervised Deep Learning](https://starslab.ca/)
1- Learning Wheel Odometry and IMU Errors for Localization
2- A Smooth Representation of Belief over SO(3) for Deep Rotation Learning with Uncertainty
3- The Complex-Step Derivative Approximation on Matrix Lie Groups
4- Probabilistic Regression of Rotations using Quaternion Averaging and a Deep Multi-Headed Network
5- Improving Learning-based Ego-motion Estimation with Homomorphism-based Losses and Drift Correction
6- A Stacked LSTM-Based Approach for Reducing Semantic Pose Estimation Error
- DeepLIO