The full list of workshops is available at https://www.ieee-icra.org/workshop.html. A few highlights:
- Opportunities and Challenges with Autonomous Racing
- Robust Perception For Autonomous Field Robots in Challenging Environments
- Visual-Inertial Navigation Systems
- Resilient and Long-Term Autonomy for Aerial Robotic Systems
- The Flying Cartographer: Using a Graph SLAM Method for Long-Term UAV Navigation
- First International Workshop on Perception and Action in Highly-Dynamic Environments
The papers below are grouped roughly by topic, from LiDAR and visual odometry and SLAM through semantic segmentation to depth estimation and completion.

- CodeVIO: Visual-Inertial Odometry with Learned Optimizable Dense Depth
- Interval-Based Visual-LiDAR Sensor Fusion
- OmniDet: Surround View Cameras Based Multi-Task Visual Perception Network for Autonomous Driving
- Unsupervised Learning of Lidar Features for Use in a Probabilistic Trajectory Estimator
- VIODE: A Simulated Dataset to Address the Challenges of Visual-Inertial Odometry in Dynamic Environments
- PSF-LO: Parameterized Semantic Features Based Lidar Odometry
- Self-supervised Learning of LiDAR Odometry for Robotic Applications
- ENCODE: A dEep poiNt Cloud ODometry NEtwork
- Automatic Hyper-Parameter Tuning for Black-Box LiDAR Odometry
- R-LOAM: Improving LiDAR Odometry and Mapping with Point-To-Mesh Features of a Known 3D Reference Object
- MULLS: Versatile LiDAR SLAM Via Multi-Metric Linear Least Square
- Dynamic Object Aware LiDAR SLAM Based on Automatic Generation of Training Data
- LiTAMIN2: Ultra Light LiDAR-Based SLAM Using Geometric Approximation Applied with KL-Divergence
- Intensity-SLAM: Intensity Assisted Localization and Mapping for Large Scale Environment
- SA-LOAM: Semantic-Aided LiDAR SLAM with Loop Closure
- Inertial Aided 3D LiDAR SLAM with Hybrid Geometric Primitives in Large-Scale Environments
- UPSLAM: Union of Panoramas SLAM
- PocoNet: SLAM-Oriented 3D LiDAR Point Cloud Online Compression Network
- LoLa-SLAM: Low-Latency LiDAR SLAM Using Continuous Scan Slicing
- Greedy-Based Feature Selection for Efficient LiDAR SLAM
- 2D Laser SLAM with Closed Shape Features: Fourier Series Parameterization and Submap Joining
- Deep Online Correction for Monocular Visual Odometry
- OV2SLAM: A Fully Online and Versatile Visual SLAM for Real-Time Applications
- Semantic SLAM with Autonomous Object-Level Data Association
- Asynchronous Multi-View SLAM
- Distributed Variable-Baseline Stereo SLAM from Two UAVs
- SD-DefSLAM: Semi-Direct Monocular SLAM for Deformable and Intracorporeal Scenes
- TT-SLAM: Dense Monocular SLAM for Planar Environments
- Hybrid Bird's-Eye Edge Based Semantic Visual SLAM for Automated Valet Parking
- A Front-End for Dense Monocular SLAM Using a Learned Outlier Mask Prior
- Avoiding Degeneracy for Monocular Visual SLAM with Point and Line Features
- DefSLAM: Tracking and Mapping of Deforming Scenes from Monocular Sequences
- DOT: Dynamic Object Tracking for Visual SLAM
- CamVox: A Low-Cost and Accurate Lidar-Assisted Visual SLAM System
- Visual-Laser-Inertial SLAM Using a Compact 3D Scanner for Confined Space
- LIRO: Tightly Coupled Lidar-Inertia-Ranging Odometry
- KFS-LIO: Key-Feature Selection for Lightweight Lidar Inertial Odometry
- FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter
- Range-Visual-Inertial Odometry: Scale Observability without Excitation
- Collaborative Visual Inertial SLAM for Multiple Smart Phones
- Cooperative Visual-Inertial Odometry
- Bidirectional Trajectory Computation for Odometer-Aided Visual-Inertial SLAM
- Optimization-Based Visual-Inertial SLAM Tightly Coupled with Raw GNSS Measurements
- Revisiting Visual-Inertial Structure-From-Motion for Odometry and SLAM Initialization
- Direct Sparse Stereo Visual-Inertial Global Odometry
- LVI-SAM: Tightly-Coupled Lidar-Visual-Inertial Odometry Via Smoothing and Mapping
- Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry
- Multi-Parameter Optimization for a Robust RGB-D SLAM System
- RGB-D SLAM with Structural Regularities
- Towards Real-Time Semantic RGB-D SLAM in Dynamic Environments
- LatentSLAM: Unsupervised Multi-Sensor Representation Learning for Localization and Mapping
- Tactile SLAM: Real-Time Inference of Shape and Pose from Planar Pushing
- ManhattanSLAM: Robust Planar Tracking and Mapping Leveraging Mixture of Manhattan Frames
- Robust Underwater Visual SLAM Fusing Acoustic Sensing
- Distributed Client-Server Optimization for SLAM with Limited On-Device Resources
- Invariant EKF Based 2D Active SLAM with Exploration Task
- Compositional and Scalable Object SLAM
- Markov Parallel Tracking and Mapping for Probabilistic SLAM
- Multi-Session Underwater Pose-Graph SLAM Using Inter-Session Opti-Acoustic Two-View Factor
- Online Range-Based SLAM Using B-Spline Surfaces
- RigidFusion: Robot Localisation and Mapping in Environments with Large Dynamic Rigid Objects
- Connecting Semantic Building Information Models and Robotics: An Application to 2D LiDAR-Based Localization
- CAROM - Vehicle Localization and Traffic Scene Reconstruction from Monocular Cameras on Road Infrastructures
- Semantic Reinforced Attention Learning for Visual Place Recognition
- Real-Time Semantic Segmentation with Fast Attention
- YolactEdge: Real-Time Instance Segmentation on the Edge
- Learning Panoptic Segmentation from Instance Contours
- GPU-Efficient Dense Convolutional Network for Real-Time Semantic Segmentation
- Target-Targeted Domain Adaptation for Unsupervised Semantic Segmentation
- S3Net: 3D LiDAR Sparse Semantic Segmentation Network
- Self-Supervised Learning of Lidar Segmentation for Autonomous Indoor Navigation
- Lite-HDSeg: LiDAR Semantic Segmentation Using Lite Harmonic Dense Convolutions
- LiDARNet: A Boundary-Aware Domain Adaptation Model for Point Cloud Semantic Segmentation
- Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis
- A Benchmark for LiDAR-Based Panoptic Segmentation Based on KITTI
- Neighborhood Spatial Aggregation Based Efficient Uncertainty Estimation for Point Cloud Semantic Segmentation
- Ground-Aware Monocular 3D Object Detection for Autonomous Driving
- MonoSOD: Monocular Salient Object Detection Based on Predicted Depth
- UniFuse: Unidirectional Fusion for 360 Panorama Depth Estimation
- Multimodal Scale Consistency and Awareness for Monocular Self-Supervised Depth Estimation
- Combining Events and Frames Using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction
- Self-Supervised Learning for Monocular Depth Estimation on Minimally Invasive Surgery Scenes
- Depth Estimation under Motion with Single Pair Rolling Shutter Stereo Images
- Learning a Geometric Representation for Data-Efficient Depth Estimation Via Gradient Field and Contrastive Loss
- Volumetric Propagation Network: Stereo-LiDAR Fusion for Long-Range Depth Estimation
- PLG-IN: Pluggable Geometric Consistency Loss with Wasserstein Distance in Monocular Depth Estimation
- Bidirectional Attention Network for Monocular Depth Estimation
- Deep Multi-View Depth Estimation with Predicted Uncertainty
- MultiViewStereoNet: Fast Multi-View Stereo Depth Estimation Using Incremental Viewpoint-Compensated Feature Extraction
- Toward Robust and Efficient Online Adaptation for Deep Stereo Depth Estimation
- PENet: Towards Precise and Efficient Image Guided Depth Completion
- DenseLiDAR: A Real-Time Pseudo Dense Depth Guided Depth Completion Network
- Robust Monocular Visual-Inertial Depth Completion for Embedded Systems
- Learning Topology from Synthetic Data for Unsupervised Depth Completion
- SelfDeco: Self-Supervised Monocular Depth Completion in Challenging Indoor Environments
- An Adaptive Framework for Learning Unsupervised Depth Completion
- MDANet: Multi-Modal Deep Aggregation Network for Depth Completion
- Stereo-Augmented Depth Completion from a Single RGB-LiDAR Image
- Self-Guided Instance-Aware Network for Depth Completion and Enhancement
- Linear Inverse Problem for Depth Completion with RGB Image and Sparse LIDAR Fusion