
A multi-sensor fusion framework with tight coupling for precise positioning and optimization.

Authors :
Xia, Yu
Wu, Hongwei
Zhu, Liucun
Qi, Weiwei
Zhang, Shushu
Zhu, Junwu
Source :
Signal Processing. Apr 2024, Vol. 217.
Publication Year :
2024

Abstract

In the dynamic landscape of artificial intelligence and robotics, the pursuit of accurate positioning in mobile robots has intensified. This research addresses the limitations of single-sensor SLAM (Simultaneous Localization and Mapping) techniques in complex settings by harnessing the collective strengths of LiDAR (Light Detection And Ranging), Camera, IMU (Inertial Measurement Unit), and GNSS (Global Navigation Satellite System) sensors. The proposed multi-sensor tightly-coupled SLAM framework integrates point-line feature-based laser–visual–inertial odometry, visual–laser fusion loop closure detection, and factor graph-based back-end optimization. Within the visual–inertial subsystem, an advanced LSD (Line Segment Detector) feature extraction strategy is introduced, incorporating point-line fusion to enhance visual line features. Additionally, the laser point cloud is projected onto the camera coordinate system, establishing depth associations with visual features. To strengthen the robustness of the visual–inertial subsystem in low-texture environments, camera poses are optimized with a sliding-window bundle adjustment method. In the laser–inertial subsystem, IMU preintegration mitigates laser point cloud distortion. Extracting edge and plane features, coupled with frame-to-local-map matching, improves matching efficiency while reducing computational cost. Together, these components form the laser–visual–inertial odometry fusion system. To overcome the limitations of standalone visual and laser-based loop closure detection, a dual-loop closure method based on visual–laser fusion is proposed; it leverages the DBoW2 bag-of-words model, complemented by temporal–spatial consistency checks, to improve detection efficiency and accuracy. The integration of GNSS factors imparts global constraints for expansive outdoor scenarios. Factor graph-based back-end optimization over the laser–visual–inertial odometry factors, visual–inertial odometry factors, IMU preintegration factors, loop closure factors, and GNSS factors yields precise global pose estimates and high-fidelity point cloud maps. Evaluated on the M2DGR dataset and a mobile robot platform, the proposed method outperforms the state-of-the-art LIO-SAM technique, reducing the root mean square error of absolute pose estimation by 2.86 m and 3.23 m across different environments. The approach is particularly effective in outdoor scenarios, improving the precision and resilience of SLAM algorithms for mobile robots.

• Introduces a tightly coupled multi-sensor SLAM framework that harnesses the synergies of LiDAR, Camera, IMU, and GNSS.
• Laser–visual–inertial odometry fusion system optimizes pose estimation, overcoming limitations of standalone methods.
• Visual–laser fusion and GNSS factors improve loop closure efficiency, contributing to precise global pose estimation.
• Refines laser–visual–inertial odometry, visual–inertial odometry, IMU preintegration, loop closure, and GNSS factors.
• The proposed methodology excels in outdoor scenarios, elevating precision and resilience of SLAM algorithms for mobile robots.

[ABSTRACT FROM AUTHOR]
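To make the factor-graph back-end described in the abstract concrete, the following minimal sketch uses the GTSAM Python bindings (the library LIO-SAM-style systems commonly build on) to fuse odometry between-factors, a loop-closure factor, and GNSS position factors in a single pose-graph optimization. This is not the authors' implementation: the keyframe count, noise levels, and measurements below are invented for illustration, and the fused laser–visual–inertial odometry is reduced to simple relative-pose factors.

# Hypothetical sketch of a factor-graph back-end combining odometry,
# loop-closure, and GNSS constraints; all values are illustrative.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Noise models: 6-DoF (rotation, translation) for pose factors, 3-DoF for GNSS.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01] * 6))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 0.1, 0.1, 0.1]))
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02] * 6))
gnss_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1.0, 1.0, 2.0]))

# Anchor the first keyframe pose at the origin.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
initial.insert(X(0), gtsam.Pose3())

# Odometry factors standing in for the fused laser-visual-inertial front end:
# each keyframe is assumed displaced about 1 m along x from the previous one.
step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
for i in range(1, 5):
    graph.add(gtsam.BetweenFactorPose3(X(i - 1), X(i), step, odom_noise))
    # Deliberately drifted initial guess, to be corrected by the optimization.
    initial.insert(X(i), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.1 * i, 0.05 * i, 0.0)))

# Loop-closure factor (as a dual visual-laser loop detector might supply)
# asserting that keyframe 4 lies 4 m ahead of keyframe 0.
loop_rel = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(4.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(X(0), X(4), loop_rel, loop_noise))

# GNSS factors: absolute position fixes on selected keyframes.
graph.add(gtsam.GPSFactor(X(0), gtsam.Point3(0.0, 0.0, 0.0), gnss_noise))
graph.add(gtsam.GPSFactor(X(4), gtsam.Point3(4.0, 0.0, 0.0), gnss_noise))

# Jointly optimize all factors for globally consistent keyframe poses.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
for i in range(5):
    print(i, result.atPose3(X(i)).translation())

Optimizing these heterogeneous factors jointly is what allows the loop-closure and GNSS constraints to pull drifted odometric estimates back toward a globally consistent trajectory, which is the role the back-end plays in the framework described above.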

Details

Language :
English
ISSN :
0165-1684
Volume :
217
Database :
Academic Search Index
Journal :
Signal Processing
Publication Type :
Academic Journal
Accession number :
174545800
Full Text :
https://doi.org/10.1016/j.sigpro.2023.109343