1. An improved SLAM based on RK-VIF: Vision and inertial information fusion via Runge-Kutta method
- Authors
- Jia-shan Cui, Fei Li, Dongzhu Feng, Qi-chen Tian, Cong Li, and Fang-rui Zhang
- Subjects
- State variable, Inertial frame of reference, Computer science, Reprojection error, Simultaneous localization and mapping, Runge–Kutta methods, Inertial measurement unit, Sliding window, Computer vision
- Abstract
Simultaneous Localization and Mapping (SLAM) is the foundation of autonomous navigation for unmanned systems. Existing SLAM solutions are mainly divided into visual SLAM (vSLAM), which uses a camera, and lidar SLAM, which uses a lidar. Because pure visual SLAM suffers from shortcomings such as low positioning accuracy, this paper proposes a visual-inertial information fusion SLAM based on Runge-Kutta-improved pre-integration. First, the Inertial Measurement Unit (IMU) measurements between two adjacent keyframes are pre-integrated at the front end to provide IMU constraints for visual-inertial information fusion. In particular, to improve the pre-integration accuracy, the paper uses the Runge-Kutta algorithm instead of Euler integration to calculate the pre-integration value at the next moment. Then, the IMU pre-integration value is used as the initial value of the system state at the current frame. The visual reprojection error and the IMU pre-integration error are combined to optimize state variables such as speed and pose and to recover the three-dimensional coordinates of map points. Finally, a sliding window is set to jointly optimize the map-point coordinates and the state variables. The experiments consist of a dataset experiment and a complex indoor-environment experiment. The results show that, compared with pure visual SLAM and existing visual-inertial fusion SLAM, the proposed method achieves higher positioning accuracy.
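The core change the abstract describes, replacing Euler integration with a Runge-Kutta step when propagating the pre-integrated IMU state, can be illustrated on a 1-D toy state. This is a minimal sketch under stated assumptions: the synthetic acceleration signal `a(t) = cos(t)` and all function names are illustrative, not the paper's actual pre-integration terms.

```python
import math

def f(t, y):
    # State derivative for a 1-D toy "IMU": y = (position, velocity),
    # with a known acceleration a(t) = cos(t) standing in for an IMU sample.
    p, v = y
    return (v, math.cos(t))

def euler_step(t, y, dt):
    # First-order (Euler) update, as in standard pre-integration.
    dp, dv = f(t, y)
    return (y[0] + dp * dt, y[1] + dv * dt)

def rk4_step(t, y, dt):
    # Classical fourth-order Runge-Kutta update over the same interval.
    def add(y, k, s):
        return (y[0] + k[0] * s, y[1] + k[1] * s)
    k1 = f(t, y)
    k2 = f(t + dt / 2, add(y, k1, dt / 2))
    k3 = f(t + dt / 2, add(y, k2, dt / 2))
    k4 = f(t + dt, add(y, k3, dt))
    return (y[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def integrate(step, t0, t1, dt):
    # Propagate the state from t0 to t1 with a fixed step size.
    t, y = t0, (0.0, 0.0)
    while t < t1 - 1e-9:
        y = step(t, y, dt)
        t += dt
    return y

dt, T = 0.1, 1.0
p_true = 1.0 - math.cos(T)  # analytic position for a(t) = cos(t), p(0)=v(0)=0
p_euler = integrate(euler_step, 0.0, T, dt)[0]
p_rk4 = integrate(rk4_step, 0.0, T, dt)[0]
print(f"Euler error: {abs(p_euler - p_true):.2e}, RK4 error: {abs(p_rk4 - p_true):.2e}")
```

For the same step size, the fourth-order step shrinks the integration error by several orders of magnitude, which is the motivation for using it between keyframes where IMU samples arrive at a fixed rate.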
- Published
- 2023