6,430 results on '"SLAM"'
Search Results
2. SGS-SLAM: Semantic Gaussian Splatting for Neural Dense SLAM
- Author
-
Li, Mingrui, Liu, Shuhong, Zhou, Heng, Zhu, Guohao, Cheng, Na, Deng, Tianchen, and Wang, Hongyu
- Published
- 2025
- Full Text
- View/download PDF
3. RGBD GS-ICP SLAM
- Author
-
Ha, Seongbo, Yeon, Jiung, and Yu, Hyeonwoo
- Published
- 2025
- Full Text
- View/download PDF
4. Deep Patch Visual SLAM
- Author
-
Lipson, Lahav, Teed, Zachary, and Deng, Jia
- Published
- 2025
- Full Text
- View/download PDF
5. Ground Plane Synchronization in VR Applications Using Indoor Robots for Enhancing Immersion
- Author
-
Divya, Udayan J., Hrishikesh, P., Sylesh, Nithin, Menath, Madhav M., and Yadukrishnan
- Published
- 2025
- Full Text
- View/download PDF
6. A Compact Handheld Sensor Package with Sensor Fusion for Comprehensive and Robust 3D Mapping
- Author
-
Wei, Peng, Fu, Kaiming, Villacres, Juan, Ke, Thomas, Krachenfels, Kay, Stofer, Curtis Ryan, Bayati, Nima, Gao, Qikai, Zhang, Bill, Vanacker, Eric, and Kong, Zhaodan
- Subjects
Data Management and Data Science ,Geomatic Engineering ,Information and Computing Sciences ,Engineering ,3D mapping ,sensor fusion ,SLAM ,thermal camera ,Analytical Chemistry ,Environmental Science and Management ,Ecology ,Distributed Computing ,Electrical and Electronic Engineering ,Electrical engineering ,Electronics ,sensors and digital hardware ,Environmental management ,Distributed computing and systems software - Abstract
This paper introduces an innovative approach to 3D environmental mapping through the integration of a compact, handheld sensor package with a two-stage sensor fusion pipeline. The sensor package, incorporating LiDAR, IMU, RGB, and thermal cameras, enables comprehensive and robust 3D mapping of various environments. By leveraging Simultaneous Localization and Mapping (SLAM) and thermal imaging, our solution offers good performance in conditions where global positioning is unavailable and in visually degraded environments. The sensor package runs a real-time LiDAR-Inertial SLAM algorithm, generating a dense point cloud map that accurately reconstructs the geometric features of the environment. Following the acquisition of that point cloud, we post-process these data by fusing them with images from the RGB and thermal cameras to produce a detailed, color-enriched 3D map that is useful and adaptable to different mission requirements. We demonstrated our system in a variety of scenarios, from indoor to outdoor conditions, and the results showcased the effectiveness and applicability of our sensor package and fusion pipeline. This system can serve a wide range of applications, from autonomous navigation to smart agriculture, and has the potential to deliver substantial benefits across diverse fields.
- Published
- 2024
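The fusion pipeline in record 6 colour-enriches the LiDAR point cloud by projecting points into the RGB and thermal images. The following is a minimal sketch of such a pinhole-projection colorization step, not the authors' implementation; the transform, intrinsics, and function names are illustrative assumptions.

```python
import numpy as np

def colorize_points(points_lidar, image, T_cam_lidar, K):
    """Assign an RGB color to each LiDAR point by projecting it into a camera image.

    points_lidar : (N, 3) points in the LiDAR frame
    image        : (H, W, 3) RGB (or thermal) image
    T_cam_lidar  : (4, 4) homogeneous transform from LiDAR frame to camera frame (assumed known)
    K            : (3, 3) camera intrinsic matrix
    """
    N = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((N, 1))])        # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                # points in the camera frame

    in_front = pts_cam[:, 2] > 0.1                            # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                               # perspective division

    H, W = image.shape[:2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    colors = np.zeros((N, 3), dtype=image.dtype)
    colors[valid] = image[v[valid], u[valid]]                 # sample the pixel color
    return colors, valid
```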
7. Laser-inertial tightly coupled SLAM system for indoor degraded environments.
- Author
-
Li, Sen, Guan, He, Ma, Xiaofei, Liu, Hezhao, Zhang, Dan, Wu, Zeqi, and Li, Huaizhou
- Abstract
Purpose: To address the issues of low localization and mapping accuracy, as well as map ghosting and drift, in indoor degraded environments using light detection and ranging-simultaneous localization and mapping (LiDAR SLAM), a real-time localization and mapping system integrating filtering and graph optimization theory is proposed. By incorporating filtering algorithms, the system effectively reduces localization errors and environmental noise. In addition, leveraging graph optimization theory, it optimizes the poses and positions throughout the SLAM process, further enhancing map accuracy and consistency. The study resolves common problems such as map ghosting and drift, thereby achieving more precise real-time localization and mapping results. Design/methodology/approach: The system consists of three main components: point cloud data preprocessing, tightly coupled inertial odometry based on filtering, and backend pose graph optimization. First, point cloud data preprocessing uses the random sample consensus algorithm to segment the ground and extract ground model parameters, which are then used to construct ground constraint factors in backend optimization. Second, the frontend tightly coupled inertial odometry uses iterative error-state Kalman filtering, where the LiDAR odometry serves as observations and the inertial measurement unit preintegration results as predictions. By constructing a joint function, filtering fusion yields a more accurate LiDAR-inertial odometry. Finally, the backend incorporates graph optimization theory, introducing loop closure factors, ground constraint factors and odometry factors from frame-to-frame matching as constraints. This forms a factor graph that optimizes the map's poses. The loop closure factor uses an improved scan-context-based loop closure detection algorithm for place recognition, reducing the rate of environmental misidentification. Findings: A SLAM system integrating filtering and graph optimization techniques has been proposed, demonstrating improvements of 35.3%, 37.6% and 40.8% in localization and mapping accuracy compared to A-LOAM, lightweight and ground-optimized lidar odometry and mapping (LeGO-LOAM), and LiDAR inertial odometry via smoothing and mapping (LIO-SAM), respectively. The system exhibits enhanced robustness in challenging environments. Originality/value: This study introduces a frontend laser-inertial odometry tightly coupled filtering method and a backend graph optimization method improved by loop closure detection. This approach demonstrates superior robustness in indoor localization and mapping accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
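Record 7's preprocessing stage segments the ground with random sample consensus (RANSAC) and extracts the plane parameters later used as ground-constraint factors. A minimal NumPy RANSAC plane fit in that spirit is sketched below; the iteration count and distance threshold are illustrative, not the paper's values.

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.15, rng=None):
    """Fit a ground plane n.p + d = 0 to a point cloud with RANSAC.

    Returns the plane parameters (n, d) and a boolean inlier mask.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-8:                      # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        dist = np.abs(points @ n + d)        # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```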
8. A collaborative SLAM method for dual payload-carrying UAVs in denied environments.
- Author
-
Rao, Jinjun, Liu, Nengwei, Chen, Jinbo, Liu, Mei, Lei, Jingtao, and Giernacki, Wojciech
- Abstract
In this paper, we investigate the problem of collaborative localization in search tasks in denied environments, particularly when traditional visual-inertial localization techniques reach their limits. A novel fusion localization method is proposed. It couples dual payload-carrying unmanned aerial vehicles (UAVs) using collaborative simultaneous localization and mapping (SLAM) techniques. This method aims to improve the system's search range and payload capacity. The paper utilizes SLAM technology to achieve self-motion estimation, reducing dependence on external devices. It incorporates a collaborative SLAM backend that provides the necessary information for system navigation, path planning, and motion control, ensuring consistent localization coordinates among the UAVs. Then, a joint localization optimization method based on Kalman filtering is introduced. By fusing the localization information from the visual sensors located beneath the UAVs and using the baseline variation between the two UAVs as a reference, the method employs a recursive prediction approach to jointly optimize the self-estimated states and the collaborative SLAM state estimates. Experimental validation demonstrates a 31.6% improvement in localization accuracy in complex tasks compared to a non-fusion localization method. Furthermore, to address the cooperative trajectory tracking problem of UAVs after system path planning, a baseline-predicting fuzzy Proportional-Integral-Derivative flight controller is designed. Compared to conventional methods, this controller takes into account delays and system oscillations, achieving better tracking performance and dynamic adjustments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
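Record 8 fuses the two UAVs' self-estimated states with the baseline variation between them via Kalman filtering. As a toy illustration of using a measured baseline length as an EKF observation over the stacked UAV positions (the state layout, noise values and guard threshold are assumptions, not the paper's formulation):

```python
import numpy as np

def baseline_ekf_update(x, P, baseline_meas, sigma_baseline=0.05):
    """EKF measurement update using the inter-UAV baseline length as the observation.

    x : (6,) stacked 3D positions of UAV1 and UAV2, [p1, p2]
    P : (6, 6) state covariance
    baseline_meas : measured distance between the two UAVs (e.g. from the visual sensors)
    """
    p1, p2 = x[:3], x[3:]
    diff = p1 - p2
    dist = np.linalg.norm(diff)
    if dist < 1e-6:                                 # degenerate geometry: skip the update
        return x, P

    # Jacobian of h(x) = ||p1 - p2|| with respect to the stacked state
    H = np.hstack([diff / dist, -diff / dist]).reshape(1, 6)
    R = np.array([[sigma_baseline ** 2]])

    y = baseline_meas - dist                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x_new = x + (K * y).ravel()
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new
```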
9. Graph-based adaptive weighted fusion SLAM using multimodal data in complex underground spaces.
- Author
-
Lin, Xiaohu, Yang, Xin, Yao, Wanqiang, Wang, Xiqi, Ma, Xiongwei, and Ma, Bolin
- Subjects
- *
OPTICAL radar , *LIDAR , *STANDARD deviations , *UNDERGROUND areas , *SUBWAY tunnels - Abstract
Accurate and robust simultaneous localization and mapping (SLAM) is essential for autonomous exploration, unmanned transportation, and emergency rescue operations in complex underground spaces. However, the demanding conditions of underground spaces, characterized by poor lighting, weak textures, and high dust levels, pose substantial challenges to SLAM. To address this issue, we propose a graph-based adaptive weighted fusion SLAM (AWF-SLAM) for autonomous robots to achieve accurate and robust SLAM in complex underground spaces. First, a contrast limited adaptive histogram equalization (CLAHE) that combines adaptive gamma correction with weighting distribution (AGCWD) in hue, saturation, and value (HSV) space is proposed to enhance the brightness and contrast of visual images in underground spaces. Then, the performance of each sensor is evaluated using a consistency check based on the Mahalanobis distance to select the optimal configuration for specific conditions. Subsequently, we elaborate an adaptive weighting function model, which leverages the residuals from point cloud matching and the inlier rate of image matching. This model fuses data from light detection and ranging (LiDAR), inertial measurement unit (IMU), and cameras dynamically, enhancing the flexibility of the fusion process. Finally, multiple primitive features are adaptively fused within the factor graph optimization, utilizing a sliding window approach. Extensive experiments were conducted to evaluate the performance of AWF-SLAM using a self-designed mobile robot in underground parking lots, excavated subway tunnels, and complex underground coal mine spaces based on reference trajectories and reconstructions provided by state-of-the-art methods. Satisfactorily, the root mean square error (RMSE) of trajectory translation is only 0.17 m, and the mean relative robustness distance between the point cloud maps reconstructed by AWF-SLAM and the reference point cloud map is lower than 0.09 m. These results indicate a substantial improvement in the accuracy and robustness of SLAM in complex underground spaces. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
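Record 9 brightens underground imagery by combining CLAHE with adaptive gamma correction in HSV space before feature matching. The sketch below shows a basic CLAHE-plus-gamma enhancement of the HSV value channel with OpenCV; the clip limit, tile size and fixed gamma are placeholders for the paper's adaptive weighting-distribution scheme.

```python
import cv2
import numpy as np

def enhance_underground_image(bgr, clip_limit=3.0, tile_grid=(8, 8), gamma=0.6):
    """Brighten a low-light image: CLAHE plus gamma correction on the HSV value channel."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    v = clahe.apply(v)                                   # local contrast enhancement

    lut = np.array([np.clip(255.0 * (i / 255.0) ** gamma, 0, 255)
                    for i in range(256)], dtype=np.uint8)
    v = cv2.LUT(v, lut)                                  # gamma < 1 brightens dark regions

    return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)
```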
10. Robust and correspondence-free point cloud registration: an extended approach with multiple hypotheses evaluation.
- Author
-
Di Lauro, Federica, Sorrenti, Domenico Giorgio, and Fontana, Simone
- Abstract
Point cloud registration is a fundamental problem in robotics, critical for tasks such as localization and mapping. Most approaches to this problem use feature-based techniques. However, these approaches have issues when dealing with unstructured environments where meaningful features are difficult to extract. Recently, an innovative global point cloud registration algorithm, PHASER, which does not rely on geometric features or point correspondences, has been introduced. It leverages Fourier transforms to identify the optimal rigid transform that maximizes cross-correlation between source and target point clouds. PHASER can also incorporate additional data channels, like LiDAR intensity, to enhance registration results. Because it does not rely on local features and because of its ability to exploit additional data, PHASER is particularly useful when dealing with very noisy point clouds or with many outliers. For this reason, we propose an extension to PHASER that considers multiple plausible rototranslation hypotheses. Our extended approach outperforms the original PHASER algorithm, especially in challenging scenarios where point clouds are widely separated. We validate its effectiveness on the DARPA SubT, and the Newer College datasets, showcasing its potential for improving registration accuracy in complex environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
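Record 10 builds on PHASER, which registers point clouds by maximizing cross-correlation in the Fourier domain rather than matching features. As a much-simplified illustration of that correlation-maximization idea only (PHASER itself also recovers rotation on the sphere), 2D phase correlation recovering a translation between two grids:

```python
import numpy as np

def phase_correlation_shift(grid_a, grid_b):
    """Estimate the integer (row, col) shift between two equally sized 2D grids."""
    Fa = np.fft.fft2(grid_a)
    Fb = np.fft.fft2(grid_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12          # whiten to pure phase
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above half the grid size to negative shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# Example: b is a shifted by (3, -5); the correlation peak recovers the shift
a = np.zeros((64, 64)); a[20:30, 20:30] = 1.0
b = np.roll(a, (3, -5), axis=(0, 1))
print(phase_correlation_shift(b, a))   # -> (3, -5)
```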
11. Enhancing SLAM efficiency: a comparative analysis of B-spline surface mapping and grid-based approaches.
- Author
-
Kanna, B. Rajesh, AV, Shreyas Madhav, Hemalatha, C. Sweetlin, and Rajagopal, Manoj Kumar
- Subjects
GRIDS (Cartography) ,ENVIRONMENTAL mapping ,AUTONOMOUS robots ,MOBILE robots ,GRID cells ,DEEP learning - Abstract
Environmental mapping serves as a crucial element in Simultaneous Localization and Mapping (SLAM) algorithms, playing a pivotal role in ensuring the accurate representation necessary for autonomous robot navigation guided by SLAM. Current SLAM systems predominantly rely on grid-based map representations, encountering challenges such as measurement discretization for cell fitting and grid map interpolation for online posture prediction. Splines present a promising alternative, capable of mitigating these issues while maintaining computational efficiency. This paper delves into the efficiency disparities between B-Spline surface mapping and discretized cell-based approaches, such as grid mapping, within indoor environments. B-Spline Online SLAM and FastSLAM, utilizing Rao-Blackwellized Particle Filter (RBPF), are employed to achieve range-based mapping of the unknown 2D environment. The system incorporates deep learning networks in the B-Spline curve estimation process to compute parameterizations and knot vectors. The research implementation utilizes the Intel Research Lab benchmark dataset to conduct a comprehensive qualitative and quantitative analysis of both approaches. The B-Spline surface approach demonstrates significantly superior performance, evidenced by low error metrics, including an average squared translational error of 0.0016 and an average squared rotational error of 1.137. Additionally, comparative analysis with Vision Benchmark Suite demonstrates robustness across different environments, highlighting the effectiveness of B-Spline SLAM for real-world applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
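Record 11 replaces discretized grid cells with B-spline surfaces as the map primitive. Purely to illustrate the underlying representation (not the paper's RBPF pipeline or its deep-learning knot estimation), the sketch below fits and evaluates a smoothing B-spline over noisy 2D contour points with SciPy; the sample data and smoothing factor are invented.

```python
import numpy as np
from scipy import interpolate

# Noisy 2D points sampled along a wall-like contour (illustrative data)
t = np.linspace(0, np.pi, 80)
x = 5.0 * np.cos(t) + np.random.normal(0, 0.03, t.size)
y = 3.0 * np.sin(t) + np.random.normal(0, 0.03, t.size)

# Fit a cubic smoothing B-spline; splprep returns the knots/coefficients (tck)
# and the parameter values u of the input points.
tck, u = interpolate.splprep([x, y], k=3, s=0.2)

# Evaluate the continuous spline densely -- this continuous primitive is what
# replaces discretized grid cells in a spline-based map representation.
u_fine = np.linspace(0, 1, 400)
x_s, y_s = interpolate.splev(u_fine, tck)
print(len(x_s), float(x_s[0]), float(y_s[0]))
```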
12. Distributed Multi-Robot SLAM Algorithm with Lightweight Communication and Optimization.
- Author
-
Han, Jin, Ma, Chongyang, Zou, Dan, Jiao, Song, Chen, Chao, and Wang, Jun
- Subjects
GRIDS (Cartography) ,GLOBAL optimization ,ROBOTS ,BANDWIDTHS ,ALGORITHMS - Abstract
Multi-robot SLAM (simultaneous localization and mapping) is crucial for the implementation of robots in practical scenarios. Bandwidth constraints significantly influence multi-robot SLAM systems, prompting a reliance on lightweight feature descriptors for robot cooperation in positioning tasks. Real-time map sharing among robots is also frequently ignored in such systems. Consequently, such algorithms are not feasible for autonomous multi-robot navigation tasks in the real world. Furthermore, the computation cost of the global optimization of multi-robot SLAM increases significantly in large-scale scenes. In this study, we introduce a novel distributed multi-robot SLAM framework incorporating sliding window-based optimization to mitigate computation loads and manage inter-robot loop closure constraints. In particular, we transmit a 2.5D grid map of the keyframe-based submap between robots to promote map consistency among robots and maintain bandwidth efficiency in data exchange. The proposed algorithm was evaluated in extensive experimental environments, and the results validate its effectiveness and superiority over other mainstream methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
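Record 12 keeps inter-robot bandwidth low by exchanging a 2.5D grid map of each keyframe-based submap instead of raw point clouds. A minimal sketch of building such a 2.5D (elevation) grid from a submap cloud follows; the cell size and max-height encoding are assumptions.

```python
import numpy as np

def build_elevation_grid(points, cell=0.25):
    """Compress a submap point cloud into a 2.5D elevation grid.

    Each cell stores the maximum z of the points falling into it; sending this
    grid instead of the raw cloud keeps inter-robot bandwidth low.
    """
    xy_min = points[:, :2].min(axis=0)
    idx = np.floor((points[:, :2] - xy_min) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    grid = np.full(shape, -np.inf)
    np.maximum.at(grid, (idx[:, 0], idx[:, 1]), points[:, 2])   # per-cell max height
    grid[np.isinf(grid)] = np.nan                                # empty cells
    return grid, xy_min, cell

pts = np.random.uniform([-10, -10, 0], [10, 10, 2], size=(5000, 3))
grid, origin, cell = build_elevation_grid(pts)
print(grid.shape, np.nanmax(grid))
```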
13. Map Construction and Positioning Method for LiDAR SLAM-Based Navigation of an Agricultural Field Inspection Robot.
- Author
-
Qu, Jiwei, Qiu, Zhinuo, Li, Lanyu, Guo, Kangquan, and Li, Dan
- Subjects
- *
OPTICAL radar , *LIDAR , *NAUTICAL charts , *GRAPH algorithms , *ENVIRONMENTAL mapping , *LOCALIZATION (Mathematics) - Abstract
For agricultural field inspection robots, constructing accurate environmental maps and achieving precise localization are essential for effective Light Detection And Ranging (LiDAR) Simultaneous Localization And Mapping (SLAM) navigation. However, navigating in occluded environments presents challenges such as mapping distortion and substantial cumulative errors. Although current filter-based and graph-optimization-based algorithms perform well, they exhibit a high degree of complexity. This paper aims to investigate precise mapping and localization methods for robots, facilitating accurate LiDAR SLAM navigation in agricultural environments characterized by occlusions. Initially, a LiDAR SLAM point cloud mapping scheme is proposed based on the LiDAR Odometry And Mapping (LOAM) framework, tailored to the operational requirements of the robot. Then, the GNU Image Manipulation Program (GIMP) is employed for map optimization. This approach simplifies the map optimization process for autonomous navigation systems and aids in converting the Costmap. Finally, the Adaptive Monte Carlo Localization (AMCL) method is implemented for the robot's positioning, using sensor data from the robot. Experimental results highlight that during outdoor navigation tests, when the robot operates at a speed of 1.6 m/s, the average error between the mapped values and actual measurements is 0.205 m. The results demonstrate that our method effectively prevents navigation mapping distortion and facilitates reliable robot positioning in experimental settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Proposal of UAV-SLAM-Based 3D Point Cloud Map Generation Method for Orchards Measurements.
- Author
-
Nishiwaki, Soki, Kondo, Haruki, Yoshida, Shuhei, and Emaru, Takanori
- Subjects
- *
GLOBAL Positioning System , *OPTICAL radar , *LIDAR , *STANDARD deviations , *POINT cloud - Abstract
This paper proposes a method for generating highly accurate point cloud maps of orchards using an unmanned aerial vehicle (UAV) equipped with light detection and ranging (LiDAR). The point cloud captured by the UAV-LiDAR was converted to a geographic coordinate system using a global navigation satellite system / inertial measurement unit (GNSS/IMU). The converted point cloud is then aligned with the simultaneous localization and mapping (SLAM) technique. As a result, a 3D model of an orchard is generated in a low-cost and easy-to-use manner for pesticide application with precision. The method of direct point cloud alignment with real-time kinematic-global navigation satellite system (RTK-GNSS) had a root mean square error (RMSE) of 42 cm between the predicted and true crop height values, primarily due to the effects of GNSS multipath and vibration of automated vehicles. Contrastingly, our method demonstrated better results, with RMSE of 5.43 cm and 2.14 cm in the vertical and horizontal axes, respectively. The proposed method for predicting crop location successfully achieved the required accuracy of less than 1 m with errors not exceeding 30 cm in the geographic coordinate system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. PGMF-VINS: Perpendicular-Based 3D Gaussian–Uniform Mixture Filter.
- Author
-
Deng, Wenqing, Yan, Zhe, Hu, Bo, Dong, Zhiyan, and Zhang, Lihua
- Subjects
- *
RESEARCH personnel , *AUTONOMOUS vehicles , *TRIANGULATION , *MONOCULARS , *ROBOTICS - Abstract
Visual–Inertial SLAM (VI-SLAM) has a wide range of applications spanning robotics, autonomous driving, AR, and VR due to its low-cost and high-precision characteristics. VI-SLAM is divided into localization and mapping tasks. However, researchers focus more on the localization task, while the robustness of the mapping task is often ignored. To address this, we propose a map-point convergence strategy which explicitly estimates the position, the uncertainty, and the stability of the map point (SoM). As a result, the proposed method can effectively improve the quality of the whole map while ensuring state-of-the-art localization accuracy. The convergence strategy mainly consists of a perpendicular-based triangulation and a 3D Gaussian–uniform mixture filter, and we name it PGMF, the perpendicular-based 3D Gaussian–uniform mixture filter. The algorithm is extensively tested on open-source datasets; the results show that the RVM (Ratio of Valid Map points) of our algorithm exhibits an average increase of 0.1471 compared to VINS-Mono, with a variance reduction of 68.8%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
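Record 15's convergence strategy includes a perpendicular-based triangulation of map points. A minimal sketch of that geometric idea, triangulating a point as the midpoint of the common perpendicular between two bearing rays, is given below; it is not the paper's PGMF filter, and the example rays are invented.

```python
import numpy as np

def midpoint_triangulation(p1, d1, p2, d2):
    """Triangulate a 3D point as the midpoint of the common perpendicular of two rays.

    p1, p2 : (3,) camera centers of the two observations
    d1, d2 : (3,) unit bearing vectors of the two observations
    Returns the triangulated point and the gap between the rays (a quality cue).
    """
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if denom < 1e-12:                       # near-parallel rays: triangulation ill-conditioned
        return None, np.inf
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = p1 + s * d1                        # closest point on ray 1
    q2 = p2 + t * d2                        # closest point on ray 2
    return 0.5 * (q1 + q2), np.linalg.norm(q1 - q2)

pt, gap = midpoint_triangulation(np.array([0., 0., 0.]), np.array([0., 0., 1.]),
                                 np.array([1., 0., 0.]), np.array([-0.316, 0., 0.949]))
print(pt, gap)   # point near (0, 0, 3) with a small ray gap
```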
16. Enhanced Visual SLAM for Collision-Free Driving with Lightweight Autonomous Cars.
- Author
-
Lin, Zhihao, Tian, Zhen, Zhang, Qi, Zhuang, Hanyang, and Lan, Jianglin
- Subjects
- *
VISUAL perception , *OPTICAL flow , *LYAPUNOV functions , *AUTONOMOUS vehicles , *QUADRATIC forms - Abstract
The paper presents a vision-based obstacle avoidance strategy for lightweight self-driving cars that can be run on a CPU-only device using a single RGB-D camera. The method consists of two steps: visual perception and path planning. The visual perception part uses ORBSLAM3 enhanced with optical flow to estimate the car's poses and extract rich texture information from the scene. In the path planning phase, the proposed method employs a method combining a control Lyapunov function and control barrier function in the form of a quadratic program (CLF-CBF-QP) together with an obstacle shape reconstruction process (SRP) to plan safe and stable trajectories. To validate the performance and robustness of the proposed method, simulation experiments were conducted with a car in various complex indoor environments using the Gazebo simulation environment. The proposed method can effectively avoid obstacles in the scenes. The proposed algorithm outperforms benchmark algorithms in achieving more stable and shorter trajectories across multiple simulated scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
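Record 16 plans safe trajectories with a control Lyapunov function and control barrier function combined in a quadratic program (CLF-CBF-QP). The toy single-integrator CBF-QP below, written with cvxpy as a generic QP solver, shows that structure only; the dynamics, gains and obstacle are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np
import cvxpy as cp

def cbf_qp_control(x, goal, obstacle, radius, alpha=1.0, k_goal=0.8):
    """One step of a CBF-QP for a 2D single integrator x_dot = u.

    Tracks a nominal go-to-goal controller while enforcing the barrier
    h(x) = ||x - obstacle||^2 - radius^2 >= 0 (stay outside the obstacle).
    """
    u_nom = -k_goal * (x - goal)                          # nominal (CLF-like) controller
    h = np.sum((x - obstacle) ** 2) - radius ** 2
    grad_h = 2.0 * (x - obstacle)

    u = cp.Variable(2)
    objective = cp.Minimize(cp.sum_squares(u - u_nom))    # stay close to the nominal input
    constraints = [grad_h @ u + alpha * h >= 0]           # CBF condition: h_dot >= -alpha * h
    cp.Problem(objective, constraints).solve()
    return u.value

x = np.array([0.0, 0.0])
u = cbf_qp_control(x, goal=np.array([5.0, 0.0]),
                   obstacle=np.array([2.5, 0.2]), radius=1.0)
print(u)
```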
17. SLAM loop closure detection algorithm based on MSA-SG.
- Author
-
Zhang, Heng, Zhang, Yihong, Liu, Yanli, Naixue Xiong, Neal, and Li, Yawei
- Subjects
- *
NAUTICAL charts , *FEATURE extraction , *ALGORITHMS - Abstract
This paper introduces an innovative method to improve loop closure detection within the domain of Simultaneous Localization And Mapping (SLAM) by integrating a Multi-Scale Attention and Semantic Guidance (MSA-SG) framework. In SLAM systems, accurate loop closure detection is essential for minimizing localization errors over time and ensuring the reliability of the constructed maps in robotics navigation through uncharted environments. Our proposed method utilizes EfficientNet-EA for robust feature extraction and introduces MSA-SG, a novel mechanism that synergizes multiscale attention with semantic guidance to focus on critical semantic features essential for loop closure detection. This approach ensures the prioritization of static environmental landmarks over transient and irrelevant objects, significantly enhancing the accuracy and efficiency of loop closure detection in complex and dynamic settings. Experimental validations on recognized datasets underscore the superiority of our approach, demonstrating marked improvements in precision, recall, and overall SLAM performance. This research highlights the significant benefits of leveraging semantic insights and attentional focus in advancing the capabilities of loop closure detection for SLAM applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. IMU and LiDAR Fusion SLAM Technology for Autonomous Driving in Mines.
- Author
-
胡青松, 李敬定, 张元生, 李世银, and 孙彦景
- Published
- 2024
- Full Text
- View/download PDF
19. LTA‐OM: Long‐term association LiDAR–IMU odometry and mapping.
- Author
-
Zou, Zuhao, Yuan, Chongjian, Xu, Wei, Li, Haotian, Zhou, Shunbo, Xue, Kaiwen, and Zhang, Fu
- Abstract
This paper focuses on the Light Detection and Ranging (LiDAR)–Inertial Measurement Unit (IMU) simultaneous localization and mapping (SLAM) problem: How to fuse the sensor measurement from the LiDAR and IMU to online estimate robot's poses and build a consistent map of the environment. This paper presents LTA‐OM: an efficient, robust, and accurate LiDAR SLAM system. Employing fast direct LiDAR‐inertial odometry (FAST‐LIO2) and Stable Triangle Descriptor as LiDAR–IMU odometry and the loop detection method, respectively, LTA‐OM is implemented to be functionally complete, including loop detection and correction, false‐positive loop closure rejection, long‐term association (LTA) mapping, and multisession localization and mapping. One novelty of this paper is the real‐time LTA mapping, which exploits the direct scan‐to‐map registration of FAST‐LIO2 and employs the corrected history map to provide direct global constraints to the LIO mapping process. LTA mapping also has the notable advantage of achieving drift‐free odometry at revisit places. Besides, a multisession mode is designed to allow the user to store the current session's results, including the corrected map points, optimized odometry, and descriptor database for future sessions. The benefits of this mode are additional accuracy improvement and consistent map stitching, which is helpful for life‐long mapping. Furthermore, LTA‐OM has valuable features for robot control and path planning, including high‐frequency and real‐time odometry, driftless odometry at revisit places, and fast loop closing convergence. LTA‐OM is versatile as it is applicable to both multiline spinning and solid‐state LiDARs, mobile robots and handheld platforms. In experiments, we exhaustively benchmark LTA‐OM and other state‐of‐the‐art LiDAR systems with 18 data sequences. The results show that LTA‐OM steadily outperforms other systems regarding trajectory accuracy, map consistency, and time consumption. The robustness of LTA‐OM is validated in a challenging scene—a multilevel building having similar structures at different levels. To demonstrate our system, we created a video which can be found on https://youtu.be/DVwppEKlKps. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. AstroSLAM: Autonomous monocular navigation in the vicinity of a celestial small body—Theory and experiments.
- Author
-
Dor, Mehregan, Driver, Travis, Getzandanner, Kenneth, and Tsiotras, Panagiotis
- Subjects
- *
MICROSPACECRAFT , *NAUTICAL astronomy , *PLANETARY systems , *UNITS of measurement , *SPACE vehicles - Abstract
We propose AstroSLAM, a standalone vision-based solution for autonomous online navigation around an unknown celestial target small body. AstroSLAM is predicated on the formulation of the SLAM problem as an incrementally growing factor graph, facilitated by the use of the GTSAM library and the iSAM2 engine. By combining sensor fusion with orbital motion priors, we achieve improved performance over a baseline SLAM solution and outperform state-of-the-art methods predicated on pre-integrated inertial measurement unit factors. We incorporate orbital motion constraints into the factor graph by devising a novel relative dynamics—RelDyn—factor, which links the relative pose of the spacecraft to the problem of predicting trajectories stemming from the motion of the spacecraft in the vicinity of the small body. We demonstrate AstroSLAM's performance and compare against the state-of-the-art methods using both real legacy mission imagery and trajectory data courtesy of NASA's Planetary Data System, as well as real in-lab imagery data produced on a 3 degree-of-freedom spacecraft simulator test-bed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Surface Reconstruction from SLAM-Based Point Clouds: Results from the Datasets of the 2023 SIFET Benchmark.
- Author
-
Matellon, Antonio, Maset, Eleonora, Beinat, Alberto, and Visintini, Domenico
- Subjects
- *
POINT cloud , *BUILDING information modeling , *SURFACE reconstruction , *GEOMATICS , *OPTICAL scanners , *LASERS - Abstract
The rapid technological development that geomatics has been experiencing in recent years is leading to increasing ease, productivity and reliability of three-dimensional surveys, with portable laser scanner systems based on Simultaneous Localization and Mapping (SLAM) technology, gradually replacing traditional techniques in certain applications. Although the performance of such systems in terms of point cloud accuracy and noise level has been deeply investigated in the literature, there are fewer works about the evaluation of their use for surface reconstruction, cartographic production, and as-built Building Information Model (BIM) creation. The objective of this study is to assess the suitability of SLAM devices for surface modeling in an urban/architectural environment. To this end, analyses are carried out on the datasets acquired by three commercial portable laser scanners in the context of a benchmark organized in 2023 by the Italian Society of Photogrammetry and Topography (SIFET). In addition to the conventional point cloud assessment, we propose a comparison between the reconstructed mesh and a ground-truth model, employing a model-to-model methodology. The outcomes are promising, with the average distance between models ranging from 0.2 to 1.4 cm. However, the surfaces modeled from the terrestrial laser scanning point cloud show a level of detail that is still unmatched by SLAM systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. AWLC: Adaptive Weighted Loop Closure for SLAM with Multi-Modal Sensor Fusion.
- Author
-
Zhou, Guangli, Huang, Fei, Liu, Wenbing, Zhang, Yuxuan, Wei, Hanbing, and Hou, Xiaoqin
- Subjects
- *
POINT cloud , *DEEP learning , *LIDAR , *DAYLIGHT , *ALGORITHMS - Abstract
Prevailing loop closure detection algorithms are mainly applied in simultaneous localization and mapping (SLAM), and their effectiveness is contingent upon environmental conditions, which can fluctuate due to variations in lighting or the surrounding scenario. Vision-based algorithms, while adept during daylight hours, may falter in nocturnal settings. Conversely, lidar methods hinge on the sparsity of the given scenario. This paper proposes an algorithm that comprehensively utilizes lidar and image features and assigns weighted factors for loop closure detection based on multi-modal sensor fusion. First, we use k-means clustering to produce a point cloud spatial global bag of words. Second, an improved deep learning method is used to train feature descriptors of images, while scan context is also used to detect candidate point cloud features. After that, different feature-weighted factors are assigned to homologous feature descriptors. Finally, the detection result with the maximum weight factor is designated as the optimal loop closure. The adaptive weighted loop closure (AWLC) algorithm we propose inherits the advantages of different loop closure detection algorithms and hence is accurate and robust. The AWLC method is compared with popular loop detection algorithms on different datasets. Experiments show that AWLC maintains the effectiveness and robustness of detection even at night or in highly dynamic, complex environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
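Record 22 assigns weight factors to homologous lidar and image loop-closure scores and designates the candidate with the maximum weighted score as the loop closure. A minimal sketch of that selection logic follows; the fixed weights, score fields and threshold are illustrative stand-ins for the paper's adaptive weighting.

```python
def select_loop_closure(candidates, w_lidar=0.6, w_visual=0.4, min_score=0.3):
    """Pick the loop-closure candidate with the highest weighted multi-modal score.

    candidates : list of dicts like
        {"frame_id": int, "lidar_score": float, "visual_score": float}
    where each score is a normalized similarity in [0, 1].
    """
    best = None
    best_score = min_score                      # reject weak loop closures outright
    for c in candidates:
        fused = w_lidar * c["lidar_score"] + w_visual * c["visual_score"]
        if fused > best_score:
            best, best_score = c["frame_id"], fused
    return best, best_score

cands = [{"frame_id": 12, "lidar_score": 0.82, "visual_score": 0.35},
         {"frame_id": 57, "lidar_score": 0.40, "visual_score": 0.91}]
print(select_loop_closure(cands))   # -> (12, 0.632)
```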
23. Research Progress on Autonomous Navigation Systems for Orchard Mobile Robots Based on Multi-Source Information Fusion.
- Author
-
李小明 and 冯青春
- Abstract
The fruit industry is one of the important economic pillars of China's agriculture and rural areas. The current level of orchard production management, especially mechanization and informatization, is relatively backward. Orchard mobile robots based on multi-source information fusion can realize stable and high-precision autonomous navigation in complex environments, provide intelligent and efficient autonomous navigation means for orchard mobile platforms, and strongly support the construction of smart orchards. By analyzing the research progress of orchard mobile robot autonomous navigation systems based on multi-source information fusion, this paper proposes to combine the actual complex and diverse working conditions of orchards, focus on key technologies such as positioning and mapping, path planning, and decision and control strategies, and, based on existing mobile platforms, study multi-source sensor information fusion strategies to achieve autonomous navigation in complex environments. The performance of the autonomous navigation system is verified by field tests. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. A photogrammetric approach for real‐time visual SLAM applied to an omnidirectional system.
- Author
-
Garcia, Thaisa Aline Correia, Tommaselli, Antonio Maria Garcia, Castanheiro, Letícia Ferrari, and Campos, Mariana Batista
- Abstract
The problem of sequential estimation of the exterior orientation of imaging sensors and the three‐dimensional environment reconstruction in real time is commonly known as visual simultaneous localisation and mapping (vSLAM). Omnidirectional optical sensors have been increasingly used in vSLAM solutions, mainly for providing a wider view of the scene, allowing the extraction of more features. However, dealing with unmodelled points in the hyperhemispherical field poses challenges, mainly due to the complex lens geometry entailed in the image formation process. Rigorous photogrammetric models that appropriately handle the geometry of fisheye lens cameras can address these challenges. Thus, this study presents a real‐time vSLAM approach for omnidirectional systems adapting ORB‐SLAM with a rigorous projection model (equisolid‐angle). The implementation was conducted on the Nvidia Jetson TX2 board, and the approach was evaluated using hyperhemispherical images captured by a dual‐fisheye camera (Ricoh Theta S) embedded into a mobile backpack platform. The trajectory covered a distance of 140 m, with the approach demonstrating accuracy better than 0.12 m at the beginning and achieving metre‐level accuracy at the end of the trajectory. Additionally, we compared the performance of our proposed approach with a generic model for fisheye lens cameras. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
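Record 24 adapts ORB-SLAM with a rigorous equisolid-angle projection model for the dual-fisheye camera. A minimal sketch of that projection, r = 2 f sin(theta / 2), mapping camera-frame 3D points to pixel coordinates, is shown below; the focal length and principal point are placeholders.

```python
import numpy as np

def equisolid_project(points_cam, f, cx, cy):
    """Project 3D points (camera frame, z forward) with the equisolid-angle model.

    The radial image distance from the principal point is r = 2 f sin(theta / 2),
    where theta is the angle between the ray and the optical axis.
    """
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)          # angle from the optical axis
    r = 2.0 * f * np.sin(theta / 2.0)
    phi = np.arctan2(y, x)                         # azimuth in the image plane
    u = cx + r * np.cos(phi)
    v = cy + r * np.sin(phi)
    return np.stack([u, v], axis=1)

pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 2.0, 0.5]])
print(equisolid_project(pts, f=300.0, cx=640.0, cy=640.0))
```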
25. Tightly Coupled Laser-Inertial Odometry Based on Multi-Class Feature Point Matching.
- Author
-
李春海, 苏昭宇, 陈 倩, 唐 欣, and 李晓欢
- Abstract
To address the low precision of laser odometry caused by motion distortion of LiDAR data and sparse ground sampling, a tightly coupled LiDAR-IMU odometry method based on multi-class feature point matching is proposed in this paper. To improve the quality of the LiDAR data, the method first starts from the original point cloud data and uses IMU data to perform linear interpolation to correct the distorted points in each frame of LiDAR data. Second, after distortion correction, it performs a 2D grid projection on the point cloud. According to the average minimum height of each grid cell and its adjacent cells, the points in each cell are divided into ground points and non-ground points using a dual threshold. The non-ground points are then further divided into multi-class feature points according to linearity, flatness, curvature and other local features. Third, it models the tight coupling of the IMU with multi-class feature point matching. Considering that the original LiDAR observation error cannot provide high-precision gravity vector estimation, it introduces IMU state estimation, builds an odometry constraint error function, and further constrains the estimation of the gravity vector. Thus, the precision of the laser odometry is effectively improved. Finally, an IMU tightly coupled laser odometry based on multi-class feature point matching is designed on the LeGO-LOAM framework, and a verification system is completed. Experimental results show that this method can effectively suppress the drift of the gravity vector and improve the precision of the laser odometry. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
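Record 25 removes motion distortion by linearly interpolating IMU-derived poses across each LiDAR frame. The sketch below shows per-point deskewing with linear translation interpolation and rotation slerp via SciPy; the frame conventions, timestamps and function signature are assumptions rather than the paper's implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_scan(points, point_times, t_start, t_end, pose_start, pose_end):
    """Undistort one LiDAR sweep by interpolating the sensor pose per point.

    points       : (N, 3) points in the sensor frame at their capture times
    point_times  : (N,) timestamps within [t_start, t_end]
    pose_start / pose_end : (R, p) sensor pose at sweep start / end; R is a scipy Rotation
    Returns all points re-expressed in the sweep-start sensor frame.
    """
    R0, p0 = pose_start
    R1, p1 = pose_end
    slerp = Slerp([t_start, t_end],
                  Rotation.from_quat(np.vstack([R0.as_quat(), R1.as_quat()])))
    alpha = (point_times - t_start) / (t_end - t_start)

    R_i = slerp(point_times)                            # per-point interpolated rotation
    p_i = (1 - alpha)[:, None] * p0 + alpha[:, None] * p1

    pts_world = R_i.apply(points) + p_i                 # each point at its capture time
    return R0.inv().apply(pts_world - p0)               # back into the sweep-start frame
```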
26. KISS—Keep It Static SLAMMOT—The Cost of Integrating Moving Object Tracking into an EKF-SLAM Algorithm.
- Author
-
Mandel, Nicolas, Kompe, Nils, Gerwin, Moritz, and Ernst, Floris
- Subjects
- *
TRACKING algorithms , *RESEARCH personnel , *DYNAMIC models , *ROBOTICS , *ALGORITHMS - Abstract
The treatment of moving objects in simultaneous localization and mapping (SLAM) is a key challenge in contemporary robotics. In this paper, we propose an extension of the EKF-SLAM algorithm that incorporates moving objects into the estimation process, which we term KISS. We have extended the robotic vision toolbox to analyze the influence of moving objects in simulations. Two linear and one nonlinear motion models are used to represent the moving objects. The observation model remains the same for all objects. The proposed model is evaluated against an implementation of the state-of-the-art formulation for moving object tracking, DATMO. We investigate increasing numbers of static landmarks and dynamic objects to demonstrate the impact on the algorithm and compare it with cases where a moving object is mistakenly integrated as a static landmark (false negative) and a static landmark as a moving object (false positive). In practice, distances to dynamic objects are important, and we propose the safety–distance–error metric to evaluate the difference between the true and estimated distances to a dynamic object. The results show that false positives have a negligible impact on map distortion and ATE with increasing static landmarks, while false negatives significantly distort maps and degrade performance metrics. Explicitly modeling dynamic objects not only performs comparably in terms of map distortion and ATE but also enables more accurate tracking of dynamic objects with a lower safety–distance–error than DATMO. We recommend that researchers model objects with uncertain motion using a simple constant position model, hence we name our contribution Keep it Static SLAMMOT. We hope this work will provide valuable data points and insights for future research into integrating moving objects into SLAM algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
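Record 26 proposes a safety–distance–error metric comparing true and estimated distances to dynamic objects. A small sketch of how such a metric could be computed is given below; the exact definition in the paper may differ, and the example positions are invented.

```python
import numpy as np

def safety_distance_error(robot_pose, obj_true, obj_est):
    """Difference between true and estimated robot-to-object distances.

    robot_pose : (2,) true robot position
    obj_true   : (M, 2) true positions of the dynamic objects
    obj_est    : (M, 2) positions estimated by the SLAM/tracking filter
    Returns the per-object error and its mean absolute value.
    """
    d_true = np.linalg.norm(obj_true - robot_pose, axis=1)
    d_est = np.linalg.norm(obj_est - robot_pose, axis=1)
    err = d_true - d_est                   # positive: the object is farther than estimated
    return err, np.mean(np.abs(err))

robot = np.array([0.0, 0.0])
true_objs = np.array([[2.0, 0.0], [0.0, 3.0]])
est_objs = np.array([[1.8, 0.1], [0.0, 3.4]])
print(safety_distance_error(robot, true_objs, est_objs))
```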
27. Integration of a Mobile Laser Scanning System with a Forest Harvester for Accurate Localization and Tree Stem Measurements.
- Author
-
Faitli, Tamás, Hyyppä, Eric, Hyyti, Heikki, Hakala, Teemu, Kaartinen, Harri, Kukko, Antero, Muhojoki, Jesse, and Hyyppä, Juha
- Subjects
- *
OPTICAL scanners , *SCANNING systems , *LASER beams , *ARBORETUMS , *FOREST canopies , *AIRBORNE lasers - Abstract
Automating forest machines to optimize the forest value chain requires the ability to map the surroundings of the machine and to conduct accurate measurements of nearby trees. In the near-to-medium term, integrating a forest harvester with a mobile laser scanner system may have multiple applications, including real-time assistance of the harvester operator using laser-scanner-derived tree measurements and the collection of vast amounts of training data for large-scale airborne laser scanning-based surveys at the individual tree level. In this work, we present a comprehensive processing flow for a mobile laser scanning (MLS) system mounted on a forest harvester starting from the localization of the harvester under the forest canopy followed by accurate and automatic estimation of tree attributes, such as diameter at breast height (DBH) and stem curve. To evaluate our processing flow, we recorded and processed MLS data from a commercial thinning operation on three test strips with a total driven length ranging from 270 to 447 m in a managed Finnish spruce forest stand containing a total of 658 reference trees within a distance of 15 m from the harvester trajectory. Localization reference was obtained by a robotic total station, while reference tree attributes were derived using a high-quality handheld laser scanning system. As some applications of harvester-based MLS require real-time capabilities while others do not, we investigated the positioning accuracy both for real-time localization of the harvester and after the optimization of the full trajectory. In the real-time positioning mode, the absolute localization error was on average 2.44 m, while the corresponding error after the full optimization was 0.21 m. Applying our automatic stem diameter estimation algorithm for the constructed point clouds, we measured DBH and stem curve with a root-mean-square error (RMSE) of 3.2 cm and 3.6 cm, respectively, while detecting approximately 90% of the reference trees with DBH > 20 cm that were located within 15 m from the harvester trajectory. To achieve these results, we demonstrated a distance-adjusted bias correction method mitigating diameter estimation errors caused by the high beam divergence of the laser scanner used. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Assessment of NavVis VLX and BLK2GO SLAM Scanner Accuracy for Outdoor and Indoor Surveying Tasks.
- Author
-
Gharineiat, Zahra, Tarsha Kurdi, Fayez, Henny, Krish, Gray, Hamish, Jamieson, Aaron, and Reeves, Nicholas
- Subjects
- *
OPTICAL radar , *LIDAR , *CLOUDINESS , *POINT cloud , *ACQUISITION of data - Abstract
The Simultaneous Localization and Mapping (SLAM) scanner is an easy-to-use and portable Light Detection and Ranging (LiDAR) data acquisition device. Its main output is a 3D point cloud covering the scanned scene. Given the importance of accuracy in the surveying domain, this paper aims to assess the accuracy of two SLAM scanners: the NavVis VLX and the BLK2GO scanner. This assessment is conducted for both outdoor and indoor environments. In this context, two types of reference data were used: the total station (TS) and the static scanner Z+F Imager 5016. To carry out the assessment, four comparisons were tested: cloud-to-cloud, cloud-to-mesh, mesh-to-mesh, and edge detection board assessment. The results of the assessments confirmed that the accuracy of indoor SLAM scanner measurements (5 mm) was higher than that of outdoor ones (between 10 mm and 60 mm). Moreover, the cloud-to-cloud comparison provided the best accuracy, being a direct accuracy measurement requiring no manipulation. Finally, based on the high accuracy, scanning speed, flexibility, and the accuracy differences between the tested cases, it was confirmed that SLAM scanners are effective tools for data acquisition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
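Record 28's assessment includes cloud-to-cloud comparison between the SLAM point clouds and the reference scans. A minimal nearest-neighbour cloud-to-cloud distance check using a KD-tree is sketched below; the reported statistics and the synthetic 5 mm noise are illustrative, not the paper's software or data.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(eval_cloud, ref_cloud):
    """Nearest-neighbour distances from each evaluated point to the reference cloud.

    Reports mean / RMS / 95th-percentile distances, the basic statistics used in
    cloud-to-cloud accuracy checks.
    """
    tree = cKDTree(ref_cloud)
    d, _ = tree.query(eval_cloud, k=1)
    return {"mean": d.mean(),
            "rms": np.sqrt(np.mean(d ** 2)),
            "p95": np.percentile(d, 95)}

ref = np.random.uniform(0, 10, size=(20000, 3))
noisy = ref[:5000] + np.random.normal(0, 0.005, size=(5000, 3))   # ~5 mm noise
print(cloud_to_cloud(noisy, ref))
```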
29. LCCD-SLAM: A Low-Bandwidth Centralized Collaborative Direct Monocular SLAM for Multi-Robot Collaborative Mapping.
- Author
-
Liu, Quan-Pan, Wang, Zheng-Jie, and Tan, Yun-Fei
- Subjects
- *
SLAM (Robotics) , *ARTIFICIAL intelligence , *VISUAL odometry , *MICRO air vehicles , *STANDARD deviations , *POSE estimation (Computer vision) - Published
- 2024
- Full Text
- View/download PDF
30. Low-Cost Real-Time Localisation for Agricultural Robots in Unstructured Farm Environments.
- Author
-
Liu, Chongxiao and Nguyen, Bao Kha
- Subjects
AGRICULTURAL robots ,GLOBAL Positioning System ,AGRICULTURE ,MULTISENSOR data fusion ,KALMAN filtering - Abstract
Agricultural robots have demonstrated significant potential in enhancing farm operational efficiency and reducing manual labour. However, unstructured and complex farm environments present challenges to the precise localisation and navigation of robots in real time. Furthermore, the high costs of navigation systems in agricultural robots hinder their widespread adoption in cost-sensitive agricultural sectors. This study compared two localisation methods: an Error State Kalman Filter (ESKF) that integrates data from wheel odometry, a low-cost inertial measurement unit (IMU) and a low-cost real-time kinematic global navigation satellite system (RTK-GNSS); and the LiDAR-Inertial Odometry via Smoothing and Mapping (LIO-SAM) algorithm using a low-cost IMU and a RoboSense 16-channel LiDAR sensor. These two methods were tested in unstructured farm environments for the first time in this study. Experiment results show that the ESKF sensor fusion method without a LiDAR sensor could save 36% of the cost compared to the method that used the LIO-SAM algorithm while maintaining high accuracy for farming applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Rapid SLAM Method for Star Surface Rover in Unstructured Space Environments.
- Author
-
Zhang, Zhengpeng, Cheng, Yan, Bu, Lijing, and Ye, Jiayan
- Subjects
SPACE environment ,FEATURE extraction ,LUNAR surface ,STAR maps (Astronomy) ,SIMULATION methods & models - Abstract
The space environment is characterized by unstructured features, sparsity, and poor lighting conditions. The difficulty of extracting features in the visual frontend of traditional SLAM methods results in poor localization and long run times. This paper proposes a rapid, real-time localization and mapping method for star surface rovers in unstructured space environments. Improved localization is achieved using multi-sensor fusion to sense the space environment. We replaced the traditional feature extraction module with an enhanced SuperPoint feature extraction network to tackle the challenge of feature extraction in unstructured space environments. By dynamically adjusting detection thresholds, we achieved uniform detection and description of image keypoints, ultimately resulting in robust and accurate feature association information. Furthermore, we minimized redundant information to achieve precise positioning with high efficiency and low power consumption. We established a star surface rover simulation system and created simulated environments resembling Mars and the lunar surface. Compared to the LVI-SAM system, our method achieved a 20% improvement in localization accuracy for lunar scenarios. In Mars scenarios, our method achieved a positioning accuracy of 0.716 m and reduced runtime by 18.682 s for the same tasks. Our approach exhibits higher localization accuracy and lower power consumption in unstructured space environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Graph-based robust 3D point cloud map merging approach for large scale.
- Author
-
Gui, Linqiu, Zeng, Chunnian, Luo, Jie, Wang, Xiaofeng, Yang, Xu, and Zhong, Shengshi
- Abstract
Autonomous driving is a crucial area of research and a key focus for industrial advancement, with the development of high-level autonomous driving technology depending significantly on creating precise 3D point cloud maps (PCMs) of the driving environment. A challenge arises in constructing a comprehensive map in a single attempt, particularly in scenarios with constrained environments or limited system hardware resources. Consequently, the need to construct the PCM over multiple local areas, through either multi-robot collaborative SLAM or time-sharing single-robot SLAM, becomes imperative. The fusion of these local maps into a globally consistent map is achieved through overlapping-area matching and pose graph optimization. However, erroneous matches can pose a significant obstacle to graph optimization, leading to a notable reduction in system performance. Therefore, enhancing the robustness of these processes in the presence of false-positive matches is crucial. This paper introduces a graph-based robust 3D PCM merging approach for large-scale applications. Our system utilizes a classical two-step matching method to find the matching pairs between sub-maps: coarse matching with global descriptors and fine matching through point cloud registration. We apply spatial consistency detection to analyze the matches and determine the variance of residuals through the error propagation of the Special Euclidean Group. Based on the above and the clustering of matching pairs, we propose a local and global two-step outlier removal module to filter out erroneous matches, thereby improving the robustness of the PCM merging algorithm. Experimental results using KITTI and self-collected data demonstrate the effectiveness of our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
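Record 32 applies spatial consistency detection to the candidate matches between sub-maps before pose graph optimization. A simplified pairwise-consistency sketch in that spirit follows; the voting rule and threshold are assumptions, and the paper's SE(3) error propagation and clustering steps are omitted.

```python
import numpy as np

def spatial_consistency_mask(src_pts, dst_pts, tau=0.5):
    """Keep match pairs whose pairwise distances agree in both maps.

    src_pts, dst_pts : (M, 3) matched keypoint/submap positions in the two maps.
    A match i is kept if, for enough other matches j, the distance between i and j
    is (nearly) the same in both maps: |d(src_i, src_j) - d(dst_i, dst_j)| < tau.
    """
    d_src = np.linalg.norm(src_pts[:, None] - src_pts[None], axis=-1)
    d_dst = np.linalg.norm(dst_pts[:, None] - dst_pts[None], axis=-1)
    consistent = np.abs(d_src - d_dst) < tau
    votes = consistent.sum(axis=1) - 1            # exclude trivial self-consistency
    return votes >= 0.5 * (len(src_pts) - 1)      # require majority agreement
```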
33. Cross transformer for LiDAR-based loop closure detection.
- Author
-
Zheng, Rui, Ren, Yang, Zhou, Qi, Ye, Yibin, and Zeng, Hui
- Abstract
Loop closure detection, also known as place recognition, a key component of simultaneous localization and mapping (SLAM) systems, aims to recognize previously visited locations and reduce the accumulated drift error caused by odometry. Current vision-based methods are susceptible to variations in illumination and perspective, limiting their generalization ability and robustness. Thus, in this paper, we propose CrossT-Net (Cross Transformer Net), a novel cross-attention based loop closure detection network for LiDAR. CrossT-Net directly estimates the similarity between two frames by leveraging multi-class information maps, including range, intensity, and normal maps, to comprehensively characterize environmental features. A Siamese Encoder Net with shared parameters extracts frame features, and a Cross Transformer module captures intra-frame context and inter-frame correlations through self-attention and cross-attention mechanisms. In the final stage, an Overlap Estimation Module predicts the point cloud overlap between two frames. Experimental results on several benchmark datasets demonstrate that our proposed method outperforms existing methods in precision and recall, and exhibits strong generalization performance in different road environments. The implementation of our approach is available at: . [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
34. Localization and Mapping Based on Multi-feature and Multi-sensor Fusion.
- Author
-
Li, Danni, Zhao, Yibing, Wang, Weiqi, and Guo, Lie
- Subjects
- *
FEATURE extraction , *MULTISENSOR data fusion , *REAL-time computing , *POINT cloud , *LIDAR - Abstract
Simultaneous Localization and Mapping (SLAM) is the foundation for high-precision localization, environmental awareness, and autonomous decision-making of autonomous vehicles. It has developed rapidly, but challenges such as sensor errors, data fusion, and real-time computing remain. This paper proposes an optimization-based fusion algorithm that integrates IMU data, visual data and LiDAR data to construct a high-frequency visual-inertial odometry. The odometry is employed to obtain the relative pose transformation during the LiDAR data acquisition process and to eliminate the distortion of the point cloud by interpolation. After distortion removal, edge and plane features are extracted from the LiDAR data based on local curvature and are further combined with local map alignment to construct the LiDAR constraints. In addition, the LiDAR odometry can be obtained from the initial values provided by the high-frequency visual-inertial odometry. To address the cumulative error of the odometry, adjacent-keyframe and multi-descriptor fusion loop constraints are combined to construct back-end optimization constraints, solving for high-accuracy localization results and constructing a 3D point cloud map of the surroundings. Compared with classical algorithms, results show that the accuracy of this paper's algorithm is better than that of the laser SLAM method and the multi-sensor fusion SLAM method. Besides, the localization accuracy of the laser-assisted multi-feature visual-inertial odometry is also better than that of the single-feature visual-inertial odometry. In summary, the newly proposed SLAM method can largely improve the accuracy of odometry in real traffic scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
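Record 34 extracts edge and plane features from the deskewed LiDAR scan using local curvature, in the style of LOAM. A minimal sketch of that curvature (smoothness) computation and thresholding follows; the neighbourhood size and thresholds are illustrative.

```python
import numpy as np

def loam_curvature(scan_points, k=5):
    """LOAM-style smoothness for each point of a single LiDAR scan line.

    scan_points : (N, 3) points ordered along the scan line.
    A large value indicates a sharp (edge) feature, a small value a planar one.
    """
    N = scan_points.shape[0]
    c = np.full(N, np.nan)
    for i in range(k, N - k):
        neighbors = np.vstack([scan_points[i - k:i], scan_points[i + 1:i + k + 1]])
        diff = neighbors.sum(axis=0) - 2 * k * scan_points[i]
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan_points[i]) + 1e-9)
    return c

def classify_features(c, edge_thresh=0.2, plane_thresh=0.02):
    """Split points into edge and planar candidates by thresholding the smoothness."""
    edges = np.where(c > edge_thresh)[0]
    planes = np.where(c < plane_thresh)[0]
    return edges, planes
```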
35. EgoHDM: A Real-time Egocentric-Inertial Human Motion Capture, Localization, and Dense Mapping System.
- Author
-
Yin, Handi, Liu, Bonan, Kaufmann, Manuel, He, Jinhao, Christen, Sammy, Song, Jie, and Hui, Pan
- Abstract
We present EgoHDM, an online egocentric-inertial human motion capture (mocap), localization, and dense mapping system. Our system uses 6 inertial measurement units (IMUs) and a commodity head-mounted RGB camera. EgoHDM is the first human mocap system that offers dense scene mapping in near real-time. Further, it is fast and robust to initialize and fully closes the loop between physically plausible map-aware global human motion estimation and mocap-aware 3D scene reconstruction. To achieve this, we design a tightly coupled mocap-aware dense bundle adjustment and physics-based body pose correction module leveraging a local body-centric elevation map. The latter introduces a novel terrain-aware contact PD controller, which enables characters to physically contact the given local elevation map thereby reducing human floating or penetration. We demonstrate the performance of our system on established synthetic and real-world benchmarks. The results show that our method reduces human localization, camera pose, and mapping accuracy error by 41%, 71%, 46%, respectively, compared to the state of the art. Our qualitative evaluations on newly captured data further demonstrate that EgoHDM can cover challenging scenarios in non-flat terrain including stepping over stairs and outdoor scenes in the wild. Our project page: https://handiyin.github.io/EgoHDM/ [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. MULO: LiDAR Odometry via MUlti-Landmarks Joint Optimization.
- Author
-
Liu, Jun, He, Zhengnan, Zhao, Xiaoyu, Hu, Jun, Cheng, Shuai, and Liu, Wei
- Abstract
At the present stage, LiDAR-based SLAM solutions are dominated by ICP and its variants, while bundle adjustment (BA) optimization, which can improve pose consistency, has received little attention. Therefore, we propose MULO, a low-drift and robust LiDAR odometry using BA optimization with plane and cylinder landmarks. In the front-end, a coarse-to-fine direct pose estimation method provides the prior pose to the back-end. In the back-end, we propose a novel three-stage landmark extraction and data association strategy for planes and cylinders, which is robust and efficient. Meanwhile, a stable minimum parameterization method for cylinder landmarks is proposed for optimization. In order to fully utilize LiDAR information at long distances, we propose a new sliding window structure consisting of a TinyWindow and a SuperWindow. Finally, we jointly optimize the two kinds of landmarks and scan poses in this sliding window. The proposed system is evaluated on a public dataset and our own dataset, and experimental results show that our system is competitive with state-of-the-art LiDAR odometries. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
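A BA back-end with plane landmarks, as sketched in the abstract, ultimately reduces to residuals of scan points against landmark planes. The snippet below shows one plausible point-to-plane residual under a unit-normal-plus-offset plane parameterization; it is an illustration, not MULO's actual formulation (the cylinder residual and the TinyWindow/SuperWindow logic are omitted).

```python
# Hedged sketch of a point-to-plane residual for a BA back-end with plane landmarks.
import numpy as np

def point_to_plane_residuals(points_scan, R, t, n, d):
    """Signed distances of transformed scan points to the plane n.x + d = 0.

    points_scan: (N, 3) points observed in the scan frame
    R, t:        scan pose (3x3 rotation, (3,) translation) in the map frame
    n, d:        plane unit normal and offset in the map frame
    """
    points_map = points_scan @ R.T + t
    return points_map @ n + d       # one residual per point
```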
37. Pose‐graph underwater simultaneous localization and mapping for autonomous monitoring and 3D reconstruction by means of optical and acoustic sensors.
- Author
-
Bucci, Alessandro, Ridolfi, Alessandro, and Allotta, Benedetto
- Subjects
GLOBAL Positioning System ,SLAM (Robotics) ,NAUTICAL charts ,UNDERWATER navigation ,OPTICAL sensors - Abstract
Modern mobile robots require precise and robust localization and navigation systems to accomplish their mission tasks. In the underwater environment, where Global Navigation Satellite Systems cannot be exploited, the development of localization and navigation strategies becomes even more challenging. Maximum A Posteriori (MAP) strategies have been analyzed and tested to increase navigation accuracy and to take into account the entire history of the system state. In particular, a sensor fusion algorithm relying on a MAP technique for Simultaneous Localization and Mapping (SLAM) has been developed to fuse information coming from a monocular camera and a Doppler Velocity Log (DVL) and to include landmark points in the navigation framework. The proposed approach can simultaneously localize the vehicle and map the surrounding environment using the information extracted from images acquired by a bottom-looking optical camera. Optical sensors provide constraints between the vehicle poses and the landmarks belonging to the observed scene, while the DVL measurements are employed to resolve the unknown scale factor and to guarantee correct vehicle localization even in the absence of visual features. Furthermore, to evaluate the mapping capabilities of the SLAM algorithm, the obtained point cloud is processed with a Poisson reconstruction method to obtain a smooth seabed surface. After validating the proposed solution through realistic simulations, an experimental campaign at sea was conducted at Stromboli Island (Messina), Italy, where both the navigation and the mapping performance were evaluated. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
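One concrete way a DVL can resolve the scale ambiguity of monocular SLAM, in the spirit of the abstract above, is to integrate the DVL velocities over a keyframe interval and compare the resulting displacement to the up-to-scale camera translation. The sketch below makes that idea explicit; the names and the simple first-order integration are assumptions, not the paper's exact method.

```python
# Hedged sketch: estimating the metric scale of a monocular translation from DVL data.
import numpy as np

def monocular_scale_from_dvl(t_cam_unscaled, dvl_velocities, dvl_dt, R_body):
    """t_cam_unscaled: (3,) camera translation between two keyframes (no scale)
    dvl_velocities: (M, 3) body-frame velocities measured between the keyframes
    dvl_dt:         (M,) integration step for each velocity sample
    R_body:         list of M rotation matrices (body to world) at the sample times"""
    disp = sum(R @ (v * dt) for R, v, dt in zip(R_body, dvl_velocities, dvl_dt))
    norm_cam = np.linalg.norm(t_cam_unscaled)
    return np.linalg.norm(disp) / norm_cam if norm_cam > 0 else 0.0
```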
38. Navigation benchmarking for autonomous mobile robots in hospital environment
- Author
-
Cristiana Rondoni, Francesco Scotto di Luzio, Christian Tamantini, Nevio Luigi Tagliamonte, Marcello Chiurazzi, Gastone Ciuti, and Loredana Zollo
- Subjects
Robot navigation ,Hospital environment ,Benchmarking method ,SLAM ,Medical mobile robots ,Medicine ,Science - Abstract
The widespread adoption of robotic technologies in healthcare has opened up new perspectives for enhancing the accuracy, effectiveness and quality of medical procedures and patient care. Special attention has been given to the reliability of robots operating in environments shared with humans and to user safety, especially in the case of mobile platforms able to navigate autonomously. From the analysis of the literature, it emerges that navigation tests carried out in hospital environments are preliminary and not standardized. This paper aims to overcome the limitations in the assessment of autonomous mobile robots navigating in hospital environments by proposing: (i) a structured benchmarking protocol composed of a set of standardized tests that take into account conditions of increasing complexity, and (ii) a set of quantitative performance metrics. The proposed approach has been used in a realistic setting to assess the performance of two robotic platforms, namely HOSBOT and TIAGo, with different technical features and developed for different applications in a clinical scenario.
- Published
- 2024
- Full Text
- View/download PDF
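The paper's exact metric set is not reproduced in the abstract, but a benchmarking protocol of this kind typically reduces each navigation trial to a handful of numbers. The sketch below computes a few common, illustrative metrics (success, path length, path efficiency, average speed) purely as an example of the idea; none of the names or formulas are taken from the paper.

```python
# Hedged sketch of per-trial navigation metrics for a benchmarking protocol.
import numpy as np

def trial_metrics(executed_path, shortest_path_len, duration_s, reached_goal):
    """executed_path: (N, 2) planar robot positions recorded during one test."""
    steps = np.diff(executed_path, axis=0)
    traveled = float(np.sum(np.linalg.norm(steps, axis=1)))
    return {
        "success": bool(reached_goal),
        "path_length_m": traveled,
        "path_efficiency": shortest_path_len / traveled if traveled > 0 else 0.0,
        "avg_speed_mps": traveled / duration_s if duration_s > 0 else 0.0,
    }
```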
39. A low-cost robotic system for simultaneous localization and mapping
- Author
-
Ayman Hamdy Kassem and Muhammad Asem
- Subjects
SLAM ,ROS ,Raspberry Pi ,Probabilistic robotics ,Particle filters ,Mobile robot ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
This paper presents a low-cost system for simultaneous localization and mapping (SLAM) in unknown indoor environments. The system is based on a low-cost mobile-robot platform designed and fabricated in our control laboratory. The Rao-Blackwellized particle filter algorithm is used for the SLAM computations, an Xbox 360 Kinect module is utilized for stereo-camera imaging, and a Linux-based microcomputer (Raspberry Pi 3) serves as the main onboard processing unit. An Arduino board controls the DC motors driving the robot wheels. The Raspberry Pi unit is wirelessly connected to a ground station machine that processes the information sent by the robot to build the environment map and estimate its pose. ROS (Robot Operating System) is used for map visualization, data handling, and communication between software nodes. The system has been tested both in simulation and in real indoor environments; it successfully identified objects larger than 30 cm × 30 cm × 30 cm and added them to the map. It also shows promising capability for carrying out autonomous missions without aid from external sensors, at a fraction of the cost of comparable LiDAR-based systems.
- Published
- 2024
- Full Text
- View/download PDF
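The Rao-Blackwellized particle filter at the core of the system keeps one pose hypothesis (and, in the full algorithm, one map) per particle. The sketch below shows a deliberately simplified propagate-weight-resample cycle; the additive motion model, the scan_likelihood callback, and all constants are placeholders rather than the authors' implementation.

```python
# Hedged sketch of a simplified particle-filter step underlying RBPF SLAM.
import numpy as np

def pf_step(particles, weights, odom_delta, scan_likelihood, motion_noise=0.02):
    """particles: (P, 3) poses [x, y, yaw]; weights: (P,) normalized weights."""
    P = particles.shape[0]
    # 1. propagate each particle through a (noisy, simplified) odometry motion model
    particles = particles + odom_delta + np.random.normal(0, motion_noise, (P, 3))
    # 2. weight each particle by how well the current scan matches its map
    weights = weights * np.array([scan_likelihood(p) for p in particles])
    weights = weights / np.sum(weights)
    # 3. resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < P / 2:
        idx = np.random.choice(P, size=P, p=weights)
        particles, weights = particles[idx], np.full(P, 1.0 / P)
    return particles, weights
```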
40. Simultaneous localization and mapping (SLAM)-based robot localization and navigation algorithm
- Author
-
Junfu Qiao, Jinqin Guo, and Yongwei Li
- Subjects
SLAM ,Kalman filter ,Robot ,Water supply for domestic and industrial purposes ,TD201-500 - Abstract
This research paper presents a comprehensive study of the simultaneous localization and mapping (SLAM) algorithm for robot localization and navigation in unknown environments. The SLAM algorithm is a widely used approach for building a map of an environment and estimating the robot's position within it, which is especially useful in dynamic and unstructured environments. The paper discusses various SLAM techniques, including the Kalman filter (KF) and GraphSLAM algorithms, and their use in the probabilistic estimation of the robot's position and orientation. It also explores different path-planning techniques that can be combined with the map created by the SLAM algorithm to generate collision-free paths for the robot to navigate toward its goal. In addition, the paper discusses recent advances in deep learning-based SLAM algorithms and their applications in indoor navigation with ORB features and RGB-D cameras. The research concludes that SLAM-based robot localization and navigation algorithms are a promising approach for robots navigating unstructured environments and present various opportunities for future research.
- Published
- 2024
- Full Text
- View/download PDF
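Since the survey leans on the Kalman filter as a building block, a minimal linear predict/update pair may help make the probabilistic estimation concrete. The matrices below are generic placeholders; an EKF-SLAM system would linearize its motion and observation models into F and H at every step.

```python
# Hedged sketch of the generic linear Kalman filter predict/update cycle.
import numpy as np

def kf_predict(x, P, F, Q, u=None, B=None):
    x = F @ x + (B @ u if B is not None and u is not None else 0)
    return x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)                  # correct the state with the measurement
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```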
41. NMC3D: Non-Overlapping Multi-Camera Calibration Based on Sparse 3D Map.
- Author
-
Dai, Changshuai, Han, Ting, Luo, Yang, Wang, Mengyi, Cai, Guorong, Su, Jinhe, Gong, Zheng, and Liu, Niansheng
- Subjects
- *
IMAGE sensors , *FEATURE selection , *CLOSED loop systems , *PROBLEM solving , *CALIBRATION - Abstract
With the advancement of computer vision and sensor technologies, many multi-camera systems are being developed for the control, planning, and other functionalities of unmanned systems or robots. The calibration of a multi-camera system determines the accuracy of its operation; however, calibrating multi-camera systems without overlapping fields of view remains inaccurate. Furthermore, the potential of feature matching points and their spatial extent for calculating the extrinsic parameters of multi-camera systems has not yet been fully realized. To this end, we propose a multi-camera calibration algorithm that addresses the high-precision calibration of multi-camera systems without overlapping fields of view. The calibration is reduced to the problem of solving the extrinsic transformation relationship using maps constructed by the individual cameras. First, the calibration environment map is constructed by running a SLAM algorithm separately for each camera in the multi-camera system while it moves along a closed loop. Second, uniformly distributed matching points are selected among the similar feature points between the maps. These matching points are then used to solve the transformation relationship between the cameras' extrinsic parameters. Finally, the reprojection error is minimized to optimize the extrinsic transformation. We conduct comprehensive experiments in multiple scenarios and provide the resulting extrinsic parameters for multiple cameras. The results demonstrate that the proposed method accurately calibrates the extrinsic parameters, even when the main camera and auxiliary cameras are rotated 180° relative to each other. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
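The core step described above, solving the extrinsic transformation between cameras from matched 3D map points, can be illustrated with the standard SVD-based (Kabsch-style) rigid alignment, assuming the per-camera maps already share a common metric scale. This is a generic sketch, not the NMC3D implementation.

```python
# Hedged sketch: rigid transform between two point sets via SVD (Kabsch alignment).
import numpy as np

def rigid_transform_from_matches(pts_a, pts_b):
    """Return R, t with pts_b ~= R @ pts_a + t for (N, 3) matched points."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```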
42. Advancing Global Pose Refinement: A Linear, Parameter-Free Model for Closed Circuits via Quaternion Interpolation.
- Author
-
Benevides, Rubens Antônio Leite, dos Santos, Daniel Rodrigues, Pavan, Nadisson Luis, and Veiga, Luis Augusto Koenig
- Subjects
- *
OPTICAL scanners , *CIRCUIT complexity , *POINT cloud , *LEAST squares , *QUATERNIONS , *POSE estimation (Computer vision) - Abstract
Global pose refinement is a significant challenge within Simultaneous Localization and Mapping (SLAM) frameworks. For LIDAR-based SLAM systems, pose refinement is integral to correcting drift caused by the successive registration of 3D point clouds collected by the sensor. A divergence between the actual and calculated platform paths characterizes this error. In response to this challenge, we propose a linear, parameter-free model that uses a closed circuit for global trajectory corrections. Our model maps rotations to quaternions and uses Spherical Linear Interpolation (SLERP) for transitions between them. The intervals are established by the constraint set by the Least Squares (LS) method on rotation closure and are proportional to the circuit's size. Translations are globally adjusted in a distinct linear phase. Additionally, we suggest a coarse-to-fine pairwise registration method, integrating Fast Global Registration and Generalized ICP with multiscale sampling and filtering. The proposed approach is tested on three varied datasets of point clouds, including Mobile Laser Scanners and Terrestrial Laser Scanners. These diverse datasets affirm the model's effectiveness in 3D pose estimation, with substantial pose differences and efficient pose optimization in larger circuits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
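The idea of spreading a rotational closure error along a closed circuit with SLERP can be shown in a few lines: each pose receives a fraction of the closure rotation according to its position in the loop. The sketch below uses equal spacing and scipy quaternions; the paper's LS-derived intervals and its separate linear translation adjustment are not reproduced here.

```python
# Hedged sketch: distribute a rotational loop-closure error along a circuit with SLERP.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def distribute_rotation_closure(rotations, closure):
    """rotations: list of absolute scipy Rotations along the circuit.
    closure: Rotation that would make the circuit close exactly."""
    n = len(rotations)
    keys = Rotation.from_quat(np.vstack([Rotation.identity().as_quat(),
                                         closure.as_quat()]))
    slerp = Slerp([0.0, 1.0], keys)
    fractions = np.arange(n) / max(n - 1, 1)          # 0 ... 1 along the loop
    corrections = slerp(fractions)                    # identity -> full closure
    return [corrections[i] * rotations[i] for i in range(n)]
```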
43. An Effective LiDAR-Inertial SLAM-Based Map Construction Method for Outdoor Environments.
- Author
-
Liu, Yanjie, Wang, Chao, Wu, Heng, and Wei, Yanlong
- Subjects
- *
POINT cloud , *LIDAR , *PROBLEM solving , *CURVATURE , *ALGORITHMS - Abstract
SLAM (simultaneous localization and mapping) is essential for accurate positioning and reasonable path planning in outdoor mobile robots. LiDAR SLAM is currently the dominant method for creating outdoor environment maps. However, mainstream LiDAR SLAM algorithms use a single point cloud feature extraction process at the front end, and most loop closure detection at the back end is based on RNN (radius nearest neighbor) search. This results in low mapping accuracy and poor real-time performance. To solve this problem, we integrated point cloud segmentation and Scan Context loop closure detection into the advanced LiDAR-inertial SLAM algorithm LIO-SAM. First, we employed range images to extract ground points from the raw LiDAR data, followed by a BFS (breadth-first search) algorithm to cluster non-ground points and downsample outliers. Then, we calculated the local curvature to extract planar points from the ground points and corner points from the clustered, segmented non-ground points. Finally, we used the Scan Context method for loop closure detection to improve back-end mapping speed and reduce odometry drift. Experimental validation on the KITTI dataset confirmed the advantages of the proposed method, and further experiments on the Walking, Park, and other datasets comprehensively verified its accuracy and real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
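The curvature-based split into planar and corner points mentioned above follows the familiar LOAM-style smoothness measure computed over a window of neighbors along a scan ring. The sketch below illustrates that selection step only; the thresholds, window size, and the ground/non-ground handling are illustrative.

```python
# Hedged sketch of LOAM-style curvature classification along one LiDAR scan ring.
import numpy as np

def classify_by_curvature(scan_line, window=5, corner_thresh=1.0, plane_thresh=0.1):
    """scan_line: (N, 3) points ordered along one LiDAR ring."""
    N = scan_line.shape[0]
    corners, planes = [], []
    for i in range(window, N - window):
        neighbors = scan_line[i - window:i + window + 1]
        diff = neighbors.sum(axis=0) - (2 * window + 1) * scan_line[i]
        c = np.linalg.norm(diff) / np.linalg.norm(scan_line[i])   # smoothness measure
        if c > corner_thresh:
            corners.append(i)        # sharp points become corner candidates
        elif c < plane_thresh:
            planes.append(i)         # flat points become planar candidates
    return corners, planes
```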
44. Navigation benchmarking for autonomous mobile robots in hospital environment.
- Author
-
Rondoni, Cristiana, Scotto di Luzio, Francesco, Tamantini, Christian, Tagliamonte, Nevio Luigi, Chiurazzi, Marcello, Ciuti, Gastone, and Zollo, Loredana
- Abstract
The widespread adoption of robotic technologies in healthcare has opened up new perspectives for enhancing the accuracy, effectiveness and quality of medical procedures and patient care. Special attention has been given to the reliability of robots operating in environments shared with humans and to user safety, especially in the case of mobile platforms able to navigate autonomously. From the analysis of the literature, it emerges that navigation tests carried out in hospital environments are preliminary and not standardized. This paper aims to overcome the limitations in the assessment of autonomous mobile robots navigating in hospital environments by proposing: (i) a structured benchmarking protocol composed of a set of standardized tests that take into account conditions of increasing complexity, and (ii) a set of quantitative performance metrics. The proposed approach has been used in a realistic setting to assess the performance of two robotic platforms, namely HOSBOT and TIAGo, with different technical features and developed for different applications in a clinical scenario. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. RGB-D visual odometry by constructing and matching features at superpixel level.
- Author
-
Yang, Meiyi, Xiong, Junlin, and Li, Youfu
- Abstract
Visual odometry (VO) is a key technology for estimating camera motion from captured images. In this paper, we propose a novel RGB-D visual odometry that constructs and matches features at the superpixel level and shows better adaptability to different environments than state-of-the-art solutions. Superpixels are content-sensitive and aggregate information well, so they can characterize the complexity of the environment. We first design the superpixel-based feature SegPatch and its corresponding 3D representation MapPatch. By using neighborhood information, SegPatch remains distinctive in environments with different texture densities, and with the inclusion of depth measurements, MapPatch captures the scene structure. We then define a distance between SegPatches to characterize regional similarity and use a graph search method in scale space for searching and matching, improving the accuracy and efficiency of the matching process. Finally, we minimize the reprojection error over the matched SegPatches and estimate camera poses from all these correspondences. Our proposed VO is evaluated on the TUM dataset both quantitatively and qualitatively, showing a good ability to adapt to the environment under different realistic conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
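The pose estimation step described above comes down to minimizing reprojection errors between matched 3D MapPatch centers and 2D SegPatch centers. The sketch below shows such a residual for a pinhole camera; it is a generic formulation under assumed names and intrinsics, not the paper's exact cost.

```python
# Hedged sketch of pinhole reprojection residuals over 3D-2D correspondences.
import numpy as np

def reprojection_residuals(points_3d, pixels, R, t, fx, fy, cx, cy):
    """points_3d: (N, 3) matched 3D points in the map frame;
    pixels: (N, 2) matched pixel locations in the current image.
    Assumes all points lie in front of the camera (positive depth)."""
    p_cam = points_3d @ R.T + t
    u = fx * p_cam[:, 0] / p_cam[:, 2] + cx
    v = fy * p_cam[:, 1] / p_cam[:, 2] + cy
    return np.stack([u, v], axis=1) - pixels      # (N, 2) pixel errors
```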
46. Detection and Utilization of Reflections in LiDAR Scans through Plane Optimization and Plane SLAM.
- Author
-
Li, Yinjie, Zhao, Xiting, and Schwertfeger, Sören
- Subjects
- *
LIDAR , *ROBOTICS , *MIRRORS , *CLASSIFICATION , *GLASS - Abstract
In LiDAR sensing, glass, mirrors and other reflective materials often cause inconsistent data readings. This creates problems in robotics and 3D reconstruction, especially for localization, mapping and, thus, navigation. Extending our previous work, we construct a global, optimized map of reflective planes, which is then used to classify all LiDAR readings. To do so, we optimize the reflective plane parameters obtained from plane detection across multiple scans. We further apply the reflective plane estimation within a plane SLAM algorithm, highlighting the applicability of our method to robotics. As our experiments show, this approach provides superior classification accuracy compared to the single-scan approach. The code and data for this work are available as open source online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
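Once a reflective plane has been optimized, readings can be classified by their signed distance to it and, for readings that fall behind the plane, mirrored back to a plausible true location. The sketch below illustrates that geometric step only; the threshold and labeling scheme are assumptions, not the authors' classification rules.

```python
# Hedged sketch: classify LiDAR points against a reflective plane and mirror reflections.
import numpy as np

def classify_against_plane(points, n, d, eps=0.05):
    """points: (N, 3); plane n.x + d = 0 with unit normal n and the sensor on the
    positive side. Labels: 0 normal, 1 on the reflective surface, 2 reflected."""
    s = points @ n + d                                   # signed distance to the plane
    labels = np.where(s < -eps, 2, np.where(np.abs(s) <= eps, 1, 0))
    mirrored = points - 2.0 * s[:, None] * n[None, :]    # reflect across the plane
    corrected = np.where((labels == 2)[:, None], mirrored, points)
    return labels, corrected
```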
47. A Novel Point Cloud Adaptive Filtering Algorithm for LiDAR SLAM in Forest Environments Based on Guidance Information.
- Author
-
Yang, Shuhang, Xing, Yanqiu, Wang, Dejun, and Deng, Hangyu
- Subjects
- *
OPTICAL radar , *LIDAR , *STANDARD deviations , *POINT cloud , *ADAPTIVE filters - Abstract
To address the issue of accuracy in Simultaneous Localization and Mapping (SLAM) for forested areas, a novel point cloud adaptive filtering algorithm is proposed based on point cloud data obtained by a backpack Light Detection and Ranging (LiDAR) system. The algorithm employs a K-D tree to organize the spatial positions of the 3D point cloud and derives a linear model, which serves as the guidance information, from both the original and filtered point cloud data. The parameters of the linear model are determined by minimizing a cost function with an optimization strategy, and a guidance point cloud filter is then constructed from these parameters. Comparing the diameter at breast height (DBH) and tree height before and after filtering against the measured ground truth shows that the accuracy of SLAM mapping is significantly improved by filtering. The Mean Absolute Error (MAE) of DBH before and after filtering is 2.20 cm and 1.16 cm, respectively; the Root Mean Square Error (RMSE) values are 4.78 cm and 1.40 cm; and the relative RMSE values are 29.30% and 8.59%. For tree height, the MAE before and after filtering is 0.76 m and 0.40 m; the RMSE values are 1.01 m and 0.50 m; and the relative RMSE values are 7.33% and 3.65%. The experimental results validate that the proposed adaptive point cloud filtering method based on guidance information is an effective preprocessing step for enhancing the accuracy of SLAM mapping in forested areas. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
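The guidance-based linear model described above is closely related to the guided image filter carried over to point clouds: within each K-D tree neighborhood, a linear map from the guidance values to the filtered values is fit with a regularized least-squares cost. The sketch below shows that idea on per-point heights; the neighborhood size, the regularizer, and the choice of height as the filtered quantity are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of a guidance-based linear (guided-filter-style) point cloud filter.
import numpy as np
from scipy.spatial import cKDTree

def guided_point_filter(points, guide_z, k=20, eps=1e-3):
    """points: (N, 3) raw cloud; guide_z: (N,) guidance height per point."""
    tree = cKDTree(points[:, :2])
    _, idx = tree.query(points[:, :2], k=k)          # K-D tree neighborhoods in the plane
    out = points.copy()
    for i in range(points.shape[0]):
        zi, gi = points[idx[i], 2], guide_z[idx[i]]
        a = np.cov(gi, zi, bias=True)[0, 1] / (gi.var() + eps)   # regularized LS slope
        b = zi.mean() - a * gi.mean()                            # LS intercept
        out[i, 2] = a * guide_z[i] + b                           # filtered height
    return out
```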
48. Time-domain waveform feature segmentation and SLAM algorithm for ski slope point clouds.
- Author
-
焦 倩, 马 飞, 王志伟, 杨岩立, 郑莉芳, and 刘博深
- Subjects
SKI resorts ,DOWNHILL skiing ,POINT cloud ,LIDAR ,INTERPOLATION - Abstract
- Published
- 2024
- Full Text
- View/download PDF
49. Patchlpr: a multi-level feature fusion transformer network for LiDAR-based place recognition.
- Author
-
Sun, Yang, Guo, Jianhua, Wang, Haiyang, Zhang, Yuhang, Zheng, Jiushuai, and Tian, Bin
- Abstract
LiDAR-based place recognition plays a crucial role in autonomous vehicles, enabling the recognition of previously visited locations in GPS-denied environments. Localization through place recognition can be achieved by searching for nearest neighbors in a database. Two common types of place recognition features are local descriptors and global descriptors: local descriptors typically represent regions or points compactly, while global descriptors provide an overarching view of the data. Despite the significant progress made in recent years by both types of descriptors, any single representation inevitably involves information loss. To overcome this limitation, we have developed PatchLPR, a Transformer network employing multi-level feature fusion for robust place recognition. PatchLPR integrates global and local feature information, focusing on meaningful regions of the feature map to generate an environmental representation. We propose a patch feature extraction module based on the Vision Transformer to fully leverage the information and correlations of different features. We evaluated our approach on the KITTI dataset and a self-collected dataset covering over 4.2 km. The experimental results demonstrate that our method effectively utilizes multi-level features to enhance place recognition performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
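Regardless of how the descriptors are produced, the retrieval stage described above is a nearest-neighbor search over a database of global descriptors. The sketch below shows cosine-similarity retrieval purely as an illustration; the descriptor network itself is out of scope and the names are assumptions.

```python
# Hedged sketch of nearest-neighbor place retrieval over global descriptors.
import numpy as np

def retrieve_place(query_desc, db_descs, db_poses, top_k=3):
    """query_desc: (D,); db_descs: (M, D) global descriptors; db_poses: list of M poses."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                                   # cosine similarity to every database entry
    best = np.argsort(-sims)[:top_k]
    return [(int(i), float(sims[i]), db_poses[i]) for i in best]
```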
50. Three‐dimensionalized feature‐based LiDAR‐visual odometry for online mapping of unpaved road surfaces.
- Author
-
Lee, Junwoon, Kurisu, Masamitsu, and Kuriyama, Kazuya
- Subjects
PAVEMENTS ,GLOBAL Positioning System ,OPTICAL radar ,LIDAR ,HIGHWAY planning - Abstract
Automated maintenance and motion planning for unpaved roads are research areas of great interest in field robotics. Constructing such systems necessitates surface maps of unpaved roads. However, the lack of distinctive features on unpaved roads degrades the performance of light detection and ranging (LiDAR)-based mapping. To address this problem, this paper proposes three-dimensionalized feature-based LiDAR-visual odometry (TFB odometry) for the online mapping of unpaved road surfaces. TFB odometry introduces a novel interpolation concept to directly estimate the three-dimensional coordinates of image features using LiDAR. Furthermore, LiDAR intensity-weighted motion estimation is proposed to effectively mitigate the effects of dust, which significantly impair LiDAR performance. Finally, TFB odometry includes pose graph optimization to efficiently fuse global navigation satellite system data with the poses obtained from motion estimation. In field experiments on unpaved roads, TFB odometry achieved successful online full mapping and outperformed other simultaneous localization and mapping methods. It also demonstrated remarkable performance in accurately mapping road surface anomalies, even in dusty regions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
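The "three-dimensionalization" of image features can be pictured as follows: LiDAR points are projected into the image, and each visual feature borrows an interpolated depth from its nearest projected LiDAR neighbors before being back-projected through the camera intrinsics. The inverse-distance weighting and all names below are illustrative assumptions, not the paper's exact interpolation scheme.

```python
# Hedged sketch: lift 2D image features to 3D using nearby projected LiDAR depths.
import numpy as np
from scipy.spatial import cKDTree

def lift_features_to_3d(feat_px, lidar_px, lidar_depth, K_inv, k=3):
    """feat_px: (N, 2) feature pixels; lidar_px: (M, 2) projected LiDAR pixels;
    lidar_depth: (M,) depths of those LiDAR points; K_inv: (3, 3) inverse intrinsics."""
    tree = cKDTree(lidar_px)
    dists, idx = tree.query(feat_px, k=k)                 # k nearest projected LiDAR points
    w = 1.0 / np.maximum(dists, 1e-6)                     # inverse-distance weights
    depth = np.sum(w * lidar_depth[idx], axis=1) / np.sum(w, axis=1)
    rays = np.hstack([feat_px, np.ones((feat_px.shape[0], 1))]) @ K_inv.T
    return rays * depth[:, None]                          # (N, 3) points in the camera frame
```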