347 results for "OPTICAL radar"
Search Results
2. Assessment of NavVis VLX and BLK2GO SLAM Scanner Accuracy for Outdoor and Indoor Surveying Tasks.
- Author
- Gharineiat, Zahra, Tarsha Kurdi, Fayez, Henny, Krish, Gray, Hamish, Jamieson, Aaron, and Reeves, Nicholas
- Subjects
- OPTICAL radar, LIDAR, CLOUDINESS, POINT cloud, ACQUISITION of data
- Abstract
The Simultaneous Localization and Mapping (SLAM) scanner is an easy-to-use, portable Light Detection and Ranging (LiDAR) data acquisition device. Its main output is a 3D point cloud covering the scanned scene. Given the importance of accuracy in the surveying domain, this paper assesses the accuracy of two SLAM scanners, the NavVis VLX and the BLK2GO, in both outdoor and indoor environments. Two types of reference data were used: the total station (TS) and the static scanner Z+F Imager 5016. Four comparisons were tested: cloud-to-cloud, cloud-to-mesh, mesh-to-mesh, and edge detection board assessment. The assessments confirmed that the accuracy of indoor SLAM scanner measurements (5 mm) was higher than that of outdoor ones (between 10 mm and 60 mm). Moreover, the cloud-to-cloud comparison provided the most direct accuracy measurement, requiring no intermediate manipulation. Finally, given their high accuracy, scanning speed, and flexibility, and the accuracy differences between the tested cases, SLAM scanners were confirmed to be effective tools for data acquisition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
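The cloud-to-cloud comparison highlighted above as the most direct accuracy measure reduces to nearest-neighbour distances between the evaluated cloud and the reference cloud. A minimal sketch of that computation (an illustration, not the authors' implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(reference, evaluated):
    """For each evaluated point, distance to its nearest reference point."""
    tree = cKDTree(reference)
    d, _ = tree.query(evaluated, k=1)
    return d, float(d.mean()), float(np.sqrt((d ** 2).mean()))

# Toy check: a cloud rigidly shifted 5 mm along X against itself.
ref = np.random.default_rng(0).uniform(0.0, 10.0, (1000, 3))
d, mean_d, rms_d = cloud_to_cloud(ref, ref + np.array([0.005, 0.0, 0.0]))
```

With a pure 5 mm shift, the mean and RMS nearest-neighbour distances both recover the 5 mm offset, which is why the metric is so readily interpretable.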
3. Aerial Hybrid Adjustment of LiDAR Point Clouds, Frame Images, and Linear Pushbroom Images.
- Author
- Jonassen, Vetle O., Kjørsvik, Narve S., Blankenberg, Leif Erik, and Gjevestad, Jon Glenn Omholt
- Subjects
- OPTICAL radar, LIDAR, POINT cloud, PROBLEM solving, DETECTORS
- Abstract
In airborne surveying, light detection and ranging (LiDAR) strip adjustment and image bundle adjustment are customarily performed as separate processes. The bundle adjustment is usually conducted from frame images, while using linear pushbroom (LP) images in the bundle adjustment has been historically challenging due to the limited number of observations available to estimate the exterior image orientations. However, data from these three sensors conceptually provide information to estimate the same trajectory corrections, which is favorable for solving the problems of image depth estimation or the planimetric correction of LiDAR point clouds. Thus, the purpose of the presented study is to jointly estimate corrections to the trajectory and interior sensor states in a scalable hybrid adjustment between 3D LiDAR point clouds, 2D frame images, and 1D LP images. Trajectory preprocessing is performed before low-frequency corrections are estimated at discrete time steps in the subsequent adjustment using cubic spline interpolation. Furthermore, the voxelization of the LiDAR data is used to robustly and efficiently form LiDAR observations and hybrid observations between the image tie-points and the LiDAR point cloud to be used in the adjustment. The method is successfully demonstrated with an experiment, showing the joint adjustment of data from the three different sensors using the same trajectory correction model with spline interpolation of the trajectory corrections. The results show that the choice of the trajectory segmentation time step is not critical. Furthermore, photogrammetric sub-pixel planimetric accuracy is achieved, and height accuracy on the order of millimetres is obtained for the LiDAR point cloud. This is the first time these three types of sensors with fundamentally different acquisition techniques have been integrated.
The suggested methodology presents a joint adjustment of all sensor observations and lays the foundation for including additional sensors for kinematic mapping in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
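The trajectory correction model described above estimates low-frequency corrections at discrete time steps and interpolates between them with cubic splines. A minimal illustration of that interpolation step only (the knot epochs and correction values below are invented placeholders, not data from the paper):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical anchor epochs (s) and estimated north-shift corrections (m).
t_knots = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
dn_knots = np.array([0.000, 0.020, 0.015, -0.010, 0.005])

spline = CubicSpline(t_knots, dn_knots)   # continuous correction model

# Evaluate the correction at every observation epoch.
t_obs = np.linspace(0.0, 20.0, 201)
dn_obs = spline(t_obs)
```

The spline passes exactly through the estimated knot corrections while staying smooth in between, which is what makes a sparse set of time steps sufficient.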
4. Research on the Method for Recognizing Bulk Grain-Loading Status Based on LiDAR.
- Author
- Hu, Jiazun, Wen, Xin, Liu, Yunbo, Hu, Haonan, and Zhang, Hui
- Subjects
- OPTICAL radar, LIDAR, POINT cloud, DEEP learning, JUDGMENT (Psychology)
- Abstract
Grain is a common bulk cargo. To ensure optimal utilization of transportation space and prevent overflow accidents, it is necessary to observe the grain's shape and determine the loading status during the loading process. Traditional methods often rely on manual judgment, which results in high labor intensity, poor safety, and low loading efficiency. Therefore, this paper proposes a method for recognizing the bulk grain-loading status based on Light Detection and Ranging (LiDAR). This method uses LiDAR to obtain point cloud data and constructs a deep learning network to perform target recognition and component segmentation on loading vehicles, extract vehicle positions and grain shapes, and recognize and report the bulk grain-loading status. Based on the measured point cloud data of bulk grain loading, in the point cloud-classification task, the overall accuracy is 97.9% and the mean accuracy is 98.1%. In the vehicle component-segmentation task, the overall accuracy is 99.1% and the Mean Intersection over Union is 96.6%. The results indicate that the method has reliable performance in the research tasks of extracting vehicle positions, detecting grain shapes, and recognizing loading status. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
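The overall accuracy and mean (class-averaged) accuracy reported above are standard confusion-matrix metrics; a minimal sketch of how the two differ:

```python
import numpy as np

def overall_and_mean_accuracy(y_true, y_pred, n_classes):
    """Overall accuracy = correct/total; mean accuracy = average per-class recall."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                       # rows: true class, cols: predicted
    overall = np.trace(cm) / cm.sum()
    per_class = np.diag(cm) / cm.sum(axis=1)
    return float(overall), float(per_class.mean())

oa, ma = overall_and_mean_accuracy([0, 0, 0, 1, 1, 2], [0, 0, 1, 1, 1, 2], 3)
```

Mean accuracy weights every class equally, so it can exceed overall accuracy when the errors concentrate in a frequent class, as in this toy example.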
5. A Novel Point Cloud Adaptive Filtering Algorithm for LiDAR SLAM in Forest Environments Based on Guidance Information.
- Author
- Yang, Shuhang, Xing, Yanqiu, Wang, Dejun, and Deng, Hangyu
- Subjects
- OPTICAL radar, LIDAR, STANDARD deviations, POINT cloud, ADAPTIVE filters
- Abstract
To address the issue of accuracy in Simultaneous Localization and Mapping (SLAM) for forested areas, a novel point cloud adaptive filtering algorithm is proposed in this paper, based on point cloud data obtained by backpack Light Detection and Ranging (LiDAR). The algorithm employs a K-D tree to construct the spatial position information of the 3D point cloud, deriving a linear model, which serves as the guidance information, from both the original and filtered point cloud data. The parameters of the linear model are determined by minimizing the cost function using an optimization strategy, and a guidance point cloud filter is subsequently constructed based on these parameters. The results demonstrate that, comparing the diameter at breast height (DBH) and tree height before and after filtering with the measured true values, the accuracy of SLAM mapping is significantly improved after filtering. The Mean Absolute Error (MAE) values of DBH before and after filtering are 2.20 cm and 1.16 cm; the Root Mean Square Error (RMSE) values are 4.78 cm and 1.40 cm; and the relative RMSE values are 29.30% and 8.59%. For tree height, the MAE values before and after filtering are 0.76 m and 0.40 m; the RMSE values are 1.01 m and 0.50 m; the relative RMSE values are 7.33% and 3.65%. The experimental results validate that the proposed adaptive point cloud filtering method based on guided information is an effective point cloud preprocessing method for enhancing the accuracy of SLAM mapping in forested areas. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
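The guidance-information filter above fits, per local window, a linear model between the guidance signal and the input and regularizes it through the cost function. A 1-D guided-filter sketch illustrating that idea (parameters and the height-profile example are illustrative, not the paper's):

```python
import numpy as np

def guided_filter_1d(guide, src, radius=8, eps=0.01):
    """Minimal 1-D guided filter: per window, fit q = a*guide + b by
    minimising sum((a*I + b - p)**2) + eps*a**2, then average (a, b)."""
    def box(x):
        k = 2 * radius + 1
        return np.convolve(x, np.ones(k) / k, mode="same")
    mean_I, mean_p = box(guide), box(src)
    var_I = box(guide * guide) - mean_I ** 2
    cov_Ip = box(guide * src) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)              # eps regularises flat regions
    b = mean_p - a * mean_I
    return box(a) * guide + box(b)

# Denoise a synthetic linear height profile (guide = input itself).
rng = np.random.default_rng(1)
z = np.linspace(0.0, 1.0, 200)
noisy = z + rng.normal(0.0, 0.05, 200)
smoothed = guided_filter_1d(noisy, noisy)
```

Because the output is locally a linear transform of the guide, structure in the guidance signal (here, the underlying slope) is preserved while the noise is averaged out.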
6. Automatic Classification of Submerged Macrophytes at Lake Constance Using Laser Bathymetry Point Clouds.
- Author
- Wagner, Nike, Franke, Gunnar, Schmieder, Klaus, and Mandlburger, Gottfried
- Subjects
- AIRBORNE lasers, POTAMOGETON, AUTOMATIC classification, MACROPHYTES, POINT cloud, OPTICAL radar, LIDAR, BODIES of water
- Abstract
Submerged aquatic vegetation, also referred to as submerged macrophytes, provides important habitats and serves as a significant ecological indicator for assessing the condition of water bodies and for gaining insights into the impacts of climate change. In this study, we introduce a novel approach for the classification of submerged vegetation captured with bathymetric LiDAR (Light Detection And Ranging) as a basis for monitoring their state and change, and we validated the results against established monitoring techniques. Employing full-waveform airborne laser scanning, which is routinely used for topographic mapping and forestry applications on dry land, we extended its application to the detection of underwater vegetation in Lake Constance. The primary focus of this research lies in the automatic classification of bathymetric 3D LiDAR point clouds using a decision-based approach, distinguishing three vegetation classes: (i) Low Vegetation, (ii) High Vegetation, and (iii) Vegetation Canopy, based on their height and other properties such as local point density. The results reveal detailed 3D representations of submerged vegetation, enabling the identification of vegetation structures and the inference of vegetation types with reference to pre-existing knowledge. While the results within the training areas demonstrate high precision and alignment with the comparison data, the findings in independent test areas exhibit certain deficiencies that are likely addressable through corrective measures in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
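The decision-based classification above distinguishes its classes chiefly by height. A toy version of such a height rule (the thresholds below are invented placeholders; the paper also uses properties such as local point density):

```python
import numpy as np

def classify_vegetation(height_above_bottom, low_max=0.5, high_max=2.0):
    """Toy decision rule: height above the water-body bottom picks the class.
    Threshold values (metres) are illustrative, not the study's."""
    h = np.asarray(height_above_bottom, dtype=float)
    labels = np.full(h.shape, "Vegetation Canopy", dtype=object)
    labels[h <= high_max] = "High Vegetation"
    labels[h <= low_max] = "Low Vegetation"
    return labels

labels = classify_vegetation([0.2, 1.0, 3.5])
```

Applying the masks from tallest to shortest class means each point ends up in the lowest band it satisfies.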
7. Integrating NoSQL, Hilbert Curve, and R*-Tree to Efficiently Manage Mobile LiDAR Point Cloud Data.
- Author
- Yang, Yuqi, Zuo, Xiaoqing, Zhao, Kang, and Li, Yongfa
- Subjects
- OPTICAL radar, LIDAR, POINT cloud, NONRELATIONAL databases, ELECTRONIC data processing
- Abstract
The widespread use of Light Detection and Ranging (LiDAR) technology has led to a surge in three-dimensional point cloud data, although it also poses challenges in terms of data storage and indexing. Efficient storage and management of LiDAR data are prerequisites for data processing and analysis for various LiDAR-based scientific applications. Traditional relational database management systems and centralized file storage struggle to meet the storage, scaling, and specific query requirements of massive point cloud data. However, NoSQL databases, known for their scalability, speed, and cost-effectiveness, provide a viable solution. In this study, a 3D point cloud indexing strategy for mobile LiDAR point cloud data that integrates Hilbert curves, R*-trees, and B+-trees was proposed to support MongoDB-based point cloud storage and querying from the following aspects: (1) partitioning the point cloud using an adaptive space partitioning strategy to improve the I/O efficiency and ensure data locality; (2) encoding partitions using Hilbert curves to construct global indices; (3) constructing local indexes (R*-trees) for each point cloud partition so that MongoDB can natively support indexing of point cloud data; and (4) a MongoDB-oriented storage structure design based on a hierarchical indexing structure. We evaluated the efficacy of chunked point cloud data storage with MongoDB for spatial querying and found that the proposed storage strategy provides higher data encoding, index construction and retrieval speeds, and more scalable storage structures to support efficient point cloud spatial query processing compared to many mainstream point cloud indexing strategies and database systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
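The global index in step (2) above rests on Hilbert-curve encoding, which maps multi-dimensional partition coordinates to a one-dimensional key while keeping nearby cells close on the curve. The standard 2-D coordinate-to-distance mapping can be sketched as:

```python
def hilbert_xy_to_d(order, x, y):
    """Distance along the Hilbert curve for cell (x, y) of a 2**order x 2**order grid."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                 # rotate/reflect into the sub-quadrant frame
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s >>= 1
    return d
```

Consecutive curve distances always belong to spatially adjacent cells, which is exactly the locality property a range-queryable key needs.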
8. A Point Cloud Dataset of Vehicles Passing through a Toll Station for Use in Training Classification Algorithms.
- Author
- Campo-Ramírez, Alexander, Caicedo-Bravo, Eduardo F., and Bacca-Cortes, Eval B.
- Subjects
- OPTICAL radar, INTELLIGENT transportation systems, ARTIFICIAL vision, POINT cloud, DOPPLER effect
- Abstract
This work presents a point cloud dataset of vehicles passing through a toll station in Colombia to be used to train artificial vision and computational intelligence algorithms. This article details the process of creating the dataset, covering initial data acquisition, range information preprocessing, point cloud validation, and vehicle labeling. Additionally, a detailed description of the structure and content of the dataset is provided, along with some potential applications of its use. The dataset consists of 36,026 total objects divided into 6 classes: 31,432 cars, campers, vans and 2-axle trucks with a single tire on the rear axle, 452 minibuses with a single tire on the rear axle, 1158 buses, 1179 2-axle small trucks, 797 2-axle large trucks, and 1008 trucks with 3 or more axles. The point clouds were captured using a LiDAR sensor and Doppler effect speed sensors. The dataset can be used to train and evaluate algorithms for range data processing, vehicle classification, vehicle counting, and traffic flow analysis. The dataset can also be used to develop new applications for intelligent transportation systems. Dataset: The data presented in this study are openly available at: https://doi.org/10.5281/zenodo.10974361 Dataset License: Creative Commons Attribution 4.0 International [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. A Tree Segmentation Algorithm for Airborne Light Detection and Ranging Data Based on Graph Theory and Clustering.
- Author
- Seidl, Jakub, Kačmařík, Michal, and Klimánek, Martin
- Subjects
- OPTICAL radar, LIDAR, POINT cloud, DRONE aircraft, GRAPH theory, AIRBORNE lasers
- Abstract
This paper presents a single tree segmentation method applied to 3D point cloud data acquired with a LiDAR scanner mounted on an unmanned aerial vehicle (UAV). The method itself is based on clustering methods and graph theory and uses only the spatial properties of points. Firstly, the point cloud is reduced to clusters with DBSCAN. Those clusters are connected to a 3D graph, and then graph partitioning and further refinements are applied to obtain the final segments. Multiple datasets were acquired for two test sites in the Czech Republic which are covered by commercial forest to evaluate the influence of laser scanning parameters and forest characteristics on segmentation results. The accuracy of segmentation was compared with manual labels collected on top of the orthophoto image and reached between 82 and 93% depending on the test site and laser scanning parameters. Additionally, an area-based approach was employed for validation using field-measured data, where the distribution of tree heights in plots was analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. An Individual Tree Detection and Segmentation Method from TLS and MLS Point Clouds Based on Improved Seed Points.
- Author
- Chen, Qiuji, Luo, Hao, Cheng, Yan, Xie, Mimi, and Nan, Dandan
- Subjects
- OPTICAL radar, LIDAR, CLOUD condensation nuclei, POINT cloud, K-nearest neighbor classification
- Abstract
Individual Tree Detection and Segmentation (ITDS) is a key step in accurately extracting forest structural parameters from LiDAR (Light Detection and Ranging) data. However, most ITDS algorithms face challenges with over-segmentation, under-segmentation, and the omission of small trees in high-density forests. In this study, we developed a bottom–up framework for ITDS based on seed points. The proposed method is based on density-based spatial clustering of applications with noise (DBSCAN) to initially detect the trunks and filter the clusters by a set threshold. Then, the K-Nearest Neighbor (KNN) algorithm is used to reclassify the non-core clustered point cloud after threshold filtering. Furthermore, the Random Sample Consensus (RANSAC) cylinder fitting algorithm is used to correct the trunk detection results. Finally, we calculate the centroid of the trunk point clouds as seed points to achieve individual tree segmentation (ITS). In this paper, we use terrestrial laser scanning (TLS) data from natural forests in Germany and mobile laser scanning (MLS) data from planted forests in China to explore the effects of seed points on the accuracy of ITS methods; we then evaluate the efficiency of the method from three aspects: trunk detection, overall segmentation and small tree segmentation. We show the following: (1) the proposed method addresses the issues of missing segmentation and misrecognition of DBSCAN in trunk detection. Compared to using DBSCAN directly, recall (r), precision (p), and F-score (F) increased by 6.0%, 6.5%, and 0.07, respectively; (2) seed points significantly improved the accuracy of ITS methods; (3) the proposed ITDS framework achieved overall r, p, and F of 95.2%, 97.4%, and 0.96, respectively. This work demonstrates excellent accuracy in high-density forests and is able to accurately segment small trees under tall trees. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
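The trunk-detection step above, DBSCAN clustering followed by centroids as seed points, can be sketched with a minimal DBSCAN (the paper's threshold filtering, KNN reclassification, and RANSAC cylinder correction are omitted; all parameters and the synthetic data are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def dbscan(points, eps=0.3, min_pts=5):
    """Minimal DBSCAN; returns one label per point (-1 = noise)."""
    neighbours = cKDTree(points).query_ball_point(points, r=eps)
    labels = np.full(len(points), -1)
    cluster = 0
    for i in range(len(points)):
        if labels[i] != -1 or len(neighbours[i]) < min_pts:
            continue                        # already assigned, or not a core point
        labels[i] = cluster
        stack = [i]
        while stack:
            j = stack.pop()
            if len(neighbours[j]) >= min_pts:   # only core points expand
                for k in neighbours[j]:
                    if labels[k] == -1:
                        labels[k] = cluster
                        stack.append(k)
        cluster += 1
    return labels

# Two synthetic trunk cross-sections 2 m apart; seeds = cluster centroids.
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal([0.0, 0.0], 0.05, (50, 2)),
                 rng.normal([2.0, 0.0], 0.05, (50, 2))])
labels = dbscan(pts)
seeds = np.array([pts[labels == c].mean(axis=0) for c in range(labels.max() + 1)])
```

Each centroid then serves as a seed from which the crown points of that tree are grown during segmentation.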
11. LiDAR Point Cloud Super-Resolution Reconstruction Based on Point Cloud Weighted Fusion Algorithm of Improved RANSAC and Reciprocal Distance.
- Author
- Yang, Xiaoping, Ni, Ping, Li, Zhenhua, and Liu, Guanghui
- Subjects
- POINT cloud, OPTICAL radar, LIDAR, MULTICASTING (Computer networks), YIELD strength (Engineering), ALGORITHMS, SPACE-based radar
- Abstract
This paper proposes a point-by-point weighted fusion algorithm based on an improved random sample consensus (RANSAC) and inverse distance weighting to address the issue of low-resolution point cloud data obtained from light detection and ranging (LiDAR) sensors and single technologies. By fusing low-resolution point clouds with higher-resolution point clouds at the data level, the algorithm generates high-resolution point clouds, achieving the super-resolution reconstruction of LiDAR point clouds. This method effectively reduces noise in the higher-resolution point clouds while preserving the structure of the low-resolution point clouds, ensuring that the semantic information of the generated high-resolution point clouds remains consistent with that of the low-resolution point clouds. Specifically, the algorithm constructs a K-d tree using the low-resolution point cloud to perform a nearest neighbor search, establishing the correspondence between the low-resolution and higher-resolution point clouds. Next, the improved RANSAC algorithm is employed for point cloud alignment, and inverse distance weighting is used for point-by-point weighted fusion, ultimately yielding the high-resolution point cloud. The experimental results demonstrate that the proposed point cloud super-resolution reconstruction method outperforms other methods across various metrics. Notably, it reduces the Chamfer Distance (CD) metric by 0.49 and 0.29 and improves the Precision metric by 7.75% and 4.47%, respectively, compared to two other methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
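The point-by-point weighted fusion above combines, for each low-resolution point, its nearest higher-resolution neighbours with inverse-distance weights. A sketch of that fusion step alone, assuming the two clouds are already aligned (the paper's improved-RANSAC alignment is omitted, and `k`/`power` are illustrative choices):

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_fuse(low_res, high_res, k=4, power=2, eps=1e-12):
    """Blend each low-resolution point with its k nearest high-resolution
    neighbours using inverse-distance weights."""
    tree = cKDTree(high_res)
    dist, idx = tree.query(low_res, k=k)
    w = 1.0 / (dist ** power + eps)           # closer neighbours weigh more
    w /= w.sum(axis=1, keepdims=True)
    return (high_res[idx] * w[..., None]).sum(axis=1)

high = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
corner = idw_fuse(np.array([[0.0, 0, 0]]), high)     # coincides with a sample
centre = idw_fuse(np.array([[0.5, 0.5, 0]]), high)   # equidistant from all four
```

A coincident point reproduces its neighbour almost exactly, while an equidistant point falls back to the plain average, which is the behaviour that lets the fusion denoise without distorting structure.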
12. Accurate Calculation of Upper Biomass Volume of Single Trees Using Matrixial Representation of LiDAR Data.
- Author
- Tarsha Kurdi, Fayez, Lewandowicz, Elżbieta, Gharineiat, Zahra, and Shan, Jie
- Subjects
- OPTICAL radar, LIDAR, POINT cloud, BIOMASS, CENTER of mass
- Abstract
This paper introduces a novel method for accurately calculating the upper biomass of single trees using Light Detection and Ranging (LiDAR) point cloud data. The proposed algorithm involves classifying the tree point cloud into two distinct parts: the trunk point cloud and the crown point cloud. Each part is then processed using specific techniques to create a 3D model and determine its volume. The trunk point cloud is segmented based on individual stems, each of which is further divided into slices that are modeled as cylinders. On the other hand, the crown point cloud is analyzed by calculating its footprint and gravity center. The footprint is further divided into angular sectors, with each being used to create a rotating surface around the vertical line passing through the gravity center. All models are represented in a matrix format, simplifying the process of minimizing and calculating the tree's upper biomass, consisting of crown biomass and trunk biomass. To validate the proposed approach, both terrestrial and airborne datasets are utilized. A comparison with existing algorithms in the literature confirms the effectiveness of the new method. For tree dimension estimation, the study shows that the proposed algorithm achieves an average fit between 0.01 m and 0.49 m for individual trees. The maximum absolute quantitative accuracy equals 0.49 m, and the maximum relative absolute error equals 0.29%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
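The trunk modelling above divides each stem into slices that are treated as cylinders and summed into a volume. A toy version of that idea (the slice-radius estimator here, mean horizontal distance from the slice centroid, is a simplification, not the paper's matrix-based fitting):

```python
import numpy as np

def trunk_volume_from_slices(points, slice_height=0.1):
    """Approximate trunk volume by stacking per-slice cylinders."""
    z = points[:, 2]
    volume = 0.0
    for z0 in np.arange(z.min(), z.max(), slice_height):
        sl = points[(z >= z0) & (z < z0 + slice_height)]
        if len(sl) < 3:
            continue                                  # too sparse to fit
        centre = sl[:, :2].mean(axis=0)
        r = np.linalg.norm(sl[:, :2] - centre, axis=1).mean()
        volume += np.pi * r ** 2 * slice_height       # cylinder for this slice
    return volume

# Synthetic trunk: points on a cylinder of radius 0.15 m, height 2 m.
rng = np.random.default_rng(3)
theta = rng.uniform(0.0, 2 * np.pi, 4000)
zs = rng.uniform(0.0, 2.0, 4000)
trunk = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta), zs])
vol = trunk_volume_from_slices(trunk)
```

On this ideal cylinder the stacked slices recover the analytic volume to within a few percent; slicing also lets the radius vary along a real, tapering stem.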
13. Effective Training and Inference Strategies for Point Classification in LiDAR Scenes.
- Author
- Carós, Mariona, Just, Ariadna, Seguí, Santi, and Vitrià, Jordi
- Subjects
- AIRBORNE lasers, OPTICAL radar, LIDAR, SURFACE of the earth, POINT cloud, CLASSIFICATION
- Abstract
Light Detection and Ranging systems serve as robust tools for creating three-dimensional representations of the Earth's surface. These representations are known as point clouds. Point cloud scene segmentation is essential in a range of applications aimed at understanding the environment, such as infrastructure planning and monitoring. However, automating this process can result in notable challenges due to variable point density across scenes, ambiguous object shapes, and substantial class imbalances. Consequently, manual intervention remains prevalent in point classification, allowing researchers to address these complexities. In this work, we study the elements contributing to the automatic semantic segmentation process with deep learning, conducting empirical evaluations on a self-captured dataset by a hybrid airborne laser scanning sensor combined with two nadir cameras in RGB and near-infrared over a 247 km2 terrain characterized by hilly topography, urban areas, and dense forest cover. Our findings emphasize the importance of employing appropriate training and inference strategies to achieve accurate classification of data points across all categories. The proposed methodology not only facilitates the segmentation of varying size point clouds but also yields a significant performance improvement compared to preceding methodologies, achieving a mIoU of 94.24% on our self-captured dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
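The mIoU figure reported above is the intersection-over-union averaged over classes; a minimal sketch of the metric:

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes):
    """mIoU: per-class intersection-over-union, averaged over classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ious = []
    for c in range(n_classes):
        inter = ((y_true == c) & (y_pred == c)).sum()
        union = ((y_true == c) | (y_pred == c)).sum()
        if union:                      # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

miou = mean_iou([0, 0, 1, 1, 2, 2], [0, 0, 1, 2, 2, 2], 3)
```

Unlike overall accuracy, mIoU penalizes both missed points and false positives per class, which is why it is the preferred metric under the class imbalance the abstract describes.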
14. A Survey on Data Compression Techniques for Automotive LiDAR Point Clouds.
- Author
- Roriz, Ricardo, Silva, Heitor, Dias, Francisco, and Gomes, Tiago
- Subjects
- POINT cloud, CLOUD storage, DATA compression, OPTICAL radar, LIDAR, GEOGRAPHICAL perception, AUTOMOTIVE sensors
- Abstract
In the evolving landscape of autonomous driving technology, Light Detection and Ranging (LiDAR) sensors have emerged as a pivotal instrument for enhancing environmental perception. They can offer precise, high-resolution, real-time 3D representations around a vehicle, and the ability for long-range measurements under low-light conditions. However, these advantages come at the cost of the large volume of data generated by the sensor, leading to several challenges in transmission, processing, and storage operations, which can be currently mitigated by employing data compression techniques to the point cloud. This article presents a survey of existing methods used to compress point cloud data for automotive LiDAR sensors. It presents a comprehensive taxonomy that categorizes these approaches into four main groups, comparing and discussing them across several important metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Study of Tunnel Vehicle GNSS/INS/OD Combination Position Based on Lateral Distance Measurement and Lane Line Constraint.
- Author
- Zhang, Hongbin and Zhang, Xu
- Subjects
- GLOBAL Positioning System, RAILROAD tunnels, OPTICAL radar, POINT cloud
- Abstract
The high-precision dynamic positioning of highway vehicles is the foundation and prerequisite for achieving intelligent connected transportation. To address the shortcomings of the GNSS/INS combination and GNSS/INS/OD combination in tunnel vehicle positioning, this paper proposes a tunnel vehicle positioning method for the GNSS/INS/OD combination based on lateral distance measurements and lane constraints. Firstly, a lateral distance measurement of vehicles inside the tunnel is conducted based on laser radar point cloud data. Secondly, map matching positioning is performed based on lateral distance measurements, odometer, and lane markings. Experimental results demonstrate that, for a 4.6 km tunnel, the average absolute error in the lateral positioning is 0.294 m, and the longitudinal positioning error is no more than 0.6 m, which can effectively meet practical operational requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Remote Detection of Geothermal Alteration Using Airborne Light Detection and Ranging Return Intensity.
- Author
- Freski, Yan Restu, Hecker, Christoph, Meijde, Mark van der, and Setianto, Agung
- Subjects
- OPTICAL radar, LIDAR, AIRBORNE-based remote sensing, HYDROTHERMAL alteration, POINT cloud
- Abstract
The remote detection of hydrothermally altered grounds in geothermal exploration demands datasets capable of reliably detecting key outcrops with fine spatial resolution. While optical thermal or radar-based datasets have resolution limitations, airborne LiDAR offers point-based detection through its LiDAR return intensity (LRI) values, serving as a proxy for surface reflectivity. Despite this potential, few studies have explored LRI value variations in the context of hydrothermal alteration and their utility in distinguishing altered from unaltered rocks. Although the link between alteration degree and LRI values has been established under laboratory conditions, this relationship has yet to be demonstrated in airborne data. This study investigates the applicability of laboratory results to airborne LRI data for alteration detection. Utilising LRI data from an airborne LiDAR point cloud (wavelength 1064 nm, density 12 points per square metre) acquired over a prospective geothermal area in Bajawa, Indonesia, where rock sampling for a related laboratory study took place, we compare the airborne LRI values within each ground sampling area of a 3 m radius (due to hand-held GPS uncertainty) with laboratory LRI values of corresponding rock samples. Our findings reveal distinguishable differences between strongly altered and unaltered samples, with LRI discrepancies of approximately 28 for airborne data and 12 for laboratory data. Furthermore, the relative trends of airborne and laboratory-based LRI data concerning alteration degree exhibit striking similarity. These consistent results for alteration degree in laboratory and airborne data mark a significant step towards LRI-based alteration mapping from airborne platforms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Usage of a Conventional Device with LiDAR Implementation for Mesh Model Creation.
- Author
- Smrčková, Daša, Chromčák, Jakub, Ižvoltová, Jana, and Sásik, Róbert
- Subjects
- OPTICAL radar, LIDAR, BUILDING information modeling, POINT cloud, THREE-dimensional printing, WIRELESS mesh networks
- Abstract
The trend of using conventional devices like mobile phones, tablets, and other devices is gaining traction in improving customer service practices. This coincides with the growing popularity of building information modeling (BIM), which has led to increased exploration of various 3D object capture methods. Additionally, the technological boom has resulted in a surge of applications working with different 3D model formats including mesh models, point cloud, and TIN models. Among these, the usage of mesh models is experiencing particularly rapid growth. The main objective advantages of mesh models are their efficiency, scalability, flexibility, sense of detail, user-friendliness, and compatibility. The idea of this paper is to use a conventional device, specifically an iPad Pro equipped with light detection and ranging (LiDAR) technology, for creating mesh models. The different data capture methods employed by various applications will be compared to evaluate the final models' precision. The accuracy of the 3D models generated by each application will be assessed by comparing the spatial coordinates of identical points distributed irregularly across the entire surface of the chosen object. Several of the currently most-used applications were utilized for data collection. In general, 3D representations of the object or area may be visualized, analyzed, and further processed in formats such as TIN models, point cloud, or mesh models. Mesh models provide a visualization of the object mirroring the solid design of the real object, thus approximating reality in the closest way. This fact, along with automatized postprocessing after data acquisition, the ability to capture and visualize both convex and concave objects, and the possibility to use this type of 3D visualization for 3D printing, contribute to the decision to test and analyze mesh models.
Consequently, the mesh models were created via automatic post-processing, i.e., without external intervention. This fact leads to the problem of random coordinate systems being automatically pre-defined by every application. This research must deal with the resulting obstacles in order to provide a valid and credible comparative analysis. Various criteria may be applied to the mesh models' comparisons, including objective qualitative and quantitative parameters as well as subjective ones. The idea of this research is not to analyze the data acquisition process in detail, but instead to assess the possibilities of the applications for basic users. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Adaptive Scale and Correlative Attention PointPillars: An Efficient Real-Time 3D Point Cloud Object Detection Algorithm.
- Author
- Zhai, Xinchao, Gao, Yang, Chen, Shiwei, and Yang, Jingshuai
- Subjects
- POINT cloud, OBJECT recognition algorithms, OPTICAL radar, LIDAR, OBJECT recognition (Computer vision), DATA augmentation
- Abstract
Recognizing 3D objects from point clouds is a crucial technology for autonomous vehicles. Nevertheless, LiDAR (Light Detection and Ranging) point clouds are generally sparse, and they provide limited contextual information, resulting in unsatisfactory recognition performance for distant or small objects. Consequently, this article proposes an object recognition algorithm named Adaptive Scale and Correlative Attention PointPillars (ASCA-PointPillars) to address this problem. Firstly, an innovative adaptive scale pillars (ASP) encoding method is proposed, which encodes point clouds using pillars of varying sizes. Secondly, ASCA-PointPillars introduces a feature enhancement mechanism called correlative point attention (CPA) to enhance the feature associations within each pillar. Additionally, a data augmentation algorithm called random sampling data augmentation (RS-Aug) is proposed to solve the class imbalance problem. The experimental results on the KITTI 3D object dataset demonstrate that the proposed ASCA-PointPillars algorithm significantly boosts the recognition performance and RS-Aug effectively enhances the training effects on an imbalanced dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
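PointPillars-style encoders such as the one above start by grouping points into vertical x-y pillars before feature extraction. A fixed-size toy version of that grouping step (ASCA-PointPillars varies the pillar size adaptively, which is not reproduced here; `pillar_size` and `max_points` are illustrative):

```python
import numpy as np

def pillarize(points, pillar_size, max_points=32):
    """Group (N, 3) points into vertical x-y pillars.
    Returns a dict mapping (ix, iy) grid indices to an (M, 3) point array."""
    ij = np.floor(points[:, :2] / pillar_size).astype(int)
    pillars = {}
    for key, p in zip(map(tuple, ij), points):
        pillars.setdefault(key, []).append(p)
    # Cap points per pillar, as pillar encoders do before padding to a tensor.
    return {k: np.array(v)[:max_points] for k, v in pillars.items()}

pts = np.array([[0.1, 0.2, 0.0], [0.3, 0.1, 1.0], [1.7, 0.2, 0.5]])
pillars = pillarize(pts, pillar_size=0.5)
```

Each pillar's points are then encoded into a feature vector, so the choice of pillar size directly trades spatial resolution against the sparsity problems the abstract mentions for distant or small objects.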
19. A Novel Multi-LiDAR-Based Point Cloud Stitching Method Based on a Constrained Particle Filter.
- Author
- Ji, Gaofan, He, Yunhan, Li, Chuanxiang, Fan, Li, Wang, Haibo, and Zhu, Yantong
- Subjects
- POINT cloud, OPTICAL radar, LIDAR, COAL-fired power plants, SERVOMECHANISMS
- Abstract
In coal-fired power plants, coal piles serve as the fundamental management units. Acquiring point clouds of coal piles facilitates the convenient measurement of daily coal consumption and combustion efficiency. When using servo motors to drive Light Detection and Ranging (LiDAR) scanning of large-scale coal piles, the motors are subject to rotational errors due to gravitational effects. As a result, the acquired point clouds often contain significant noise. To address this issue, we propose a Rapid Point Cloud Stitching–Constrained Particle Filter (RPCS-CPF) method. By introducing random noise to simulate servo motor rotational errors, both local and global point clouds are sequentially subjected to RPCS-CPF operations, resulting in smooth and continuous coal pile point clouds. Moreover, this paper presents a coal pile boundary detection method based on gradient region growing clustering. Experimental results demonstrate that the proposed RPCS-CPF method can generate smooth and continuous coal pile point clouds, even in the presence of servo motor rotational errors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
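The core idea above — weighting candidate corrections by how well they explain noisy servo-angle measurements — can be sketched with a deterministic toy analogue (a uniform grid of candidate biases instead of random sampling and resampling; the function name and parameters are illustrative, not the paper's RPCS-CPF formulation):

```python
def estimate_bias(measurements, model_value, n=501, spread=5.0):
    """Estimate a constant servo-angle bias from noisy measurements.

    Candidate biases are spread uniformly over [-spread, spread];
    each is weighted by the inverse squared fit error, and the
    weighted mean is returned. (Toy stand-in for a particle filter:
    deterministic candidates instead of stochastic particles.)
    """
    candidates = [-spread + 2 * spread * i / (n - 1) for i in range(n)]
    weights = []
    for c in candidates:
        err = sum((m - (model_value + c)) ** 2 for m in measurements)
        weights.append(1.0 / (1e-9 + err))
    total = sum(weights)
    return sum(c * w for c, w in zip(candidates, weights)) / total
```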
20. A Coal Mine Tunnel Deformation Detection Method Using Point Cloud Data.
- Author
-
Kang, Jitong, Li, Mei, Mao, Shanjun, Fan, Yingbo, Wu, Zheng, and Li, Ben
- Subjects
- *
OPTICAL scanners , *POINT cloud , *TUNNELS , *OPTICAL radar , *COAL mining , *COAL mining safety , *MINES & mineral resources , *COAL mining accidents - Abstract
In recent years, the deformation detection technology for underground tunnels has played a crucial role in coal mine safety management. Currently, traditional methods such as the cross method and those employing the roof abscission layer monitoring instrument are primarily used for tunnel deformation detection in coal mines. With the advancement of photogrammetric methods, three-dimensional laser scanners have gradually become the primary method for deformation detection of coal mine tunnels. However, due to the high-risk confined spaces and distant distribution of coal mine tunnels, stationary three-dimensional laser scanning technology requires a significant amount of labor and time, posing certain operational risks. More recently, mobile laser scanning has become a popular method for coal mine tunnel deformation detection. This paper proposes a method for detecting point cloud deformation of underground coal mine tunnels based on a handheld three-dimensional laser scanner. This method utilizes a SLAM LiDAR to obtain complete point cloud information of the entire tunnel, while projecting the three-dimensional point cloud onto different planes to obtain the coordinates of the tunnel centerline. By using the calculated tunnel centerline, the three-dimensional point cloud data collected at different times are matched to the same coordinate system, and then the tunnel deformation parameters are analyzed separately from the global and cross-sectional perspectives. Through on-site collection of tunnel data, this paper verifies the feasibility of the algorithm and compares it with other centerline fitting and point cloud registration algorithms, demonstrating higher accuracy and meeting practical needs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
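The projection step described above — flattening the 3D cloud onto a plane to recover centerline coordinates — can be sketched as binning points along the tunnel axis and averaging the cross-track coordinate per bin (a simplified illustration assuming the tunnel runs roughly along x; the function name and `bin_size` are hypothetical):

```python
from collections import defaultdict

def centerline_from_projection(points, bin_size=1.0):
    """Approximate a tunnel centerline from a 3D point cloud.

    Points are projected onto the horizontal plane, binned along the
    (assumed x-aligned) tunnel axis, and the mean y per bin is taken
    as the centerline coordinate at that bin's midpoint.
    """
    bins = defaultdict(list)
    for x, y, z in points:
        bins[int(x // bin_size)].append(y)
    return sorted((b * bin_size + bin_size / 2, sum(ys) / len(ys))
                  for b, ys in bins.items())
```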
21. Deep Ordinal Classification in Forest Areas Using Light Detection and Ranging Point Clouds.
- Author
-
Morales-Martín, Alejandro, Mesas-Carrascosa, Francisco-Javier, Gutiérrez, Pedro Antonio, Pérez-Porras, Fernando-Juan, Vargas, Víctor Manuel, and Hervás-Martínez, César
- Subjects
- *
OPTICAL radar , *LIDAR , *POINT cloud , *CROSS-entropy method , *INCEPTISOLS , *CLASSIFICATION - Abstract
Recent advances in Deep Learning and aerial Light Detection And Ranging (LiDAR) have offered the possibility of refining the classification and segmentation of 3D point clouds to contribute to the monitoring of complex environments. In this context, the present study focuses on developing an ordinal classification model in forest areas where LiDAR point clouds can be classified into four distinct ordinal classes: ground, low vegetation, medium vegetation, and high vegetation. To do so, an effective soft labeling technique based on a novel proposed generalized exponential function (CE-GE) is applied to the PointNet network architecture. Statistical analyses based on Kolmogorov–Smirnov and Student's t-test reveal that the CE-GE method achieves the best results for all the evaluation metrics compared to other methodologies. Regarding the confusion matrices of the best alternative conceived and the standard categorical cross-entropy method, the smoothed ordinal classification obtains a more consistent classification compared to the nominal approach. Thus, the proposed methodology significantly improves the point-by-point classification of PointNet, reducing the errors in distinguishing between the middle classes (low vegetation and medium vegetation). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
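The soft labeling idea above — spreading label mass across neighboring ordinal classes instead of a one-hot target — can be sketched with a generic exponential decay over ordinal distance (a simplified stand-in, not the paper's exact CE-GE generalized exponential; `alpha` is an assumed smoothing parameter):

```python
import math

def soft_ordinal_labels(true_class, n_classes=4, alpha=1.0):
    """Soft labels that decay exponentially with ordinal distance.

    Mass is highest at the true class (e.g., one of ground, low,
    medium, high vegetation) and decreases for classes further away
    on the ordinal scale, then is normalized to sum to 1.
    """
    raw = [math.exp(-alpha * abs(k - true_class)) for k in range(n_classes)]
    total = sum(raw)
    return [r / total for r in raw]
```

Training against such targets with cross-entropy penalizes distant misclassifications (e.g., ground vs. high vegetation) more than adjacent ones.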
22. Weighted Differential Gradient Method for Filling Pits in Light Detection and Ranging (LiDAR) Canopy Height Model.
- Author
-
Zhou, Guoqing, Li, Haowen, Huang, Jing, Gao, Ertao, Song, Tianyi, Han, Xiaoting, Zhu, Shuaiguang, and Liu, Jun
- Subjects
- *
OPTICAL radar , *LIDAR , *PIXELS , *CONIFEROUS forests , *IMAGE processing , *POINT cloud - Abstract
The canopy height model (CHM) derived from LiDAR point cloud data is usually used to accurately identify the position and the canopy dimension of single trees. However, local invalid values (also called data pits) are often encountered during the generation of CHM, which results in low-quality CHM and failure in the detection of treetops. For this reason, this paper proposes an innovative method, called "pixels weighted differential gradient", to filter these data pits accurately and improve the quality of CHM. First, two characteristic parameters, the gradient index (GI) and the Z-score value (ZV), are extracted from the weighted differential gradient between the pit pixels and their eight neighbors, and then GIs and ZVs are jointly used as criteria for the initial identification of data pits. Secondly, CHMs of different resolutions are merged, using the image processing algorithm developed in this paper to distinguish canopy gaps from data pits. Finally, potential pits were filtered and filled with a reasonable value. The experimental validation and comparative analysis were carried out in a coniferous forest located in Triangle Lake, United States. The experimental results showed that our method could accurately identify potential data pits and retain the canopy structure information in CHM. The root-mean-squared error (RMSE) and mean bias error (MBE) from our method are reduced by 26% to 73% and 28% to 76%, respectively, when compared with six other methods, including the mean filter, Gaussian filter, median filter, pit-free, spike-free, and graph-based progressive morphological filtering (GPMF). The average F1 score from our method could be improved by approximately 4% to 25% when applied in single-tree extraction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
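The pit-identification step above — flagging CHM cells that are anomalously low relative to their eight neighbors — can be sketched with a simple Z-score test (a simplified illustration only; the paper additionally uses a gradient index and multi-resolution merging to separate pits from real canopy gaps, and `z_thresh` is an assumed parameter):

```python
import statistics

def detect_pits(chm, z_thresh=2.0):
    """Flag interior CHM cells far below their 8-neighborhood.

    A cell is a candidate data pit when the neighbors' mean height
    exceeds the cell's height by more than z_thresh neighbor
    standard deviations.
    """
    rows, cols = len(chm), len(chm[0])
    pits = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            nbrs = [chm[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)]
            mu = statistics.mean(nbrs)
            sd = statistics.pstdev(nbrs)
            if sd > 0 and (mu - chm[i][j]) / sd > z_thresh:
                pits.append((i, j))
    return pits
```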
23. ZUST Campus: A Lightweight and Practical LiDAR SLAM Dataset for Autonomous Driving Scenarios.
- Author
-
He, Yuhang, Li, Bo, Ruan, Jianyuan, Yu, Aihua, and Hou, Beiping
- Subjects
LIDAR ,OPTICAL radar ,POINT cloud - Abstract
This research proposes a lightweight and practical dataset with a precise elevation ground truth and extrinsic calibration for the LiDAR (Light Detection and Ranging) SLAM (Simultaneous Localization and Mapping) task in the field of autonomous driving. Our dataset focuses on more cost-effective platforms with limited computational power and low-resolution three-dimensional LiDAR sensors (16-beam LiDAR), and fills a gap in the existing literature. Our data cover abundant scenarios, including degenerate environments, dynamic objects, and large-slope terrain, to facilitate the investigation of the performance of SLAM systems. We provided the ground truth pose from RTK-GPS, carefully rectified its elevation errors, and designed an extra method to evaluate the vertical drift. The module for calibrating the LiDAR and IMU was also enhanced to ensure the precision of the point cloud data. The reliability and applicability of the dataset are fully tested through a series of experiments using several state-of-the-art LiDAR SLAM methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Three-Dimensional Point Cloud Object Detection Based on Feature Fusion and Enhancement.
- Author
-
Li, Yangyang, Ou, Zejun, Liu, Guangyuan, Yang, Zichen, Chen, Yanqiao, Shang, Ronghua, and Jiao, Licheng
- Subjects
- *
OBJECT recognition (Computer vision) , *POINT cloud , *OPTICAL radar , *LIDAR , *FEATURE extraction - Abstract
With the continuous emergence and development of 3D sensors in recent years, it has become increasingly convenient to collect point cloud data for 3D object detection tasks, such as the field of autonomous driving. But when using these existing methods, there are two problems that cannot be ignored: (1) The bird's eye view (BEV) is a widely used method in 3D object detection; however, the BEV usually compresses dimensions by combined height, dimension, and channels, which makes the process of feature extraction in feature fusion more difficult. (2) Light detection and ranging (LiDAR) has a much larger effective scanning depth, which causes the sector to become sparse in deep space and the point cloud data to be unevenly distributed. This results in few features in the distribution of neighboring points around the key points of interest. The following is the solution proposed in this paper: (1) This paper proposes multi-scale feature fusion composed of feature maps at different levels made of Deep Layer Aggregation (DLA) and a feature fusion module for the BEV. (2) A point completion network is used to improve the prediction results by completing the feature points inside the candidate boxes in the second stage, thereby strengthening their position features. Supervised contrastive learning is applied to enhance the segmentation results, improving the discrimination capability between the foreground and background. Experiments show these new additions can achieve improvements of 2.7%, 2.4%, and 2.5%, respectively, on KITTI easy, moderate, and hard tasks. Further ablation experiments show that each addition has promising improvement over the baseline. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Monitoring and Quantifying Soil Erosion and Sedimentation Rates in Centimeter Accuracy Using UAV-Photogrammetry, GNSS, and t-LiDAR in a Post-Fire Setting.
- Author
-
Alexiou, Simoni, Papanikolaou, Ioannis, Schneiderwind, Sascha, Kehrle, Valerie, and Reicherter, Klaus
- Subjects
- *
WILDFIRES , *SOIL erosion , *GLOBAL Positioning System , *SEDIMENTATION & deposition , *OPTICAL radar , *LIDAR - Abstract
Two well-established remote sensing techniques, Unmanned Aerial Vehicle (UAV) photogrammetry and t-LiDAR (terrestrial Light Detection and Ranging), were applied for seven years in a mountainous Mediterranean catchment in Greece (Ilioupoli test site, Athens), following a wildfire event in 2015. The goal was to monitor and quantify soil erosion and sedimentation rates with cm accuracy. As the frequency of wildfires in the Mediterranean has increased, this study aims to present a methodological approach for monitoring and quantifying soil erosion and sedimentation rates in post-fire conditions, through high spatial resolution field measurements acquired using a UAV survey and a t-LiDAR (or TLS—Terrestrial Laser Scanning), in combination with georadar profiles (Ground Penetrating Radar—GPR) and GNSS. This test site revealed that 40 m³ of sediment was deposited following the first intense autumn rainfall events, a value that decreased by 50% over the next six months (20 m³). The UAV–SfM technique revealed only 2 m³ of sediment deposition during the 2018–2019 analysis, highlighting the decrease in soil erosion rates three years after the wildfire event. In the following years (2017–2021), erosion and sedimentation decreased further, confirming the theoretical pattern, whereas sedimentation over the first year after the fire was very high and then sharply lessened as vegetation regenerated. The methodology proposed in this research can serve as a valuable guide for achieving high-precision sediment yield deposition measurements based on a detailed analysis of 3D modeling and a point cloud comparison, specifically leveraging the dense data collection facilitated by UAV–SfM and TLS technology. The resulting point clouds effectively replicate the fine details of the topsoil microtopography within the upland dam basin, as highlighted by the profile analysis. 
Overall, this research clearly demonstrates that after monitoring the upland area in post-fire conditions, the UAV–SfM method and LiDAR cm-scale data offer a realistic assessment of the retention dam's life expectancy and management planning. These observations are especially crucial for assessing the impacts in the wildfire-affected areas, the implementation of mitigation strategies, and the construction and maintenance of retention dams. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Single Person Identification and Activity Estimation in a Room from Waist-Level Contours Captured by 2D Light Detection and Ranging.
- Author
-
Enoki, Mizuki, Watanabe, Kai, and Noguchi, Hiroshi
- Subjects
- *
OPTICAL radar , *LIDAR , *DEEP learning , *IMAGE recognition (Computer vision) , *SINGLE people , *POINT cloud - Abstract
To develop socially assistive robots for monitoring older adults at home, a sensor is required to identify residents and capture activities within the room without violating privacy. We focused on 2D Light Detection and Ranging (2D-LIDAR) capable of robustly measuring human contours in a room. While horizontal 2D contour data can provide human location, identifying humans and activities from these contours is challenging. To address this issue, we developed novel methods using deep learning techniques. This paper proposes methods for person identification and activity estimation in a room using contour point clouds captured by a single 2D-LIDAR at hip height. In this approach, human contours were extracted from 2D-LIDAR data using density-based spatial clustering of applications with noise (DBSCAN). Subsequently, the person and activity within a 10-s interval were estimated. Two deep learning models, a Long Short-Term Memory (LSTM) network and an image classification network (VGG16), were compared. In the experiment, a total of 120 min of walking data and 100 min of additional activities (door opening, sitting, and standing) were collected from four participants. The LSTM-based and VGG16-based methods achieved accuracies of 65.3% and 89.7%, respectively, for person identification among the four individuals. Furthermore, these methods demonstrated accuracies of 94.2% and 97.9%, respectively, for the estimation of the four activities. Despite the 2D-LIDAR point clouds at hip height containing small features related to gait, the results indicate that the VGG16-based method has the capability to identify individuals and accurately estimate their activities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
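The contour-extraction step above names DBSCAN; a minimal self-contained version over 2D LiDAR hits looks like the following (an illustrative reimplementation for clarity — real pipelines would use a library implementation such as scikit-learn's, and `eps`/`min_pts` here are assumed values):

```python
from math import dist

def dbscan_2d(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN over 2D points: returns one label per point,
    with -1 marking noise. Core points (>= min_pts neighbors within
    eps) seed clusters that expand through their neighborhoods.
    """
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
        if len(nbrs) < min_pts:
            labels[i] = -1  # provisional noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reached by a core point: border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = [k for k in range(len(points)) if dist(points[j], points[k]) <= eps]
            if len(j_nbrs) >= min_pts:
                seeds.extend(j_nbrs)  # j is itself a core point: keep expanding
    return labels
```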
27. Base Study of Bridge Inspection by Modeling Touch Information Using Light Detection and Ranging.
- Author
-
Fukuoka, Tomotaka, Minami, Takahiro, and Fujiu, Makoto
- Subjects
OPTICAL radar ,LIDAR ,BRIDGE inspection ,POINT cloud ,BUDGET ,PHYSICAL contact - Abstract
In Japan, bridges are inspected via close visual examinations every five years. However, these inspections are labor intensive, and a shortage of engineers and budget constraints will restrict such inspections in the future. In recent years, efforts have been made to reduce the labor required for inspections by automating various aspects of the inspection process. In this study, we proposed and evaluated a method that applies super-resolution technology to precise point cloud information, creating distance information images that enable the use of tactile information (e.g., human touch) on the surface to be inspected. We measured the distance to the specimen using LiDAR, generated distance information images, performed super-resolution on pseudo-created low-resolution images, and evaluated them in comparison with the existing magnification method. The evaluation results suggest that the application of the super-resolution technique is effective in increasing the resolution of the boundary of the distance change. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Monitoring of Levee Deformation for Urban Flood Risk Management Using Airborne 3D Point Clouds.
- Author
-
Wang, Xianwei, Wang, Yidan, Liao, Xionghui, Huang, Ying, Wang, Yuli, Ling, Yibo, and Chan, Ting On
- Subjects
LEVEES ,POINT cloud ,OPTICAL radar ,LIDAR ,STORM surges ,FLOOD risk ,GEOGRAPHIC information systems ,CITIES & towns - Abstract
In the low-lying, river-rich Pearl River Delta in South China, an extensive network of flood defense levees, spanning over 4400 km, plays a crucial role in urban flood management. These levees are designed to withstand floods and storm surges, yet their failure can lead to significant human and economic losses, highlighting the need for robust urban flood defense strategies. This necessitates the development of a sophisticated geographic information system for the levee network and rapid, accurate assessment methods for levee conditions to support water management and flood mitigation efforts. This study focuses on the levees along the Hengmen waterway in the Pearl River Delta, utilizing airborne Light Detection and Ranging (LiDAR) technology to gather 3D spatial data of the levees. Employing the Cloth Simulation Filter (CSF) algorithm, non-ground point cloud data were extracted. The study improved upon the region-growing algorithm, using a seed point set approach for the automatic extraction of levee point cloud data. The accuracy and completeness of levee extraction were evaluated using the quality index. This method achieved effective extraction of four levee types, showing significant improvements over traditional algorithms, with extraction quality ranging from 72% to 83%. Key research outcomes include the development of a novel method for detecting localized levee depressions based on the computation of the variance of angles between normal vectors in single-phase levee point cloud data. An adaptive optimal neighborhood approach was utilized to accurately determine the normal vectors, effectively representing the local morphology of the levee point clouds. Applied in three levee depression detection experiments, this method proved effective, demonstrating the capability of single-phase data in identifying regions of levee depression deformation. 
This advancement in levee monitoring technology marks a significant step forward in enhancing urban flood defense capabilities in regions such as the cities of the Pearl River Delta in China. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
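The depression-detection idea above rests on the variance of angles between surface normals: smooth levee patches have nearly parallel normals, deformed patches do not. A minimal sketch of that statistic (illustrative only — the paper estimates normals from an adaptive optimal neighborhood, which is not reproduced here):

```python
import math

def angle_variance(normals):
    """Variance of the angles between each unit normal and the
    (normalized) mean normal of the patch. High values flag locally
    deformed (depressed) regions on an otherwise smooth surface.
    """
    mx = sum(n[0] for n in normals) / len(normals)
    my = sum(n[1] for n in normals) / len(normals)
    mz = sum(n[2] for n in normals) / len(normals)
    norm = math.sqrt(mx * mx + my * my + mz * mz)
    mean = (mx / norm, my / norm, mz / norm)
    angles = []
    for n in normals:
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n, mean))))
        angles.append(math.acos(dot))
    mu = sum(angles) / len(angles)
    return sum((a - mu) ** 2 for a in angles) / len(angles)
```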
29. Estimation of Above-Ground Forest Biomass in Nepal by the Use of Airborne LiDAR, and Forest Inventory Data.
- Author
-
KC, Yam Bahadur, Liu, Qijing, Saud, Pradip, Gaire, Damodar, and Adhikari, Hari
- Subjects
FOREST biomass ,FOREST surveys ,POINT cloud ,LIDAR ,OPTICAL radar ,CARBON cycle - Abstract
Forests play a significant role in sequestering carbon and regulating the global carbon and energy cycles. Accurately estimating forest biomass is crucial for understanding carbon stock and sequestration, forest degradation, and climate change mitigation. This study was conducted to estimate above-ground biomass (AGB) and compare the accuracy of the AGB estimation models using LiDAR (light detection and ranging) data and forest inventory data in the central Terai region of Nepal. Airborne LiDAR data were collected in 2021 and made available by Nepal Ban Nigam Limited, Government of Nepal. Thirty-two metrics derived from the laser-scanned LiDAR point cloud data were used as predictor variables (independent variables), while the AGB calculated from field data at the plot level served as the response variable (dependent variable). The predictor variables in this study were LiDAR-based height and canopy metrics. Two statistical methods, the stepwise linear regression (LR) and the random forest (RF) models, were used to estimate forest AGB. The output was an accurate map of AGB for each model. The RF method demonstrated better precision compared to the stepwise LR model, as the R² metric increased from 0.65 to 0.85, while the RMSE values decreased correspondingly from 105.88 to 60.9 ton/ha. The estimated AGB density varies from 0 to 446 ton/ha among the sample plots. This study revealed that the height-based LiDAR metrics, such as height percentile or maximum height, can accurately and precisely predict AGB quantities in tropical forests. Consequently, we confidently assert that substantial potential exists to monitor AGB levels in forests effectively by employing airborne LiDAR technology in combination with field inventory data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
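The height-based predictors highlighted above (height percentiles, maximum height) are simple per-plot statistics over the LiDAR z-coordinates. A minimal sketch of deriving such metrics (illustrative; the study derives 32 height and canopy metrics, and the nearest-rank percentile used here is an assumption):

```python
def height_metrics(z_values, percentiles=(25, 50, 75, 95)):
    """Compute simple plot-level height metrics from LiDAR return
    heights, of the kind used as predictor variables in AGB models.
    """
    zs = sorted(z_values)
    n = len(zs)
    def pct(p):
        # nearest-rank percentile over the sorted heights
        k = max(0, min(n - 1, int(round(p / 100 * (n - 1)))))
        return zs[k]
    m = {"h_max": zs[-1], "h_mean": sum(zs) / n}
    for p in percentiles:
        m[f"h_p{p}"] = pct(p)
    return m
```

A regression model (stepwise LR or RF) would then be fitted with these metrics as features and field-measured plot AGB as the response.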
30. BIMBot for Autonomous Laser Scanning in Built Environments.
- Author
-
Liang, Nanying, Ang, Yu Pin, Yeo, Kaiyun, Wu, Xiao, Xie, Yuan, and Cai, Yiyu
- Subjects
BUILT environment ,OPTICAL radar ,LIDAR ,BUILDING information modeling ,POINT cloud - Abstract
Accurate and complete 3D point clouds are essential in creating as-built building information modeling (BIM) models, although there are challenges in automating the process for 3D point cloud creation in complex environments. In this paper, an autonomous scanning system named BIMBot is introduced, which integrates advanced light detection and ranging (LiDAR) technology with robotics to create 3D point clouds. Using our specially developed algorithmic pipeline for point cloud processing, iterative registration refinement, and next best view (NBV) calculation, this system facilitates an efficient, accurate, and fully autonomous scanning process. The BIMBot's performance was validated using a case study in a campus laboratory, featuring complex structural and mechanical, electrical, and plumbing (MEP) elements. The experimental results showed that the autonomous scanning system produced 3D point cloud mappings in fewer scans than the manual method while maintaining comparable detail and accuracy, demonstrating its potential for wider application in complex built environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
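The next best view (NBV) calculation mentioned above can be sketched as a greedy information-gain choice over candidate scan poses (a generic NBV illustration, assuming each candidate's visible-voxel set has already been computed from the current point cloud; names are hypothetical):

```python
def next_best_view(candidate_views, seen):
    """Pick the candidate view whose visible-voxel set adds the most
    voxels not yet covered by previous scans.

    candidate_views: mapping of view id -> set of visible voxel ids.
    seen: set of voxel ids already covered.
    Returns (best view id, number of newly visible voxels).
    """
    best, best_gain = None, -1
    for view_id, visible in candidate_views.items():
        gain = len(visible - seen)
        if gain > best_gain:
            best, best_gain = view_id, gain
    return best, best_gain
```

Scanning would repeat this selection, merging each new scan into `seen`, until the expected gain falls below a threshold.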
31. A Review of Dynamic Object Filtering in SLAM Based on 3D LiDAR.
- Author
-
Peng, Hongrui, Zhao, Ziyu, and Wang, Liguan
- Subjects
- *
OPTICAL radar , *LIDAR , *POINT cloud , *DRONE aircraft , *OPTICAL scanners - Abstract
SLAM (Simultaneous Localization and Mapping) based on 3D LiDAR (Light Detection and Ranging) is an expanding field of research with numerous applications in the areas of autonomous driving, mobile robotics, and UAVs (Unmanned Aerial Vehicles). However, in most real-world scenarios, dynamic objects can negatively impact the accuracy and robustness of SLAM. In recent years, the challenge of achieving optimal SLAM performance in dynamic environments has led to the emergence of various research efforts, but relatively few reviews of this work exist. This work delves into the development process and current state of SLAM based on 3D LiDAR in dynamic environments. After analyzing the necessity and importance of filtering dynamic objects in SLAM, this paper is developed from two dimensions. At the solution-oriented level, mainstream methods of filtering dynamic targets in 3D point clouds are introduced in detail, such as the ray-tracing-based approach, the visibility-based approach, the segmentation-based approach, and others. Then, at the problem-oriented level, this paper classifies dynamic objects and summarizes the corresponding processing strategies for different categories in the SLAM framework, such as online real-time filtering, post-processing after mapping, and Long-term SLAM. Finally, the development trends and research directions of dynamic object filtering in SLAM based on 3D LiDAR are discussed and predicted. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Angle Assessment for Upper Limb Rehabilitation: A Novel Light Detection and Ranging (LiDAR)-Based Approach.
- Author
-
Klein, Luan C., Chellal, Arezki Abderrahim, Grilo, Vinicius, Braun, João, Gonçalves, José, Pacheco, Maria F., Fernandes, Florbela P., Monteiro, Fernando C., and Lima, José
- Subjects
- *
OPTICAL radar , *LIDAR , *IMAGE encryption , *ANGLES , *INSPECTION & review , *POINT cloud - Abstract
The accurate measurement of joint angles during patient rehabilitation is crucial for informed decision making by physiotherapists. Presently, visual inspection stands as one of the prevalent methods for angle assessment. Although it may appear the most straightforward way to assess angles, it is highly susceptible to estimation error. In light of this, the present study investigates a new approach to angle calculation: a hybrid approach leveraging both a camera and LiDAR technology, merging image data with point cloud information. This method employs AI-driven techniques to identify the individual and their joints, utilizing the point cloud data for angle computation. The tests, considering different exercises with different perspectives and distances, showed a slight improvement compared to using YOLO v7 for angle calculation. However, the improvement comes with higher system costs when compared with other image-based approaches due to the necessity of equipment such as LiDAR and a loss of fluidity during the exercise performance. Therefore, the cost–benefit of the proposed approach could be questionable. Nonetheless, the results hint at a promising field for further exploration and the potential viability of using the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. High Resolution 3D Model of Heritage Landscapes Using UAS LiDAR: The Tajos de Alhama de Granada, Spain.
- Author
-
Vílchez-Lara, María del Carmen, Molinero-Sánchez, Jorge Gabriel, Rodríguez-Moreno, Concepción, Gómez-Blanco, Antonio Jesús, and Reinoso-Gordo, Juan Francisco
- Subjects
OPTICAL radar ,LIDAR ,CULTURAL landscapes ,INTERNATIONAL visitors ,LANDSCAPES ,LANDSCAPE assessment - Abstract
The Tajos de Alhama de Granada, which since ancient times have inspired and surprised locals and strangers alike, especially foreign travelers, constitute a unique landscape, cultural, and ethnological heritage of Spain, linked to water and its old flour mills, and they are currently at serious risk of degradation. The aim of this research is to obtain a high-resolution 3D model capable of documenting this historical heritage environment with a high level of detail, using a methodology that includes a small, lightweight LiDAR (Light Detection and Ranging) system for UAS (Unmanned Aircraft System). The model obtained should serve, on the one hand, as a valuable tool for knowledge and analysis of all the elements (river, lake, ditches, dams, mills, aqueducts, and paths) that made up this place, registered as a picturesque landscape for its extraordinary beauty and uniqueness, and on the other hand, as a basis for the development of rehabilitation and architectural restoration projects that would have to be undertaken to preserve this cultural and landscape legacy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Unmanned Aerial Vehicle–Light Detection and Ranging-Based Individual Tree Segmentation in Eucalyptus spp. Forests: Performance and Sensitivity.
- Author
-
Yan, Yan, Lei, Jingjing, Jin, Jia, Shi, Shana, and Huang, Yuqing
- Subjects
EUCALYPTUS ,OPTICAL radar ,LIDAR ,FOREST surveys ,DRONE aircraft ,POINT cloud - Abstract
As an emerging powerful tool for forest resource surveys, the unmanned aerial vehicle (UAV)-based light detection and ranging (LiDAR) sensors provide an efficient way to detect individual trees. Therefore, it is necessary to explore the most suitable individual tree segmentation algorithm and analyze the sensitivity of the parameter setting to determine the optimal parameters, especially for the Eucalyptus spp. forest, which is one of the most important hardwood plantations in the world. In the study, four methods were employed to segment individual Eucalyptus spp. plantations from normalized point cloud data and canopy height model generated from the original UAV-LiDAR data. And the parameter sensitivity of each segmentation method was analyzed to obtain the optimal parameter setting according to the extraction accuracy. The performance of the segmentation result was assessed by three indices including detection rate, precision, and overall correctness. The results indicated that the watershed algorithm performed better than other methods as the highest overall correctness (F = 0.761) was generated from this method. And the segmentation methods based on the canopy height model performed better than those based on normalized point cloud data. The detection rate and overall correctness of low-density plots were better than high-density plots, while the precision was reversed. Forest structures and individual wood characteristics are important factors influencing the parameter sensitivity. The performance of segmentation was improved by optimizing the key parameters of the different algorithms. With optimal parameters, different segmentation methods can be used for different types of Eucalyptus plots to achieve a satisfying performance. This study can be applied to accurate measurement and monitoring of Eucalyptus plantation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Enhancing Tree Species Identification in Forestry and Urban Forests through Light Detection and Ranging Point Cloud Structural Features and Machine Learning.
- Author
-
Rust, Steffen and Stoinski, Bernhard
- Subjects
URBAN forestry ,OPTICAL radar ,FORESTS & forestry ,LIDAR ,OPTICAL scanners ,POINT cloud - Abstract
As remote sensing transforms forest and urban tree management, automating tree species classification is now a major challenge to harness these advances for forestry and urban management. This study investigated the use of structural bark features from terrestrial laser scanner point cloud data for tree species identification. It presents a novel mathematical approach for describing bark characteristics, which have traditionally been used by experts for the visual identification of tree species. These features were used to train four machine learning algorithms (decision trees, random forests, XGBoost, and support vector machines). These methods achieved high classification accuracies between 83% (decision tree) and 96% (XGBoost) with a data set of 85 trees of four species collected near Krakow, Poland. The results suggest that bark features from point cloud data could significantly aid species identification, potentially reducing the amount of training data required by leveraging centuries of botanical knowledge. This computationally efficient approach might allow for real-time species classification. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Advanced 3D Navigation System for AGV in Complex Smart Factory Environments.
- Author
-
Li, Yiduo, Wang, Debao, Li, Qipeng, Cheng, Guangtao, Li, Zhuoran, and Li, Peiqing
- Subjects
AUTOMATED guided vehicle systems ,GRIDS (Cartography) ,OPTICAL radar ,MOBILE robots ,POINT cloud - Abstract
The advancement of Industry 4.0 has significantly propelled the widespread application of automated guided vehicle (AGV) systems within smart factories. As the structural diversity and complexity of smart factories escalate, the conventional two-dimensional plan-based navigation systems with fixed routes have become inadequate. Addressing this challenge, we devised a novel mobile robot navigation system encompassing foundational control, map construction positioning, and autonomous navigation functionalities. Initially, employing point cloud matching algorithms facilitated the construction of a three-dimensional point cloud map within indoor environments, which was subsequently converted into a navigational two-dimensional grid map. Simultaneously, the utilization of a multi-threaded normal distribution transform (NDT) algorithm enabled precise robot localization in three-dimensional settings. Leveraging grid maps and the robot's inherent localization data, the A* algorithm was utilized for global path planning. Moreover, building upon the global path, the timed elastic band (TEB) algorithm was employed to establish a kinematic model, crucial for local obstacle avoidance planning. This research substantiated its findings through simulated experiments and real vehicle deployments: Mobile robots scanned the environment via laser radar, constructing point clouds and grid maps. This facilitated centimeter-level localization and successful circumvention of static obstacles, while simultaneously charting optimal paths to bypass dynamic hindrances. The devised navigation system demonstrated commendable autonomous navigation capabilities. Experimental evidence showcased satisfactory accuracy in practical applications, with positioning errors of 3.6 cm along the x-axis, 3.3 cm along the y-axis, and 4.3° in orientation. 
This innovation stands to substantially alleviate the low navigation precision and sluggishness encountered by AGVs within intricate smart factory environments, promising a favorable prospect for practical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
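The record above describes A* global path planning over a 2D grid map derived from the point cloud. As an illustration of that step only (not the authors' code), a minimal 4-connected A* planner on an occupancy grid might look like:

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = occupied).
    Returns the cell path from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = count()  # tie-breaker so the heap never compares parent entries
    open_set = [(h(start), next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue  # lazy deletion of stale heap entries
        came_from[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g_cost[cell] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), nxt, cell))
    return None

# Toy 5x5 grid with a wall; the planner routes around it.
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0]]
path = astar(grid, (0, 0), (4, 4))
```

In the paper's pipeline, a path like this would then be handed to the TEB local planner for kinematically feasible obstacle avoidance.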
37. Quantitative Detection Technology for Geometric Deformation of Pipelines Based on LiDAR.
- Author
-
Zhao, Min, Fang, Zehao, Ding, Ning, Li, Nan, Su, Tengfei, and Qian, Huihuan
- Subjects
- *
OPTICAL scanners , *PIPELINE inspection , *UNDERGROUND pipelines , *LIDAR , *BURIED pipes (Engineering) , *CLOSED-circuit television , *OPTICAL radar , *SPACE-based radar - Abstract
This paper introduces a novel method for enhancing underground pipeline inspection, specifically addressing limitations associated with traditional closed-circuit television (CCTV) systems. These systems, commonly used for capturing visual data of sewer system deformations, heavily rely on subjective human expertise, leading to limited accuracy in detection. Furthermore, their inability to perform quantitative analyses of deformation extent hampers overall inspection effectiveness. Our proposed method leverages laser point cloud data and employs a 3D scanner for objective detection of geometric deformations in underground pipe corridors. By utilizing this approach, we enable a quantitative assessment of blockage levels, offering a significant improvement over traditional CCTV-based methods. The key advantages of our method lie in its objectivity and quantification capabilities, ultimately enhancing detection reliability, accuracy, and overall inspection efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
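The abstract does not publish the deformation metric itself; one plausible sketch of a quantitative assessment is to fit a circle to a scanned pipe cross-section and report its ovality. The Kåsa least-squares circle fit below is an illustrative assumption, not the authors' method:

```python
import numpy as np

def kasa_circle_fit(xy):
    """Kåsa algebraic circle fit: returns center (a, b) and radius r.
    Solves x^2 + y^2 = 2ax + 2by + c in a least-squares sense."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

def ovality_percent(xy):
    """Deviation of the cross-section from the fitted circle, in percent."""
    a, b, r = kasa_circle_fit(xy)
    d = np.hypot(xy[:, 0] - a, xy[:, 1] - b)
    return 100.0 * (d.max() - d.min()) / r, r

# Synthetic cross-section: a nominal 0.5 m radius pipe squashed into an ellipse.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
ring = np.column_stack([0.52 * np.cos(theta), 0.48 * np.sin(theta)])
oval, r_fit = ovality_percent(ring)
```

A blockage level could then be thresholded on the ovality value; the 0.52/0.48 m ellipse here is invented test data.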
38. Velocity Estimation from LiDAR Sensors Motion Distortion Effect.
- Author
-
Haas, Lukas, Haider, Arsalan, Kastner, Ludwig, Zeh, Thomas, Poguntke, Tim, Kuba, Matthias, Schardt, Michael, Jakobi, Martin, and Koch, Alexander W.
- Subjects
- *
MOTION detectors , *OBJECT recognition (Computer vision) , *MOTION , *OPTICAL radar , *LIDAR , *RELATIVE velocity - Abstract
Many modern automated vehicle sensor systems use light detection and ranging (LiDAR) sensors. The prevailing technology is scanning LiDAR, where a collimated laser beam illuminates objects sequentially point-by-point to capture 3D range data. In current systems, the point clouds from the LiDAR sensors are mainly used for object detection. To estimate the velocity of an object of interest (OoI) in the point cloud, the tracking of the object or sensor data fusion is needed. Scanning LiDAR sensors show the motion distortion effect, which occurs when objects have a relative velocity to the sensor. Often, this effect is filtered out using sensor data fusion so that an undistorted point cloud can be used for object detection. In this study, we developed a method using an artificial neural network to estimate an object's velocity and direction of motion in the sensor's field of view (FoV) based on the motion distortion effect without any sensor data fusion. This network was trained and evaluated with a synthetic dataset featuring the motion distortion effect. With the method presented in this paper, one can estimate the velocity and direction of an OoI that moves independently from the sensor from a single point cloud using only a single sensor. The method achieves a root mean squared error (RMSE) of 0.1187 m s−1 and a two-sigma confidence interval of [ − 0.0008 m s−1, 0.0017 m s−1] for the axis-wise estimation of an object's relative velocity, and an RMSE of 0.0815 m s−1 and a two-sigma confidence interval of [ 0.0138 m s−1, 0.0170 m s−1] for the estimation of the resultant velocity. The extracted velocity information (4D-LiDAR) is available for motion prediction and object tracking and can lead to more reliable velocity data due to more redundancy for sensor data fusion. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
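The motion distortion cue the network exploits can be reproduced synthetically: points captured later in the scan have shifted by v·t, so a moving object appears stretched or compressed along its motion direction. A toy simulation (2D geometry and a 0.1 s scan period are assumptions of this sketch):

```python
import numpy as np

def scan_with_distortion(points, velocity, scan_period=0.1):
    """Simulate scanning-LiDAR motion distortion: each point is captured at
    a slightly different time while the sensor sweeps its FoV, so a point
    seen at time t has moved v*t from its position at scan start.

    points   : (N, 2) object points at t = 0, in scan (azimuth) order
    velocity : (2,) relative velocity in m/s
    """
    t = np.linspace(0.0, scan_period, len(points))  # per-point timestamps
    return points + t[:, None] * np.asarray(velocity)

# A 1 m line of points scanned left-to-right while moving at 10 m/s in +x:
obj = np.column_stack([np.linspace(0, 1, 50), np.full(50, 5.0)])
distorted = scan_with_distortion(obj, velocity=(10.0, 0.0))

# The apparent length exceeds the true 1 m extent by v * scan_period,
# which is exactly the cue a network can invert to recover the velocity.
apparent = distorted[:, 0].max() - distorted[:, 0].min()
v_est = (apparent - 1.0) / 0.1
```

Here the inversion is trivial because the object's true extent is known; the paper's network learns to do this without that knowledge.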
39. Airborne LiDAR Strip Adjustment Method Based on Point Clouds with Planar Neighborhoods.
- Author
-
Sun, Zhenxing, Zhong, Ruofei, Wu, Qiong, and Guo, Jiao
- Subjects
- *
POINT cloud , *LIDAR , *OPTICAL radar , *NEIGHBORHOODS , *URBAN planning - Abstract
Airborne light detection and ranging (LiDAR) data are increasingly used in various fields such as topographic mapping, urban planning, and emergency management. A necessary processing step in the application of airborne LiDAR data is the elimination of mismatch errors. This paper proposes a new method for airborne LiDAR strip adjustment based on point clouds with planar neighborhoods; this method is intended to eliminate errors in airborne LiDAR point clouds. Initially, standard pre-processing tasks such as denoising, ground separation, and resampling are performed on the airborne LiDAR point clouds. Subsequently, this paper introduces a unique approach to extract point clouds with planar neighborhoods which is designed to enhance the registration accuracy of the iterative closest point (ICP) algorithm within the context of airborne LiDAR point clouds. Following the registration of the point clouds using the ICP algorithm, tie points are extracted via a point-to-plane projection method. Finally, a strip adjustment calculation is executed using the extracted tie points, in accordance with the strip adjustment equation for airborne LiDAR point clouds that was derived in this study. Three sets of airborne LiDAR point cloud data were utilized in the experiment outlined in this paper. The results indicate that the proposed strip adjustment method can effectively eliminate mismatch errors in airborne LiDAR point clouds, achieving a registration accuracy and absolute accuracy of 0.05 m. Furthermore, this method's processing efficiency was more than five times higher than that of traditional methods such as ICP and LS3D. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
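The tie-point extraction step above relies on point-to-plane projection. A minimal sketch, in which an SVD plane fit stands in for the paper's planar-neighborhood machinery:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points: returns (centroid, unit normal).
    The smallest right-singular vector of the centered points is the normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def project_to_plane(p, centroid, normal):
    """Point-to-plane projection: drop p onto the fitted plane."""
    return p - ((p - centroid) @ normal) * normal

# Four samples of the z = 0 plane, and a point 0.3 m above it:
plane_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
c, n = fit_plane(plane_pts)
tie = project_to_plane(np.array([0.5, 0.5, 0.3]), c, n)
```

In the adjustment itself, pairs of such projected points from overlapping strips would serve as tie-point observations.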
40. Point-Rich: Enriching Sparse Light Detection and Ranging Point Clouds for Accurate Three-Dimensional Object Detection.
- Author
-
Zhang, Yanchao, Zheng, Yinuo, Zhu, Dingkun, Wu, Qiaoyun, Zeng, Hansheng, Gu, Lipeng, and Zhai, Xiangping Bryce
- Subjects
- *
OBJECT recognition (Computer vision) , *OPTICAL radar , *LIDAR , *POINT cloud - Abstract
LiDAR point clouds often suffer from sparsity and uneven distributions in outdoor scenes, leading to the poor performance of cutting-edge 3D object detectors. In this paper, we propose Point-Rich, which is designed to improve the performance of 3D object detection. Point-Rich consists of two key modules: HighDensity and HighLight. The HighDensity module addresses the issue of density imbalance by enhancing the point cloud density. The HighLight module leverages image semantic features to enrich the point clouds. Importantly, Point-Rich imposes no restrictions on the 3D object detection architecture and remains unaffected by feature or depth blur. The experimental results show that compared with PointPillars on the KITTI dataset, the mAP of Point-Rich under the bird's-eye view improves by 5.53% on average. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
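The HighLight module "leverages image semantic features to enrich the point clouds"; the abstract gives no implementation, but a PointPainting-style sketch conveys the idea. The function name and the precomputed pixel coordinates are assumptions of this illustration (the camera projection is assumed done elsewhere):

```python
import numpy as np

def paint_points(points, seg_scores, uv):
    """Append per-pixel image semantic scores to each LiDAR point.

    points     : (N, 3) xyz coordinates
    seg_scores : (H, W, C) per-pixel class scores from an image segmenter
    uv         : (N, 2) integer (col, row) pixel coordinates of each
                 projected point
    Returns (N, 3 + C) semantically enriched points.
    """
    painted = seg_scores[uv[:, 1], uv[:, 0]]  # gather scores per point
    return np.concatenate([points, painted], axis=1)

# Toy example: 2 points, a 4x4 image with 3-class scores.
scores = np.zeros((4, 4, 3))
scores[1, 2] = [0.1, 0.8, 0.1]
scores[3, 0] = [0.9, 0.05, 0.05]
pts = np.array([[1.0, 2.0, 0.5], [4.0, 1.0, 0.2]])
uv = np.array([[2, 1], [0, 3]])
rich = paint_points(pts, scores, uv)
```

The enriched points can then feed any detector unchanged, which matches the abstract's claim that Point-Rich imposes no restrictions on the detection architecture.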
41. Coarse Alignment Methodology of Point Cloud Based on Camera Position/Orientation Estimation Model.
- Author
-
Yoo, Suhong and Kim, Namhoon
- Subjects
POINT cloud ,OPTICAL radar ,LIDAR ,PINHOLE cameras ,LASER based sensors ,CAMERAS - Abstract
This study presents a methodology for the coarse alignment of light detection and ranging (LiDAR) point clouds, which involves estimating the position and orientation of each station using the pinhole camera model and a position/orientation estimation algorithm. Ground control points are obtained using LiDAR camera images and the point clouds are obtained from the reference station. The estimated position and orientation vectors are used for point cloud registration. To evaluate the accuracy of the results, the positions of the LiDAR and the target were measured using a total station, and a comparison was carried out with the results of semi-automatic registration. The proposed methodology yielded an estimated mean LiDAR position error of 0.072 m, which was similar to the semi-automatic registration value of 0.070 m. When the point clouds of each station were registered using the estimated values, the mean registration accuracy was 0.124 m, while the semi-automatic registration accuracy was 0.072 m. The high accuracy of semi-automatic registration is due to its capability for performing both coarse alignment and refined registration. The comparison between the point cloud with refined alignment using the proposed methodology and the point-to-point distance analysis revealed that the average distance was measured at 0.0117 m. Moreover, 99% of the points exhibited distances within 0.0696 m. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
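Once a station's position and orientation are estimated, coarse alignment reduces to applying one rigid transform per station. A minimal sketch, assuming a roll/pitch/yaw parameterization (the paper's exact angle convention is not stated in the abstract):

```python
import numpy as np

def pose_to_transform(position, rpy):
    """Build a 4x4 rigid transform from an estimated station position and
    roll/pitch/yaw angles (radians), applied in Z-Y-X order."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = position
    return T

def register(points, T):
    """Move a station's scan into the reference frame."""
    homog = np.column_stack([points, np.ones(len(points))])
    return (homog @ T.T)[:, :3]

# A scan taken 2 m east of the reference station, rotated 90° about Z:
T = pose_to_transform(position=(2.0, 0.0, 0.0), rpy=(0.0, 0.0, np.pi / 2))
world = register(np.array([[1.0, 0.0, 0.0]]), T)
```

The refined registration that follows (in the semi-automatic workflow) would then operate on these coarsely aligned clouds.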
42. ExistenceMap-PointPillars: A Multifusion Network for Robust 3D Object Detection with Object Existence Probability Map †.
- Author
-
Hariya, Keigo, Inoshita, Hiroki, Yanase, Ryo, Yoneda, Keisuke, and Suganuma, Naoki
- Subjects
- *
OBJECT recognition (Computer vision) , *OPTICAL radar , *LIDAR , *DEEP learning , *POINT cloud - Abstract
Recognition of surrounding objects is crucial for ensuring the safety of automated driving systems. In the realm of 3D object recognition through deep learning, several methods incorporate the fusion of Light Detection and Ranging (LiDAR) and camera data. The effectiveness of the LiDAR–camera fusion approach is widely acknowledged due to its ability to provide a richer source of information for object detection compared to methods that rely solely on individual sensors. Within the framework of the LiDAR–camera multistage fusion method, challenges arise in maintaining stable object recognition, especially under adverse conditions where object detection in camera images becomes difficult, such as at night or in rainy weather. In this research paper, we introduce "ExistenceMap-PointPillars", a novel and effective approach for 3D object detection that leverages information from multiple sensors. This approach involves a straightforward modification of the LiDAR-based 3D object detection network. The core concept of ExistenceMap-PointPillars revolves around the integration of pseudo 2D maps, which depict the estimated object existence regions derived from the fused sensor data in a probabilistic manner. These maps are then incorporated into a pseudo image generated from a 3D point cloud. Our experimental results, based on our proprietary dataset, demonstrate the substantial improvements achieved by ExistenceMap-PointPillars. Specifically, it enhances the mean Average Precision (mAP) by a noteworthy +4.19% compared to the conventional PointPillars method. Additionally, we conducted an evaluation of the network's response using Grad-CAM in conjunction with ExistenceMap-PointPillars, which exhibited a heightened focus on the existence regions of objects within the pseudo 2D map. This focus resulted in a reduction in the number of false positives. 
In summary, our research presents ExistenceMap-PointPillars as a valuable advancement in the field of 3D object detection, offering improved performance and robustness, especially in challenging environmental conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. A Version Control System for Point Clouds.
- Author
-
Ogayar-Anguita, Carlos J., López-Ruiz, Alfonso, Segura-Sánchez, Rafael J., and Rueda-Ruiz, Antonio J.
- Subjects
- *
POINT cloud , *OPTICAL radar , *LIDAR , *THIRD-party software - Abstract
This paper presents a novel version control system for point clouds, which allows the complete editing history of a dataset to be stored. For each intermediate version, this system stores only the information that changes with respect to the previous one, which is compressed using a new strategy based on several algorithms. It allows undo/redo functionality in memory, which serves to optimize the operation of the version control system. It can also manage changes produced from third-party applications, which makes it ideal to be integrated into typical Computer-Aided Design workflows. In addition to automated management of incremental versions of point cloud datasets, the proposed system has a much lower storage footprint than the manual backup approach for most common point cloud workflows, which is essential when working with LiDAR (Light Detection and Ranging) data in the context of spatial big data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
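Storing "only the information that changes with respect to the previous" version can be illustrated as a set difference on coordinate keys. The millimetre rounding and the (removed, added) delta format below are assumptions of this sketch, not the system's actual compressed representation:

```python
import numpy as np

def cloud_delta(old, new, decimals=3):
    """Delta between two point cloud versions: (removed, added) point sets,
    keyed on coordinates rounded to `decimals` (millimetre keys here) so
    floating-point noise does not defeat the matching."""
    key = lambda pts: {tuple(p) for p in np.round(pts, decimals)}
    old_k, new_k = key(old), key(new)
    return sorted(old_k - new_k), sorted(new_k - old_k)

def apply_delta(old, delta, decimals=3):
    """Reconstruct the next version from the previous one plus its delta."""
    removed, added = delta
    removed = set(removed)
    keep = [p for p in np.round(old, decimals) if tuple(p) not in removed]
    return np.array(keep + [list(p) for p in added])

# Version 1 -> version 2: one point deleted, one point added.
v1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
v2 = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
delta = cloud_delta(v1, v2)
```

Only the delta needs to be stored per version; replaying deltas from the base cloud reproduces any intermediate state, which is also what enables the undo/redo functionality.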
44. An Introduction to the Evaluation of Perception Algorithms and LiDAR Point Clouds Using a Copula-Based Outlier Detector.
- Author
-
Reis, Nuno, Machado da Silva, José, and Correia, Miguel Velhote
- Subjects
- *
DRIVER assistance systems , *LIDAR , *POINT cloud , *OPTICAL radar , *DETECTORS , *VISIBLE spectra - Abstract
The increased demand for and use of autonomous driving and advanced driver assistance systems has highlighted the issue of abnormalities occurring within the perception layers, some of which may result in accidents. Recent publications have noted the lack of standardized independent testing formats and insufficient methods with which to analyze, verify, and qualify LiDAR (Light Detection and Ranging)-acquired data and their subsequent labeling. While camera-based approaches benefit from a significant amount of long-term research, images captured through the visible spectrum can be unreliable in situations with impaired visibility, such as dim lighting, fog, and heavy rain. A redoubled focus upon LiDAR usage would combat these shortcomings; however, research involving the detection of anomalies and the validation of gathered data is few and far between when compared to its counterparts. This paper aims to expand the knowledge on how to evaluate LiDAR data by introducing a novel method with the ability to detect these patterns and complement other performance evaluators while using a statistical approach. Although it is preliminary, the proposed methodology shows promising results in the evaluation of an algorithm's confidence score, the impact that weather and road conditions may have on data, and fringe cases in which the data may be insufficient or otherwise unusable. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
45. Synthetic Forest Stands and Point Clouds for Model Selection and Feature Space Comparison.
- Author
-
Bester, Michelle S., Maxwell, Aaron E., Nealey, Isaac, Gallagher, Michael R., Skowronski, Nicholas S., and McNeil, Brenden E.
- Subjects
- *
POINT cloud , *OPTICAL radar , *LIDAR , *SUPPORT vector machines , *K-nearest neighbor classification - Abstract
The challenges inherent in field validation data, and real-world light detection and ranging (lidar) collections make it difficult to assess the best algorithms for using lidar to characterize forest stand volume. Here, we demonstrate the use of synthetic forest stands and simulated terrestrial laser scanning (TLS) for the purpose of evaluating which machine learning algorithms, scanning configurations, and feature spaces can best characterize forest stand volume. The random forest (RF) and support vector machine (SVM) algorithms generally outperformed k-nearest neighbor (kNN) for estimating plot-level vegetation volume regardless of the input feature space or number of scans. Also, the measures designed to characterize occlusion using spherical voxels generally provided higher predictive performance than measures that characterized the vertical distribution of returns using summary statistics by height bins. Given the difficulty of collecting a large number of scans to train models, and of collecting accurate and consistent field validation data, we argue that synthetic data offer an important means to parameterize models and determine appropriate sampling strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
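One of the feature spaces compared above characterizes occlusion with spherical voxels. A minimal occupancy measure over range/azimuth/elevation bins can sketch the idea (the bin counts and the uniform test clouds are arbitrary choices for this illustration):

```python
import numpy as np

def spherical_voxel_occupancy(points, n_r=8, n_az=18, n_el=9, r_max=20.0):
    """Bin TLS returns into spherical voxels (range x azimuth x elevation)
    around a scanner at the origin and report the fraction occupied --
    an occlusion-style summary of scan coverage."""
    x, y, z = points.T
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)
    el = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    idx = (np.clip((r / r_max * n_r).astype(int), 0, n_r - 1),
           np.clip(((az + np.pi) / (2 * np.pi) * n_az).astype(int), 0, n_az - 1),
           np.clip(((el + np.pi / 2) / np.pi * n_el).astype(int), 0, n_el - 1))
    occ = np.zeros((n_r, n_az, n_el), dtype=bool)
    occ[idx] = True
    return occ.mean()

# A heavily occluded (sparse) scan occupies far fewer voxels than a dense one.
rng = np.random.default_rng(1)
occ_sparse = spherical_voxel_occupancy(rng.uniform(-5, 5, (50, 3)))
occ_dense = spherical_voxel_occupancy(rng.uniform(-5, 5, (5000, 3)))
```

Per-voxel occupancy vectors like this would then feed the RF/SVM/kNN models compared in the paper, alongside the height-bin summary statistics.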
46. Multi-Lidar System Localization and Mapping with Online Calibration.
- Author
-
Wang, Fang, Zhao, Xilong, Gu, Hengzhi, Wang, Lida, Wang, Siyu, and Han, Yi
- Subjects
OPTICAL radar ,INTELLIGENT transportation systems ,CALIBRATION ,TRAFFIC accidents ,POINT cloud ,ROAD safety measures - Abstract
Currently, the demand for automobiles is increasing, and daily travel is increasingly reliant on cars. However, accompanying this trend are escalating traffic safety issues. Surveys indicate that most traffic accidents stem from driver errors, both intentional and unintentional. Consequently, within the framework of vehicular intelligence, intelligent driving uses computer software to assist drivers, thereby reducing the likelihood of road safety incidents and traffic accidents. Lidar, an essential facet of perception technology, plays an important role in intelligent vehicle driving. In real-world driving scenarios, the detection range of a single laser radar is limited. Multiple laser radars can improve the detection range and point density, effectively mitigating state estimation degradation in unstructured environments. This, in turn, enhances the precision and accuracy of synchronous positioning and mapping. Nonetheless, the relationship governing pose transformation between multiple lidars is intricate. Over extended periods, perturbations arising from vibrations, temperature fluctuations, or collisions can compromise the initially converged external parameters. In view of these concerns, this paper introduces a system capable of concurrent multi-lidar positioning and mapping, as well as real-time online external parameter calibration. The method first preprocesses the original measurement data, extracts linear and planar features, and rectifies motion distortion. Subsequently, leveraging degradation factors, the convergence of the multi-lidar external parameters is detected in real time. When deterioration in external parameters is identified, the local map of the main laser radar and the feature point cloud of the auxiliary laser radar are associated to realize online calibration. This is succeeded by frame-to-frame matching according to the converged external parameters, culminating in laser odometer computation. 
Introducing ground constraints and loop-closure detection constraints in the back-end optimization rectifies the globally estimated poses. Concurrently, the feature point cloud is aligned with the global map, and the map update is completed. Finally, experimental validation is conducted on data acquired from Chang'an University to substantiate the system's online calibration and positioning mapping accuracy, robustness, and real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
47. Research on an Adaptive Method for the Angle Calibration of Roadside LiDAR Point Clouds.
- Author
-
Wen, Xin, Hu, Jiazun, Chen, Haiyu, Huang, Shichun, Hu, Haonan, and Zhang, Hui
- Subjects
- *
POINT cloud , *LIDAR , *OPTICAL radar , *ROADSIDE improvement , *EULER angles , *DOPPLER lidar , *CAMERA calibration - Abstract
Light Detection and Ranging (LiDAR), a laser-based technology for environmental perception, finds extensive applications in intelligent transportation. Deployed on roadsides, it provides real-time global traffic data, supporting road safety and research. To overcome accuracy issues arising from sensor misalignment and to facilitate multi-sensor fusion, this paper proposes an adaptive calibration method. The method defines an ideal coordinate system with the road's forward direction as the X-axis and the intersection line between the vertical plane of the X-axis and the road surface plane as the Y-axis. This method utilizes the Kalman filter (KF) for trajectory smoothing and employs the random sample consensus (RANSAC) algorithm for ground fitting, obtaining the projection of the ideal coordinate system within the LiDAR system coordinate system. By comparing the two coordinate systems and calculating Euler angles, the point cloud is angle-calibrated using rotation matrices. Based on measured data from roadside LiDAR, this paper validates the calibration method. The experimental results demonstrate that the proposed method achieves high precision, with calculated Euler angle errors consistently below 1.7%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
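After RANSAC supplies the ground-plane normal, the calibration above reduces to a rotation that aligns that normal with the vertical. A Rodrigues-formula sketch, with a hard-coded tilted normal standing in for the RANSAC fit:

```python
import numpy as np

def rotation_aligning(n, target=np.array([0.0, 0.0, 1.0])):
    """Rodrigues rotation matrix taking unit vector n onto target."""
    n = n / np.linalg.norm(n)
    v = np.cross(n, target)            # rotation axis (unnormalized)
    c = float(n @ target)              # cos of rotation angle
    s = np.linalg.norm(v)              # sin of rotation angle
    if s < 1e-12:                      # already aligned (or exactly opposite)
        return np.eye(3) if c > 0 else -np.eye(3)
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]]) / s   # skew matrix of the unit axis
    return np.eye(3) + s * K + (1 - c) * (K @ K)

# Ground plane tilted 10° about the Y-axis, as seen by a misaligned sensor:
tilt = np.deg2rad(10)
normal = np.array([np.sin(tilt), 0.0, np.cos(tilt)])
R = rotation_aligning(normal)
levelled = R @ normal
```

Applying `R` to the whole roadside point cloud levels the ground; the paper additionally uses the Kalman-filtered trajectory direction to fix the X-axis of its ideal coordinate system, which this sketch omits.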
48. Contribution of Geometric Feature Analysis for Deep Learning Classification Algorithms of Urban LiDAR Data.
- Author
-
Tarsha Kurdi, Fayez, Amakhchan, Wijdan, Gharineiat, Zahra, Boulaassal, Hakim, and El Kharki, Omar
- Subjects
- *
DEEP learning , *CLASSIFICATION algorithms , *MACHINE learning , *GEOMETRIC analysis , *OPTICAL radar , *LIDAR , *POINT cloud - Abstract
The use of a Machine Learning (ML) classification algorithm to classify airborne urban Light Detection And Ranging (LiDAR) point clouds into main classes such as buildings, terrain, and vegetation has been widely accepted. This paper assesses two strategies to enhance the effectiveness of the Deep Learning (DL) classification algorithm. Two ML classification approaches are developed and compared in this context. These approaches utilize the DL Pipeline Network (DLPN), which is tailored to minimize classification errors and maximize accuracy. The geometric features calculated from a point and its neighborhood are analyzed to select the features that will be used in the input layer of the classification algorithm. To evaluate the contribution of the proposed approach, five point-cloud datasets with different urban typologies and ground topography are employed. These point clouds exhibit variations in point density, accuracy, and the type of aircraft used (drone and plane). This diversity in the tested point clouds enables the assessment of the algorithm's efficiency. The obtained high classification accuracy between 89% and 98% confirms the efficacy of the developed algorithm. Finally, the results of the adopted algorithm are compared with both rule-based and ML algorithms, providing insights into the positioning of DL classification algorithms among other strategies suggested in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
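Typical geometric features "calculated from a point and its neighborhood" are the covariance eigenvalue descriptors (linearity, planarity, sphericity); whether the paper uses exactly these is an assumption of this sketch, but they illustrate the kind of input-layer features being selected:

```python
import numpy as np

def eigen_features(neighborhood):
    """Covariance eigenvalue features of a point's neighborhood:
    linearity (l1-l2)/l1, planarity (l2-l3)/l1, sphericity l3/l1,
    with eigenvalues sorted l1 >= l2 >= l3."""
    centered = neighborhood - neighborhood.mean(axis=0)
    ev = np.linalg.eigvalsh(centered.T @ centered / len(neighborhood))
    l1, l2, l3 = ev[2], ev[1], ev[0]   # eigvalsh returns ascending order
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

# A wire-like neighborhood scores high linearity; a roof-like patch
# scores high planarity. Noise levels are invented for illustration.
rng = np.random.default_rng(2)
t = rng.uniform(-1, 1, 2000)
wire = np.column_stack([t,
                        0.01 * rng.standard_normal(2000),
                        0.01 * rng.standard_normal(2000)])
roof = rng.uniform(-1, 1, (2000, 3))
roof[:, 2] *= 0.01
lin_w, pla_w, sph_w = eigen_features(wire)
lin_r, pla_r, sph_r = eigen_features(roof)
```

Features like these separate vegetation (high sphericity) from building roofs (high planarity) before any learning takes place, which is why their selection matters for the DLPN input layer.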
49. A Scalable Method to Improve Large-Scale Lidar Topographic Differencing Results.
- Author
-
Jung, Minyoung and Jung, Jinha
- Subjects
- *
OPTICAL scanners , *AIRBORNE-based remote sensing , *LIDAR , *OPTICAL radar , *DIGITAL elevation models , *POINT cloud - Abstract
Differencing digital terrain models (DTMs) generated from multitemporal airborne light detection and ranging (lidar) data provide accurate and detailed information about three-dimensional (3D) changes on the Earth. However, noticeable spurious errors along flight paths are often included in the differencing results, hindering the accurate analysis of the topographic changes. This paper proposes a new scalable method to alleviate the problematic systematic errors with a high degree of automation in consideration of the practical limitations raised when processing the rapidly increasing amount of large-scale lidar datasets. The proposed method focused on estimating the displacements caused by vertical positioning errors, which are the most critical error source, and adjusting the DTMs already produced as basic lidar products without access to the point cloud and raw data from the laser scanner. The feasibility and effectiveness of the proposed method were evaluated with experiments with county-level multitemporal airborne lidar datasets in Indiana, USA. The experimental results demonstrated that the proposed method could estimate the vertical displacement reasonably along the flight paths and improve the county-level lidar differencing results by reducing the problematic errors and increasing consistency across the flight paths. The improved differencing results presented in this paper are expected to provide more consistent information about topographic changes in Indiana. In addition, the proposed method can be a feasible solution to upcoming problems induced by rapidly increasing large-scale multitemporal lidar given recent active government-driven lidar data acquisition programs, such as the U.S. Geological Survey (USGS) 3D Elevation Program (3DEP). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
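The vertical-displacement idea can be shown on a toy DTM pair: a constant bias on one flight line is estimated robustly over stable ground and subtracted from the already-produced DTM, with no access to the raw point cloud. The grid size, bias, and noise levels below are invented for illustration:

```python
import numpy as np

# Epoch-1 DTM heights (m) on a hypothetical 100 x 100 grid, and an epoch-2
# DTM whose western half (one flight line) carries a +0.15 m vertical bias.
rng = np.random.default_rng(4)
terrain = rng.uniform(200, 210, (100, 100))
epoch2 = terrain + 0.02 * rng.standard_normal((100, 100))
epoch2[:, :50] += 0.15                       # biased flight line

# Differencing over the strip exposes the bias; the median is a robust
# estimate that ignores genuine (sparse) topographic change.
strip_diff = epoch2[:, :50] - terrain[:, :50]
bias = np.median(strip_diff)

# Adjusting the DTM removes the along-flight-path banding.
adjusted = epoch2.copy()
adjusted[:, :50] -= bias
residual = np.abs(adjusted - terrain).mean()
```

In practice the stable-ground mask and the per-flight-line segmentation are the hard parts; this sketch assumes both are given.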
50. Attitude Estimation Method for Target Ships Based on LiDAR Point Clouds via An Improved RANSAC.
- Author
-
Wei, Shengzhe, Xiao, Yuminghao, Yang, Xinde, and Wang, Hongdong
- Subjects
POINT cloud ,OPTICAL radar ,LIDAR ,MARITIME shipping ,SHIPS ,GEOSTATIONARY satellites - Abstract
The accurate attitude estimation of target ships plays a vital role in ensuring the safety of marine transportation, especially for tugs. A Light Detection and Ranging (LiDAR) system can generate 3D point clouds to describe the target ship's geometric features that possess attitude information. In this work, the authors put forward a new attitude-estimation framework that first extracts the geometric features (i.e., the board-side plane of a ship) using point clouds from shipborne LiDAR and then computes the attitude that is of interest (i.e., yaw and roll in this paper). To extract the board-side plane accurately on a moving ship with sparse point clouds, an improved Random Sample Consensus (RANSAC) algorithm with a pre-processing normal vector-based filter was designed to exclude noise points. A real water-pool experiment and two numerical tests were carried out to demonstrate the accuracy and general applicability of the attitude estimation of target ships brought by the improved RANSAC and estimation framework. The experimental results show that the average mean absolute errors of the angle and angular-rate estimation are 0.4879 deg and 4.2197 deg/s, respectively, which are 92.93% and 75.36% more accurate than the estimation based on standard RANSAC. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
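Reading yaw and roll off the fitted board-side plane normal can be sketched as follows. The sign convention (normal resolved toward +y) and the zero-pitch geometry are assumptions of this toy example, and a plain SVD fit stands in for the paper's improved RANSAC:

```python
import numpy as np

def estimate_yaw_roll(side_points):
    """Fit the board-side plane by SVD and read yaw/roll (degrees) off its
    normal. Sign convention (an assumption of this sketch): n_y > 0."""
    centered = side_points - side_points.mean(axis=0)
    n = np.linalg.svd(centered)[2][-1]       # smallest-variance direction
    if n[1] < 0:
        n = -n
    yaw = np.degrees(np.arctan2(-n[0], n[1]))
    roll = np.degrees(np.arcsin(np.clip(n[2], -1.0, 1.0)))
    return yaw, roll

# Synthetic board side: yaw 30°, roll 5°, 5 mm LiDAR noise.
psi, phi = np.radians(30.0), np.radians(5.0)
heading = np.array([np.cos(psi), np.sin(psi), 0.0])
hull_up = np.array([np.sin(psi) * np.sin(phi),
                    -np.cos(psi) * np.sin(phi),
                    np.cos(phi)])
rng = np.random.default_rng(3)
a = rng.uniform(-10, 10, (500, 1))           # along-hull coordinate (m)
b = rng.uniform(0, 3, (500, 1))              # up-hull coordinate (m)
side = a * heading + b * hull_up + 0.005 * rng.standard_normal((500, 3))
yaw, roll = estimate_yaw_roll(side)
```

The paper's normal-vector pre-filter would additionally reject non-hull points (water returns, superstructure) before this fit, which is what the improved RANSAC contributes.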