37,927 results for "OPTICAL radar"
Search Results
2. Unified Approach to Inshore Ship Detection in Optical/radar Medium Spatial Resolution Satellite Images
- Author
- Popov, Mykhailo O., Stankevich, Sergey A., Pylypchuk, Valentyn V., Xing, Kun, and Zhang, Chunxiao
- Published
- 2023
- Full Text
- View/download PDF
3. A 64 dBΩ, 25 Gb/s GFET based transimpedance amplifier with UWB resonator for optical radar detection in medical applications
- Author
- Gorre, Pradeep, Vignesh, R., Song, Hanjung, and Kumar, Sandeep
- Published
- 2021
- Full Text
- View/download PDF
4. A Drone-Borne Optical&Radar Sensor for Smart Counties Monitoring.
- Author
- João Roberto Moreira Neto and Hugo E. Hernández-Figueroa
- Published
- 2022
- Full Text
- View/download PDF
5. Inferring fault structures and overburden depth in 3D from geophysical data using machine learning algorithms – A case study on the Fenelon gold deposit, Quebec, Canada.
- Author
- Xu, Limin, Green, E. C. R., and Kelly, C.
- Subjects
- MACHINE learning, OPTICAL radar, LIDAR, FAULT location (Engineering), SHEAR zones
- Abstract
We apply a machine learning approach to automatically infer two key attributes – the location of fault or shear zone structures and the thickness of the overburden – in an 18 km² study area within and surrounding the Archean Fenelon gold deposit in Quebec, Canada. Our approach involves the inversion of carefully curated borehole lithological and structural observations truncated at 480 m below the surface, combined with magnetic and Light Detection and Ranging survey data. We take a computationally low-cost approach in which no underlying model for geological consistency is imposed. We investigated three contrasting approaches: (1) an inferred fault model, in which the borehole observations represent a direct evaluation of the presence of fault or shear zones; (2) an inferred overburden model, using borehole observations on the overburden-bedrock contact; (3) a model with three classes (overburden, faulted bedrock, and unfaulted bedrock), which combines aspects of (1) and (2). In every case, we applied all 32 standard machine learning algorithms. We found that Bagged Trees, fine K-nearest neighbours, and weighted K-nearest neighbours were the most successful, producing similar accuracy, sensitivity, and specificity metrics. The Bagged Trees algorithm predicted fault locations with approximately 80% accuracy, 70% sensitivity, and 73% specificity. Overburden thickness was predicted with 99% accuracy, 77% sensitivity, and 93% specificity. Qualitatively, fault location predictions compared well to independently constructed geological interpretations. Similar methods might be applicable in other areas with good borehole coverage, provided that the criteria used in borehole logging are closely followed in devising classifications for the machine learning training set; such methods might be usefully supplemented with a variety of geophysical survey data types. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
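Entry 5 above reports that fine K-nearest-neighbour classifiers were among the best-performing models for fault prediction. As an illustrative sketch only (the two features and every value below are invented, not taken from the study), a minimal pure-Python K-nearest-neighbour classifier looks like this:

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training samples."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical features: [magnetic anomaly (nT), terrain slope (deg)]
train = [[120.0, 2.0], [115.0, 3.0], [130.0, 2.5],   # unfaulted bedrock
         [40.0, 12.0], [35.0, 14.0], [45.0, 11.0]]   # fault/shear zone
labels = ["unfaulted", "unfaulted", "unfaulted", "fault", "fault", "fault"]

print(knn_predict(train, labels, [38.0, 13.0]))  # prints "fault"
```

The study compared 32 algorithms; KNN is shown here only because it is compact enough to state in full.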
6. Three-Dimensional Wireframe Reconstruction for Non-Manhattan-Shaped Point Clouds.
- Author
- Chuang, Tzu-Yi, Ng, Hui-Yin, and Hsieh, Yo-Ming
- Subjects
- OPTICAL radar, LIDAR, POINT cloud, PYRAMIDS, PRISMS
- Abstract
This study proposes a feature relationship algorithm (FRA) to reconstruct three-dimensional wireframes of objects with non-Manhattan shapes using segmented point clouds. Instead of relying on extracting target boundaries, the FRA systematically identifies the vertex and edge nodes of objects and uses an innovative linking strategy to reconstruct a precise wireframe based on the point cloud geometry, even in the presence of data gaps. The FRA exhibits adaptability to various shapes, including curves, cones, pyramids, cylinders, octagonal prisms, and combinations of these. Validations on synthetic data provide valuable insights into the FRA's parameter tuning and exceptional shape accuracy. Meanwhile, the use of Light Detection and Ranging scans and benchmark data underscores the FRA's fidelity in representing shapes from point clouds and demonstrates its improvement over baseline methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Criteria for selection of point-ahead angle compensation strategy in intersatellite laser ranging interferometry of TianQin based on research of tilt-to-length coupling noise.
- Author
- Wang, Jinmeng, Zhu, Fan, and Yeh, Hsien-Chi
- Subjects
- OPTICAL radar, LASER ranging, LASER beams, GRAVITATIONAL waves, RELATIVE motion
- Abstract
TianQin is expected to deploy three spacecraft in Earth orbit at an altitude of about 1 × 10⁸ m, forming an equilateral triangular constellation with arm lengths of about 1.7 × 10⁸ m. It aims to detect gravitational waves in the frequency range from 0.1 mHz to 1 Hz by means of highly sensitive laser ranging interferometry with pathlength measurement noise below 1 pm/√Hz. The long-range propagation of the beam in the intersatellite laser ranging interferometry between the three spacecraft results in a non-negligible time delay of 0.58 s. There is therefore an angular difference between the optimal outgoing and incoming laser beams on each spacecraft, denoted the point-ahead angle. Due to the relative motion between the three spacecraft, the point-ahead angle of the TianQin constellation is approximately 22.96 μrad, with a dynamic variation range of ±24.71 nrad. The point-ahead-angle mechanism is a fine steering mirror used to compensate the point-ahead angle and thereby diminish the beam pointing error, so as to maintain the intersatellite laser link and reduce the tilt-to-length coupling noise, both of which are vital to gravitational wave detection. Both static and dynamic point-ahead angle compensation strategies can be used in TianQin, and the choice between the two rests on minimizing the total point-ahead-angle-related tilt-to-length coupling noise. In this paper, criteria for selecting between the two strategies are proposed by establishing a model of point-ahead-angle-related tilt-to-length coupling noise, together with a corresponding evaluation function for calculation and comparison. The results indicate that the dynamic compensation strategy is the optimal choice only when the first-order tilt-to-length coupling coefficient of the point-ahead-angle mechanism is less than 6.4 × 10⁻⁹ m/rad; otherwise, the static compensation strategy should be used.
The proposed criteria, as well as the model and the requirement for the point-ahead-angle mechanism, can serve as a common theoretical basis in laser ranging interferometry applications.
• Criteria for selection of point-ahead angle compensation strategies.
• Minimize tilt-to-length coupling noise in intersatellite laser interferometry.
• Quantitative comparison of tilt-to-length noise in different compensation strategies.
• Static compensation strategy is more appropriate in TianQin according to the criteria.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
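The 22.96 μrad point-ahead angle quoted in entry 7 can be sanity-checked with a back-of-envelope calculation: the point-ahead angle is approximately 2·v⊥/c, where v⊥ is the transverse relative velocity between spacecraft, and for an equilateral constellation v⊥ = √3·v_orbit. Treating the quoted ~1 × 10⁸ m as the orbital radius and assuming circular orbits (both simplifications are ours, not the paper's):

```python
import math

GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
C = 299_792_458.0          # speed of light, m/s
r = 1e8                    # orbital radius, m (abstract quotes ~1e8 m)

v_orbit = math.sqrt(GM_EARTH / r)   # ~2.0 km/s circular orbital speed
v_perp = math.sqrt(3) * v_orbit     # transverse relative velocity, equilateral geometry
paa = 2 * v_perp / C                # point-ahead angle, rad
print(f"{paa * 1e6:.2f} urad")      # ~23 urad, consistent with the quoted 22.96 urad
```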
8. Graph-based adaptive weighted fusion SLAM using multimodal data in complex underground spaces.
- Author
- Lin, Xiaohu, Yang, Xin, Yao, Wanqiang, Wang, Xiqi, Ma, Xiongwei, and Ma, Bolin
- Subjects
- OPTICAL radar, LIDAR, STANDARD deviations, UNDERGROUND areas, SUBWAY tunnels
- Abstract
Accurate and robust simultaneous localization and mapping (SLAM) is essential for autonomous exploration, unmanned transportation, and emergency rescue operations in complex underground spaces. However, the demanding conditions of underground spaces, characterized by poor lighting, weak textures, and high dust levels, pose substantial challenges to SLAM. To address this issue, we propose a graph-based adaptive weighted fusion SLAM (AWF-SLAM) for autonomous robots to achieve accurate and robust SLAM in complex underground spaces. First, a contrast-limited adaptive histogram equalization (CLAHE) method that combines adaptive gamma correction with weighting distribution (AGCWD) in hue, saturation, value (HSV) space is proposed to enhance the brightness and contrast of visual images captured in underground spaces. Then, the performance of each sensor is evaluated using a consistency check based on the Mahalanobis distance to select the optimal configuration for specific conditions. Subsequently, we develop an adaptive weighting function model that leverages the residuals from point cloud matching and the inlier rate of image matching. This model dynamically fuses data from light detection and ranging (LiDAR), an inertial measurement unit (IMU), and cameras, enhancing the flexibility of the fusion process. Finally, multiple primitive features are adaptively fused within the factor graph optimization, utilizing a sliding window approach. Extensive experiments were conducted to evaluate the performance of AWF-SLAM using a self-designed mobile robot in underground parking lots, excavated subway tunnels, and complex underground coal mine spaces, based on reference trajectories and reconstructions provided by state-of-the-art methods. The root mean square error (RMSE) of trajectory translation is only 0.17 m, and the mean relative robustness distance between the point cloud maps reconstructed by AWF-SLAM and the reference point cloud map is lower than 0.09 m.
These results indicate a substantial improvement in the accuracy and robustness of SLAM in complex underground spaces. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
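Entry 8 evaluates each sensor with a Mahalanobis-distance consistency check. The idea in miniature, assuming a diagonal covariance so the matrix inverse is trivial (all numbers below are invented for illustration):

```python
import math

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance for a diagonal covariance (var = per-axis variances)."""
    return math.sqrt(sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var)))

# Predicted state and its per-axis variance (hypothetical values)
mean = [10.0, 5.0]
var = [0.04, 0.09]

consistent = mahalanobis_diag([10.1, 5.2], mean, var)  # agrees with the prediction
outlier    = mahalanobis_diag([12.5, 6.9], mean, var)  # e.g. a dust-corrupted reading

THRESHOLD = 3.0  # flag measurements more than ~3 sigma away
print(consistent < THRESHOLD, outlier < THRESHOLD)  # prints "True False"
```

A sensor whose measurements repeatedly exceed the threshold would be down-weighted or excluded from the fusion.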
9. HHU24SWDSCS: A shallow-water depth model over island areas in South China Sea retrieved from Satellite-derived bathymetry.
- Author
- Wu, Yihao, Shi, Hongkai, Jia, Dongzhen, Andersen, Ole Baltazar, He, Xiufeng, Luo, Zhicai, Li, Yu, Chen, Shiyuan, Si, Xiaohuan, Diao, Sisu, Shi, Yihuang, and Chen, Yanglin
- Subjects
- OPTICAL radar, LIDAR, STANDARD deviations, MULTISPECTRAL imaging, MARITIME safety
- Abstract
Accurate shallow-water depth information for island areas is crucial for maritime safety, resource exploration, ecological conservation, and offshore economic activity. Traditional approaches like shipborne sounding and airborne bathymetric light detection and ranging (LiDAR) surveys are expensive, time-consuming, and limited in politically sensitive regions. Moreover, satellite altimetry-predicted depths exhibit large errors over shallow waters. In contrast, satellite-derived bathymetry (SDB), estimated from multispectral imagery, provides a rapid, open-source, and cost-effective technique to fully characterize the bathymetry of a region. Given the scarcity of in-situ water-depth data for the South China Sea (SCS), a shallow-water depth model, HHU24SWDSCS, was developed by integrating 1298 Ice, Cloud, and land Elevation Satellite (ICESat-2) tracks with 70 Sentinel-2 multispectral images. The model covers >120 islands and reefs in the SCS, with a resolution of 10 m. Validation against independent ICESat-2 depth data produced a root mean square error of 0.81–1.35 m (<5% of the maximum depth), with an average coefficient of determination of 0.91. Validation against independent airborne LiDAR bathymetry data revealed an accuracy of 1.01 m for the Lingyang Reef. Further comparisons with existing bathymetry models revealed the superior performance of the model. While existing bathymetry models exhibit errors of up to tens of meters or more for island regions, and should therefore be used with caution, the HHU24SWDSCS model exhibited good accuracy in shallow waters across the SCS. This model thus provides a reference for mapping shallow-water depth close to islands and fundamental support for research in oceanography, geodesy, and other disciplines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
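The RMSE and coefficient-of-determination figures quoted in entry 9 follow the standard definitions, which are compact enough to restate (the sample depths below are invented, not from the study):

```python
import math

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def r_squared(pred, obs):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# Hypothetical SDB depths vs. reference depths (m)
pred = [2.1, 5.0, 8.7, 12.9, 17.8]
obs  = [2.0, 5.4, 9.0, 12.5, 18.2]
print(round(rmse(pred, obs), 3), round(r_squared(pred, obs), 3))
```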
10. Designing LiDAR‐Detectable Dark‐Tone Materials with High Near‐Infrared Reflectivity for Autonomous Driving: A Comprehensive Review.
- Author
- Otgonbayar, Zambaga, Kim, Jiwon, Sa, Minki, Lee, Hwa Sung, Noh, Jungchul, and Yoon, Chang-Min
- Subjects
- OPTICAL radar, LIDAR, OBJECT recognition (Computer vision), ORGANIC dyes, USED cars
- Abstract
Autonomous driving relies on the precise recognition of objects using light detection and ranging (LiDAR) technology, which operates at a specific wavelength of 905 nm. Black objects, such as the carbon black used in vehicle coatings, tend to absorb this wavelength strongly, which limits the performance of LiDAR sensors. To address this issue, researchers have explored dark-toned materials that can be detected by LiDAR with high near-infrared (NIR) reflectivity while maintaining a true blackness (L* < 20 based on the CIE color coordinates). These materials fall into two categories: organic and inorganic pigments. Organic pigments can be synthetically adjusted to achieve true blackness by manipulating their functional groups, but achieving high NIR reflectivity remains challenging, often requiring a bilayer structure with an NIR-reflective white base and an upper layer of organic black pigments. Additionally, the need for hydrophobic additives and resistance to degradation from sunlight further restricts their use. In the case of inorganic pigments, the desired LiDAR-detectable properties can be obtained through careful control of their composition, structure, and morphology, allowing for single-layer coatings with appropriate design. This review highlights recent advancements in developing organic and inorganic LiDAR-detectable black pigments and outlines future material design strategies for autonomous vehicle systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Neural Approach to Coordinate Transformation for LiDAR–Camera Data Fusion in Coastal Observation.
- Author
- Garczyńska-Cyprysiak, Ilona, Kazimierski, Witold, and Włodarczyk-Sielicka, Marta
- Subjects
- OPTICAL radar, LIDAR, MULTISENSOR data fusion, COORDINATE transformations, ROOT-mean-squares, RADIAL basis functions
- Abstract
The paper presents research related to coastal observation using a camera and LiDAR (Light Detection and Ranging) mounted on an unmanned surface vehicle (USV). Fusion of data from these two sensors can provide wider and more accurate information about shore features, exploiting the synergy effect and combining the advantages of both systems. Fusion is used in autonomous cars and robots, despite many challenges related to spatiotemporal alignment and sensor calibration. Measurements from various sensors with different timestamps have to be aligned, and the measurement systems need to be calibrated to avoid errors related to offsets. When using data from unstable, moving platforms, such as surface vehicles, it is more difficult to match sensors in time and space, and thus data acquired from different devices will be subject to some misalignment. In this article, we try to overcome these problems by proposing a point matching algorithm for coordinate transformation of data from both systems. The essence of the paper is to verify algorithms based on selected basic neural networks, namely the multilayer perceptron (MLP), the radial basis function network (RBF), and the general regression neural network (GRNN), for the alignment process. They are tested with real recorded data from the USV and verified against numerical methods commonly used for coordinate transformation. The results show that the proposed approach can be an effective alternative to numerical calculations, offering improvements to the overall process. The image data can provide information for identifying characteristic objects, and the accuracies obtained for platform dynamics in the water environment are satisfactory (root mean square error, RMSE, smaller than 1 m in many cases). The networks provided outstanding results for the training set; however, they did not perform as well as expected in terms of the generalization capability of the model.
This leads to the conclusion that processing algorithms cannot overcome the limitations of matching-point accuracy. Further research will extend the approach to include information on the position and direction of the vessel. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
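Of the three networks compared in entry 11, the GRNN is the simplest to state: its prediction is a Gaussian-kernel-weighted average of the training targets (Nadaraya–Watson regression). A pure-Python sketch applied to a toy coordinate transformation; the control points, the (+1, −2) shift, and the bandwidth are all invented, not the paper's data:

```python
import math

def grnn_predict(x_train, y_train, query, sigma=0.5):
    """GRNN: Gaussian-kernel-weighted average of training targets."""
    weights = [
        math.exp(-sum((q - xi) ** 2 for q, xi in zip(query, x)) / (2 * sigma**2))
        for x in x_train
    ]
    total = sum(weights)
    return [sum(w * y[d] for w, y in zip(weights, y_train)) / total
            for d in range(len(y_train[0]))]

# Hypothetical LiDAR->camera control points: a shift of (+1.0, -2.0)
x_train = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y_train = [[1.0, -2.0], [2.0, -2.0], [1.0, -1.0], [2.0, -1.0]]

print(grnn_predict(x_train, y_train, [0.5, 0.5]))
```

Note that the GRNN interpolates smoothly but pulls predictions toward the mean of nearby targets, so bandwidth selection matters in practice.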
12. An 8 × 8 CMOS Optoelectronic Readout Array of Short-Range LiDAR Sensors.
- Author
- Chon, Yeojin, Choi, Shinhae, Joo, Jieun, and Park, Sung-Min
- Subjects
- OPTICAL radar, LIDAR, AVALANCHE photodiodes, COMPLEMENTARY metal oxide semiconductors, OPTICAL measurements
- Abstract
This paper presents an 8 × 8 channel optoelectronic readout array (ORA) realized in the TSMC 180 nm 1P6M RF CMOS process for the applications of short-range light detection and ranging (LiDAR) sensors. We propose several circuit techniques in this work, including an amplitude-to-voltage (A2V) converter that reduces the notorious walk errors by intensity compensation and a time-to-voltage (T2V) converter that acquires the linear slope of the output signals by exploiting a charging circuit, thus extending the input dynamic range significantly from 5 μApp to 1.1 mApp, i.e., 46.8 dB. These results correspond to the maximum detection range of 8.2 m via the action of the A2V converter and the minimum detection range of 56 cm with the aid of the proposed T2V converter. Optical measurements utilizing an 850 nm laser diode confirm that the proposed 8 × 8 ORA with 64 on-chip avalanche photodiodes (APDs) can successfully recover the narrow 5 ns light pulses even at the shortest distance of 56 cm. Hence, this work provides a potential CMOS solution for low-cost, low-power, short-range LiDAR sensors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
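The 46.8 dB dynamic range quoted in entry 12 is just the ratio of the two current limits expressed in decibels (values taken from the abstract):

```python
import math

i_min = 5e-6     # 5 uA_pp minimum input current
i_max = 1.1e-3   # 1.1 mA_pp maximum input current
dr_db = 20 * math.log10(i_max / i_min)
print(f"{dr_db:.1f} dB")  # prints "46.8 dB", matching the abstract
```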
13. Remote Sensing LiDAR and Hyperspectral Classification with Multi-Scale Graph Encoder–Decoder Network.
- Author
- Wang, Fang, Du, Xingqian, Zhang, Weiguang, Nie, Liang, Wang, Hu, Zhou, Shun, and Ma, Jun
- Subjects
- OPTICAL radar, LIDAR, REMOTE sensing, FEATURE extraction, DEEP learning
- Abstract
The rapid development of sensor technology has made multi-modal remote sensing data valuable for land cover classification due to its diverse and complementary information. Many feature extraction methods for multi-modal data, combining light detection and ranging (LiDAR) and hyperspectral imaging (HSI), have recognized the importance of incorporating multiple spatial scales. However, effectively capturing both long-range global correlations and short-range local features simultaneously on different scales remains a challenge, particularly in large-scale, complex ground scenes. To address this limitation, we propose a multi-scale graph encoder–decoder network (MGEN) for multi-modal data classification. The MGEN adopts a graph model that maintains global sample correlations to fuse multi-scale features, enabling simultaneous extraction of local and global information. The graph encoder maps multi-modal data from different scales to the graph space and completes feature extraction in the graph space. The graph decoder maps the features of multiple scales back to the original data space and completes multi-scale feature fusion and classification. Experimental results on three HSI-LiDAR datasets demonstrate that the proposed MGEN achieves considerable classification accuracies and outperforms state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Three-Dimensional Geometric-Physical Modeling of an Environment with an In-House-Developed Multi-Sensor Robotic System.
- Author
- Zhang, Su, Yu, Minglang, Chen, Haoyu, Zhang, Minchao, Tan, Kai, Chen, Xufeng, Wang, Haipeng, and Xu, Feng
- Subjects
- OPTICAL radar, LIDAR, GEOMETRICAL constructions, MULTISENSOR data fusion, OPTICAL images, SYNTHETIC aperture radar, MULTISPECTRAL imaging
- Abstract
Environment 3D modeling is critical for the development of future intelligent unmanned systems. This paper proposes a multi-sensor robotic system for environmental geometric-physical modeling and the corresponding data processing methods. The system is primarily equipped with a millimeter-wave cascaded radar and a multispectral camera to acquire the electromagnetic characteristics and material categories of the target environment and simultaneously employs light detection and ranging (LiDAR) and an optical camera to achieve a three-dimensional spatial reconstruction of the environment. Specifically, the millimeter-wave radar sensor adopts a multiple input multiple output (MIMO) array and obtains 3D synthetic aperture radar images through 1D mechanical scanning perpendicular to the array, thereby capturing the electromagnetic properties of the environment. The multispectral camera, equipped with nine channels, provides rich spectral information for material identification and clustering. Additionally, LiDAR is used to obtain a 3D point cloud, combined with the RGB images captured by the optical camera, enabling the construction of a three-dimensional geometric model. By fusing the data from four sensors, a comprehensive geometric-physical model of the environment can be constructed. Experiments conducted in indoor environments demonstrated excellent spatial-geometric-physical reconstruction results. This system can play an important role in various applications, such as environment modeling and planning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. An Improved Adaptive Grid-Based Progressive Triangulated Irregular Network Densification Algorithm for Filtering Airborne LiDAR Data.
- Author
- Zheng, Jinjun, Xiang, Man, Zhang, Tao, and Zhou, Ji
- Subjects
- OPTICAL radar, LIDAR, FALSE positive error, SELECTION (Plant breeding), POINT cloud
- Abstract
Ground filtering is crucial for airborne Light Detection and Ranging (LiDAR) data post-processing. The progressive triangulated irregular network densification (PTD) algorithm and its variants outperform others in accuracy, stability, and robustness, using grid-based seed point selection, TIN construction, and iterative rules for ground point identification. However, these methods still face limitations in removing low points and accurately preserving terrain details, primarily due to their sensitivity to grid size. To overcome this issue, a novel PTD filtering algorithm based on an adaptive grid (AGPTD) was proposed. The main contributions of the proposed method include an outlier removal method using a radius outlier removal algorithm and Kd-tree, a method for establishing an adaptive two-level grid based on point cloud density and terrain slope, and an adaptive selection method for angle and distance thresholds in the iterative densification processing. The performance of the AGPTD algorithm was assessed based on widely used benchmark datasets. Results show that the AGPTD algorithm outperforms the classical PTD algorithm in retaining ground feature points, especially in reducing Type I error and average total error significantly. In comparison with other advanced algorithms developed in recent years, the novel algorithm showed the lowest average Type I error, the minimal average total error, and the greatest average Kappa coefficient, which were 1.11%, 2.28%, and 90.86%, respectively. Additionally, the average accuracy, precision, and recall of AGPTD were 97.69%, 97.52%, and 98.98%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
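Entry 15 removes low outlier points with a radius-outlier filter accelerated by a Kd-tree. The underlying rule, keep a point only if enough other points lie within a given radius, shown brute-force for clarity (the radius, neighbour threshold, and point cloud are invented):

```python
import math

def radius_outlier_removal(points, radius=1.0, min_neighbors=2):
    """Keep points that have >= min_neighbors others within `radius`.
    Brute force O(n^2); a Kd-tree makes each neighbour query far cheaper."""
    kept = []
    for i, p in enumerate(points):
        n = sum(1 for j, q in enumerate(points)
                if i != j and math.dist(p, q) <= radius)
        if n >= min_neighbors:
            kept.append(p)
    return kept

cloud = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.0, 0.5, 0.0),
         (0.3, 0.3, 0.1), (50.0, 50.0, -9.0)]  # last point: isolated low/noise point
print(radius_outlier_removal(cloud))  # the isolated point is dropped
```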
16. GASSF-Net: Geometric Algebra Based Spectral-Spatial Hierarchical Fusion Network for Hyperspectral and LiDAR Image Classification.
- Author
- Wang, Rui, Ye, Xiaoxi, Huang, Yao, Ju, Ming, and Xiang, Wei
- Subjects
- OPTICAL radar, IMAGE recognition (Computer vision), LIDAR, FEATURE extraction, REMOTE sensing
- Abstract
The field of multi-source remote sensing observation is becoming increasingly dynamic through the integration of various remote sensing data sources. However, existing deep learning methods face challenges in differentiating between internal and external relationships and in capturing fine spatial features. These models often struggle to capture comprehensive information across remote sensing data bands, and the different remote sensing datasets have inherent differences in size, structure, and physical properties. To address these challenges, this paper proposes a novel geometric-algebra-based spectral-spatial hierarchical fusion network (GASSF-Net), which uses geometric algebra for the first time to process multi-source remote sensing images, enabling a more holistic approach by simultaneously leveraging the real and imaginary components of geometric algebra to express structural information. This method captures the internal and external relationships between remote sensing image features and spatial information, effectively fusing the features of different remote sensing data to improve classification accuracy. GASSF-Net uses geometric algebra (GA) to represent pixels from different bands as multivectors, thus capturing the intrinsic relationships between spectral bands while preserving spatial information. The network begins by deeply mining the spectral-spatial features of a hyperspectral image (HSI) using pairwise covariance operators. These features are then extracted through two branches: a geometric-algebra-based branch and a real-valued network branch. Additionally, the geometric-algebra-based network extracts spatial information from light detection and ranging (LiDAR) data to complement the elevation information lacking in the HSI. Finally, a geometric-algebra-based cross-fusion module is introduced to fuse the HSI and LiDAR data for improved classification.
Experiments conducted on three well-known datasets, Trento, MUUFL, and Houston, demonstrate that GASSF-Net significantly outperforms traditional methods in terms of classification accuracy and model efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Radio meteor velocity estimation based on the Fourier transform.
- Author
- Korotyshkin, Dmitry
- Subjects
- OPTICAL radar, OPTICAL measurements, SIGNAL-to-noise ratio, FOURIER analysis, ESTIMATION theory
- Abstract
A technique for estimating the velocity of radio meteors is proposed. The presented method is based on analyzing the phase of the Fourier spectra of the complex amplitudes of signals reflected from meteor trails. The method is very fast compared to the Fresnel transform method, works at low signal-to-noise ratios, and is automatic. Approaches for increasing the signal-to-noise ratio and criteria for rejecting poor estimates are proposed. An error of 1–10% was achieved, depending on the signal-to-noise ratio (from −5 dB) and the velocity of the meteor. Comparison of meteoroid velocities from Kazan Federal University meteor radar measurements with optical observations showed good agreement, with a slight underestimation of 2–8%, presumably due to the neglect of deceleration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
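Entry 17 recovers velocity from the phase of Fourier spectra of Fresnel diffraction patterns; that specific method is too involved to sketch here. But the core idea, that velocity information lives in the phase progression of the complex echo, can be illustrated with a much simpler stand-in: a phase-slope Doppler estimate on a synthetic, noiseless signal. All parameters below are invented for illustration:

```python
import cmath
import math

WAVELENGTH = 7.9   # m, a typical VHF meteor-radar wavelength (illustrative)
DT = 1e-3          # s, sample interval (illustrative)
V_TRUE = 800.0     # m/s, line-of-sight velocity to recover (illustrative)

f_dop = 2 * V_TRUE / WAVELENGTH   # two-way Doppler shift, Hz
signal = [cmath.exp(2j * math.pi * f_dop * n * DT) for n in range(200)]

# Unwrap the phase of the complex samples.
phases = [cmath.phase(s) for s in signal]
unwrapped = [phases[0]]
for prev, cur in zip(phases, phases[1:]):
    d = (cur - prev + math.pi) % (2 * math.pi) - math.pi  # wrap step into (-pi, pi]
    unwrapped.append(unwrapped[-1] + d)

# Least-squares slope of phase vs. time: d(phase)/dt = 2*pi*f.
t = [n * DT for n in range(len(unwrapped))]
t_bar = sum(t) / len(t)
p_bar = sum(unwrapped) / len(unwrapped)
slope = (sum((ti - t_bar) * (ph - p_bar) for ti, ph in zip(t, unwrapped))
         / sum((ti - t_bar) ** 2 for ti in t))
v_est = (slope / (2 * math.pi)) * WAVELENGTH / 2
print(f"{v_est:.1f} m/s")  # recovers ~800 m/s on this noiseless signal
```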
18. Efficient 3D robotic mapping and navigation method in complex construction environments.
- Author
- Ren, Tianyu and Jebelli, Houtan
- Subjects
- DEEP reinforcement learning, OPTICAL radar, LIDAR, NAUTICAL charts, CONSTRUCTION industry
- Abstract
Recent advancements in construction robotics have significantly transformed the construction industry by delivering safer and more efficient solutions for handling complex and hazardous tasks. Despite these innovations, ensuring safe robotic navigation in intricate indoor construction environments, such as attics, remains a significant challenge. This study introduces a robust 3‐dimensional (3D) robotic mapping and navigation method specifically tailored for these environments. Utilizing light detection and ranging, simultaneous localization and mapping, and neural networks, this method generates precise 3D maps. It also combines grid‐based pathfinding with deep reinforcement learning to enhance navigation and obstacle avoidance in dynamic and complex construction settings. An evaluation conducted in a simulated attic environment—characterized by various truss structures and continuously changing obstacles—affirms the method's efficacy. Compared to established benchmarks, this method not only achieves over 95% mapping accuracy but also improves navigation accuracy by 10% and boosts both efficiency and safety margins by over 30%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
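Entry 18 combines grid-based pathfinding with deep reinforcement learning. The grid-based half, in its simplest form, is breadth-first search over an occupancy grid (the map below is invented; real planners typically use A* with a heuristic, but BFS is the clearest statement of the idea):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:                       # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = bfs_path(grid, (0, 0), (3, 3))
print(path)
```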
19. ML Approaches for the Study of Significant Heritage Contexts: An Application on Coastal Landscapes in Sardinia.
- Author
- Cappellazzo, Marco, Patrucco, Giacomo, and Spanò, Antonia
- Subjects
- OPTICAL radar, LIDAR, ARTIFICIAL intelligence, INFORMATION science, AIRBORNE lasers, POINT cloud, CULTURAL landscapes
- Abstract
Remote Sensing (RS) and Geographic Information Science (GIS) techniques are powerful tools for spatial data collection, analysis, management, and digitization within cultural heritage frameworks. Despite their capabilities, challenges remain in automating data semantic classification for conservation purposes. To address this, leveraging airborne Light Detection And Ranging (LiDAR) point clouds, complex spatial analyses, and automated data structuring is crucial for supporting heritage preservation and knowledge processes. In this context, the present contribution investigates the latest Artificial Intelligence (AI) technologies for automating existing LiDAR data structuring, focusing on the case study of Sardinia coastlines. Moreover, the study preliminarily addresses automation challenges from the perspective of mapping historical defensive landscapes. Since historical defensive architectures and landscapes are characterized by several challenging complexities, including their association with dark periods in recent history and their chronological stratification, their digitization and preservation are highly multidisciplinary issues. This research aims to improve data structuring automation in these large heritage contexts with a multiscale approach by applying Machine Learning (ML) techniques to low-scale 3D Airborne Laser Scanning (ALS) point clouds. The study thus develops a predictive Deep Learning Model (DLM) for the semantic segmentation of sparse point clouds (<10 pts/m²), adaptable to large landscape heritage contexts and heterogeneous data scales. Additionally, a preliminary investigation into object-detection methods has been conducted to map specific fortification artifacts efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Map Construction and Positioning Method for LiDAR SLAM-Based Navigation of an Agricultural Field Inspection Robot.
- Author
- Qu, Jiwei, Qiu, Zhinuo, Li, Lanyu, Guo, Kangquan, and Li, Dan
- Subjects
- OPTICAL radar, LIDAR, NAUTICAL charts, GRAPH algorithms, ENVIRONMENTAL mapping, LOCALIZATION (Mathematics)
- Abstract
In agricultural field inspection robots, constructing accurate environmental maps and achieving precise localization are essential for effective Light Detection And Ranging (LiDAR) Simultaneous Localization And Mapping (SLAM) navigation. However, navigation in occluded environments presents challenges such as mapping distortion and substantial cumulative errors. Although current filter-based and graph-optimization-based algorithms perform well, they are highly complex. This paper investigates precise mapping and localization methods for robots, facilitating accurate LiDAR SLAM navigation in agricultural environments characterized by occlusions. Initially, a LiDAR SLAM point cloud mapping scheme is proposed based on the LiDAR Odometry And Mapping (LOAM) framework, tailored to the operational requirements of the robot. Then, the GNU Image Manipulation Program (GIMP) is employed for map optimization. This approach simplifies the map optimization process for autonomous navigation systems and aids in Costmap conversion. Finally, the Adaptive Monte Carlo Localization (AMCL) method is implemented for the robot's positioning, using sensor data from the robot. Experimental results show that during outdoor navigation tests at a speed of 1.6 m/s, the average error between the mapped values and actual measurements is 0.205 m. The results demonstrate that our method effectively prevents navigation mapping distortion and facilitates reliable robot positioning in experimental settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
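The AMCL positioning step described in the abstract above is a particle filter over the robot's pose. As a rough illustration of the predict–weight–resample cycle, the following is a minimal one-dimensional Monte Carlo localization sketch with a toy motion model and a single range-to-landmark measurement; all models, noise values, and numbers are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def mcl_step(particles, weights, control, meas, landmark, rng,
             motion_noise=0.05, meas_noise=0.2):
    """One predict-weight-resample cycle of 1-D Monte Carlo localization."""
    n = particles.size
    # Predict: apply the control input with additive motion noise.
    particles = particles + control + rng.normal(0.0, motion_noise, n)
    # Weight: Gaussian likelihood of the range measurement to the landmark.
    expected = np.abs(landmark - particles)
    weights = np.exp(-0.5 * ((meas - expected) / meas_noise) ** 2)
    weights /= weights.sum()
    # Resample (systematic): concentrate particles on likely poses.
    pos = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), pos), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(0)
landmark, truth = 10.0, 2.0
particles = rng.uniform(0.0, 10.0, 500)
weights = np.full(500, 1.0 / 500)
for _ in range(10):                       # robot moves +0.5 per step
    truth += 0.5
    meas = abs(landmark - truth) + rng.normal(0.0, 0.2)
    particles, weights = mcl_step(particles, weights, 0.5, meas, landmark, rng)
estimate = float(particles.mean())        # should track truth (= 7.0)
```

Real AMCL additionally adapts the particle count and operates on 2-D poses with laser scan likelihoods, but the cycle above is the same skeleton.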
21. Lidar, Space, and Time in Archaeology: Promises and Challenges.
- Author
-
Inomata, Takeshi
- Subjects
- *
OPTICAL radar , *LIDAR , *FOREST canopies , *TEMPORAL integration , *CITIES & towns , *LANDSCAPE archaeology , *HISTORICAL archaeology - Abstract
Airborne lidar (light detection and ranging), which produces three-dimensional models of ground surfaces under the forest canopy, has become an important tool in archaeological research. On a microscale, lidar can lead to a new understanding of building shapes and orientations that were not recognized previously. On a medium scale, it can provide comprehensive views of settlements, cities, and polities and their relationships to the topography. It also facilitates studies of diverse land use practices, such as agricultural fields, roads, and canals. On a macroscale, lidar provides a means to comprehend broad spatial patterns beyond individual sites, including the implications of vacant spaces. A significant challenge for archaeologists is the integration of historical and temporal information in order to contextualize lidar data in the framework of landscape archaeology. In addition, a rapid increase in lidar data presents ethical issues, including the question of data ownership. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Deep learning framework with Local Sparse Transformer for construction worker detection in 3D with LiDAR.
- Author
-
Zhang, Mingyu, Wang, Lei, Han, Shuai, Wang, Shuyuan, and Li, Heng
- Subjects
- *
OPTICAL radar , *OBJECT recognition (Computer vision) , *LIDAR , *TRANSFORMER models , *DEEP learning , *FEATURE extraction - Abstract
Autonomous equipment is playing an increasingly important role in construction tasks. It is essential to equip autonomous equipment with powerful 3D detection capability to avoid accidents and inefficiency. However, there is limited research within the construction field that has extended detection to 3D. To this end, this study develops a light detection and ranging (LiDAR)‐based deep‐learning model for the 3D detection of workers on construction sites. The proposed model adopts a voxel‐based anchor‐free 3D object detection paradigm. To enhance the feature extraction capability for challenging detection tasks, a novel Transformer‐based block is proposed, where multi‐head self‐attention is applied in local grid regions. The detection model integrates the Transformer blocks with 3D sparse convolution to extract wide and local features while pruning redundant features in modified downsampling layers. To train and test the proposed model, a LiDAR point cloud dataset was created, which includes workers on construction sites with 3D box annotations. The experiment results indicate that the proposed model outperforms the baseline models with higher mean average precision and smaller regression errors. The method is promising for providing worker detection with the rich and accurate 3D information required by construction automation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Proposal of UAV-SLAM-Based 3D Point Cloud Map Generation Method for Orchards Measurements.
- Author
-
Nishiwaki, Soki, Kondo, Haruki, Yoshida, Shuhei, and Emaru, Takanori
- Subjects
- *
GLOBAL Positioning System , *OPTICAL radar , *LIDAR , *STANDARD deviations , *POINT cloud - Abstract
This paper proposes a method for generating highly accurate point cloud maps of orchards using an unmanned aerial vehicle (UAV) equipped with light detection and ranging (LiDAR). The point cloud captured by the UAV-LiDAR was converted to a geographic coordinate system using a global navigation satellite system / inertial measurement unit (GNSS/IMU). The converted point cloud is then aligned with the simultaneous localization and mapping (SLAM) technique. As a result, a 3D model of an orchard is generated in a low-cost and easy-to-use manner for precision pesticide application. The method of direct point cloud alignment with real-time kinematic global navigation satellite system (RTK-GNSS) positioning had a root mean square error (RMSE) of 42 cm between the predicted and true crop height values, primarily due to the effects of GNSS multipath and vibration of automated vehicles. In contrast, our method demonstrated better results, with RMSEs of 5.43 cm and 2.14 cm on the vertical and horizontal axes, respectively. The proposed method for predicting crop location successfully achieved the required accuracy of less than 1 m, with errors not exceeding 30 cm in the geographic coordinate system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
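The abstract above reports RMSE between predicted and true crop heights. The metric itself is straightforward to compute; the sample values below are hypothetical, not the paper's data.

```python
import numpy as np

def rmse(predicted, actual):
    """Root mean square error between predicted and reference values."""
    d = np.asarray(predicted, dtype=float) - np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical crop-height samples in metres (illustrative only).
pred = [2.10, 1.95, 2.40, 2.22]
true = [2.05, 2.00, 2.35, 2.30]
err = rmse(pred, true)   # a few centimetres for these toy values
```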
24. County-Level Agricultural Greenhouse Extraction Based on Sentinel-1/2 Data Fusion.
- Author
-
张廷龙, 韩晓乐, 包懿, and 张青峰
- Subjects
- *
OPTICAL radar , *HIGH resolution imaging , *REMOTE sensing , *IMAGE fusion , *PRINCIPAL components analysis - Abstract
As a distinctive land-cover type, agricultural greenhouses are relatively easy to recognize with high accuracy on high-spatial-resolution remote sensing images. However, most high-spatial-resolution images must be purchased commercially, and their availability is limited. To make the precise extraction of county-level agricultural greenhouses more economical and convenient, this study fused freely and easily available non-high-spatial-resolution Sentinel-1 (radar) and Sentinel-2 (optical) remote sensing images, combined with spectral indices, texture extraction, principal component analysis, and other methods, to construct a multidimensional feature set space, and applied multiple classification and recognition methods (cases) to identify and extract county-level agricultural greenhouses. The results indicated that: 1) the county's agricultural greenhouses could be extracted with high precision using only Sentinel-1/2 (10 m resolution) remote sensing images, supported by suitable classification techniques (cases); 2) fusing Sentinel-1 (radar) and Sentinel-2 (optical) remote sensing data helped improve the recognition accuracy of agricultural greenhouses: compared with using only Sentinel-2 (optical) data, the overall accuracy of Sentinel-1/2 data fusion improved by an average of 1.72 percentage points, with a maximum improvement of 3.29 percentage points; 3) among all the classification and recognition techniques (cases) applied in the paper, the object-oriented method performed well in areas with high greenhouse density, but in areas with low greenhouse density its accuracy was mediocre and strongly dependent on the region (or scene). After fusing optical and radar information, the pixel-based recursive feature elimination random forest (RF-RFE) method achieved an average accuracy of 96.45%, with high, stable accuracy and strong regional adaptability. It is suitable for the accurate and efficient extraction of agricultural greenhouses from non-high-resolution imagery at the county level. The technical solution proposed in this paper, based on Sentinel-1/2 images, can provide technical support for the economical, rapid, and efficient extraction of agricultural greenhouses in most counties. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
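The RF-RFE classifier mentioned above couples a random forest with recursive feature elimination, which repeatedly discards the least important feature. Below is a minimal sketch of the elimination loop, using a simple |correlation| score as a stand-in for random-forest importances; the synthetic data and the scoring function are assumptions, not the paper's pipeline.

```python
import numpy as np

def rfe(X, y, n_keep, importance):
    """Minimal recursive feature elimination: repeatedly drop the least
    important feature until n_keep remain; returns surviving column indices."""
    cols = list(range(X.shape[1]))
    while len(cols) > n_keep:
        scores = importance(X[:, cols], y)
        cols.pop(int(np.argmin(scores)))   # eliminate the weakest feature
    return cols

def abs_corr(X, y):
    """Stand-in importance: |Pearson correlation| of each feature with y."""
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200).astype(float)
noise = rng.normal(size=(200, 4))                          # four irrelevant features
signal = (y + 0.1 * rng.normal(size=200)).reshape(-1, 1)   # one informative feature
X = np.hstack([noise[:, :2], signal, noise[:, 2:]])
kept = rfe(X, y, 1, abs_corr)   # the informative column (index 2) is expected to survive
```

In an actual RF-RFE pipeline the `importance` callback would be the feature importances of a refit random forest at each elimination round.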
25. Vehicle and Pedestrian Traffic Signal Performance Measures Using LiDAR-Derived Trajectory Data.
- Author
-
Saldivar-Carranza, Enrique D., Desai, Jairaj, Thompson, Andrew, Taylor, Mark, Sturdevant, James, and Bullock, Darcy M.
- Subjects
- *
TRAFFIC signs & signals , *OPTICAL radar , *LIDAR , *HIGHWAY capacity , *SIGNALIZED intersections , *PEDESTRIANS - Abstract
Light Detection and Ranging (LiDAR) sensors at signalized intersections can accurately track the movement of virtually all objects passing through at high sampling rates. This study presents methodologies to estimate vehicle and pedestrian traffic signal performance measures using LiDAR trajectory data. Over 15,000,000 vehicle and 170,000 pedestrian waypoints detected during a 24 h period at an intersection in Utah are analyzed to describe the proposed techniques. Sampled trajectories are linearly referenced to generate Purdue Probe Diagrams (PPDs). Vehicle-based PPDs are used to estimate movement-level turning counts, 85th percentile queue lengths (85QL), arrivals on green (AOG), highway capacity manual (HCM) level of service (LOS), split failures (SF), and downstream blockage (DSB) by time of day (TOD). Pedestrian-based PPDs are used to estimate wait times and the proportion of people that traverse multiple crosswalks. Although vehicle signal performance can be estimated from several days of aggregated connected vehicle (CV) data, LiDAR data provides the ability to measure performance in real time. Furthermore, LiDAR can measure pedestrian speeds. At the studied location, the 15th percentile pedestrian walking speed was estimated to be 3.9 ft/s. The ability to directly measure these pedestrian speeds allows agencies to consider alternative crossing times than those suggested by the Manual on Uniform Traffic Control Devices (MUTCD). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
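The 15th percentile walking speed reported above is a simple order statistic over the measured speed distribution. A sketch with hypothetical speed samples (the values are illustrative, not the study's measurements):

```python
import numpy as np

# Hypothetical pedestrian crossing speeds in ft/s (illustrative only).
speeds = np.array([3.1, 3.6, 3.9, 4.2, 4.4, 4.7, 5.0, 5.3, 5.6, 6.0])
p15 = float(np.percentile(speeds, 15))   # 15th percentile walking speed
```

With numpy's default linear interpolation this lands between the second and third sorted samples, at 3.705 ft/s for these toy values.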
26. Improved Multi-Sensor Fusion Dynamic Odometry Based on Neural Networks.
- Author
-
Luo, Lishu, Peng, Fulun, and Dong, Longhui
- Subjects
- *
OPTICAL radar , *LIDAR , *MULTISENSOR data fusion , *AUTONOMOUS robots , *UNITS of measurement - Abstract
High-precision simultaneous localization and mapping (SLAM) in dynamic real-world environments plays a crucial role in autonomous robot navigation, self-driving cars, and drone control. To address this dynamic localization issue, in this paper, a dynamic odometry method is proposed based on FAST-LIVO, a fast LiDAR (light detection and ranging)–inertial–visual odometry system, integrating neural networks with laser, camera, and inertial measurement unit modalities. The method first constructs visual–inertial and LiDAR–inertial odometry subsystems. Then, a lightweight neural network is used to remove dynamic elements from the visual part, and dynamic clustering is applied to the LiDAR part to eliminate dynamic environments, ensuring the reliability of the remaining environmental data. Validation of the datasets shows that the proposed multi-sensor fusion dynamic odometry can achieve high-precision pose estimation in complex dynamic environments with high continuity, reliability, and dynamic robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. High‐Performance Integrated Laser Based on Thin‐Film Lithium Niobate Photonics for Coherent Ranging.
- Author
-
Wang, Shuxin, Lin, Zhongjin, Wang, Qi, Zhang, Xian, Ma, Rui, and Cai, Xinlun
- Subjects
- *
OPTICAL radar , *LIDAR , *INDUSTRIAL robots , *LITHIUM niobate , *ENVIRONMENTAL monitoring - Abstract
Frequency‐modulated continuous‐wave (FMCW) light detection and ranging (LiDAR) has huge potential for developing the next generation of LiDAR applied in autonomous driving, industrial automation, environmental monitoring, and so on. An ideal laser for the FMCW LiDAR system should simultaneously feature a fast chirp repetition frequency, a large chirp bandwidth, high linearity, a compact footprint, and low cost. In this study, such a laser based on thin‐film lithium niobate (TFLN) photonics is proposed and demonstrated. The laser can achieve a chirp bandwidth of 3.44 GHz, a tuning efficiency of 574 MHz V⁻¹, and a chirp rate of 3.44 × 10⁷ GHz s⁻¹, which are the best values reported among TFLN‐based lasers. An FMCW LiDAR system built with this laser is also experimentally demonstrated, achieving a ranging precision of 4.9 mm, a velocity precision of 0.054 m s⁻¹, and a sampling rate of 5 MSa s⁻¹. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
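In FMCW ranging, as used by the laser above, the round-trip delay of a linear chirp appears as a beat frequency, so range follows from R = c·f_b/(2S) for chirp slope S. A quick check using the chirp rate quoted in the abstract; the beat frequency is an assumed example value, not a measurement from the paper.

```python
# FMCW ranging: a chirp with slope S (Hz/s) reflects off a target at range R;
# the round-trip delay tau = 2R/c appears as a beat frequency f_b = S * tau,
# hence R = c * f_b / (2 * S).
c = 299_792_458.0        # speed of light, m/s
S = 3.44e7 * 1e9         # chirp rate from the abstract: 3.44e7 GHz/s, in Hz/s
f_beat = 2.3e9           # assumed measured beat frequency, Hz (illustrative)
R = c * f_beat / (2 * S) # ~10 m for this beat frequency
```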
28. Remote 3D Imaging and Classification of Pelagic Microorganisms with A Short‐Range Multispectral Confocal LiDAR.
- Author
-
Santos, Joaquim, Jakobsen, Hans H., Petersen, Paul M., and Pedersen, Christian
- Subjects
- *
MARINE ecosystem health , *OPTICAL radar , *LIDAR , *MULTISPECTRAL imaging , *IMAGE recognition (Computer vision) - Abstract
Plankton is essential to maintain healthy aquatic ecosystems since it influences the biological carbon pump globally. However, climate change‐induced alterations to oceanic properties threaten planktonic communities. It is therefore crucial to monitor their abundance to assess the health status of marine ecosystems. In situ optical tools unlock high‐resolution measurements of sub‐millimeter specimens, but state‐of‐the‐art underwater imaging techniques are limited to fixed, small, close‐range volumes, requiring the instruments to be deployed in vertical dives. Here, a novel scanning multispectral confocal light detection and ranging (LiDAR) system for short‐range volumetric sensing in aquatic media is introduced. The system expands the inelastic confocal principle to multiple wavelength channels, allowing the acquisition of 4D point clouds that combine near‐diffraction‐limited morphological and spectroscopic data used to train artificial intelligence (AI) models. Volumetric mapping and classification of microplastics is demonstrated, sorting them by color and size. Furthermore, in vivo autofluorescence is resolved from a community of free‐swimming zooplankton and microalgae, and accurate spectral identification of different genera is accomplished. The deployment of this photonic platform alongside AI models overcomes the complex and subjective task of manual plankton identification and enables non‐intrusive sensing from fixed vantage points, thus constituting a unique tool for underwater environmental monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Genetic Algorithm Empowering Unsupervised Learning for Optimizing Building Segmentation from Light Detection and Ranging Point Clouds.
- Author
-
Sulaiman, Muhammad, Farmanbar, Mina, Belbachir, Ahmed Nabil, and Rong, Chunming
- Subjects
- *
OPTICAL radar , *LIDAR , *GENETIC algorithms , *POINT cloud , *REMOTE sensing - Abstract
This study investigates the application of LiDAR point cloud datasets for building segmentation through a combined approach that integrates unsupervised segmentation with evolutionary optimization. The research evaluates the extent of improvement achievable through genetic algorithm (GA) optimization for LiDAR point cloud segmentation. The unsupervised methodology encompasses preprocessing, adaptive thresholding, morphological operations, contour filtering, and terrain ruggedness analysis. A genetic algorithm was employed to fine-tune the parameters for these techniques. Critical tunable parameters, such as the interpolation method for DSM and DTM generation, scale factor for contrast enhancement, adaptive constant and block size for adaptive thresholding, kernel size for morphological operations, squareness threshold to maintain the shape of predicted objects, and terrain ruggedness index (TRI) were systematically optimized. The study presents the top ten chromosomes with optimal parameter values, demonstrating substantial improvements of 29% in the average intersection over union (IoU) score (0.775) on test datasets. These findings offer valuable insights into LiDAR-based building segmentation, highlighting the potential for increased precision and effectiveness in future applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
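The genetic-algorithm tuning described above searches a parameter space via selection, crossover, and mutation against a segmentation-quality objective. Here is a minimal real-coded GA sketch over two stand-in parameters (loosely analogous to an adaptive-threshold constant and block size) with a toy objective standing in for IoU; the objective, bounds, and operator settings are all assumptions for illustration, not the study's implementation.

```python
import random

def genetic_search(fitness, bounds, pop=20, gens=30, mut=0.2, seed=0):
    """Minimal real-coded GA: truncation selection, uniform crossover,
    clamped Gaussian mutation. `bounds` is a list of (lo, hi) per parameter."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=fitness, reverse=True)[: pop // 2]  # keep best half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < mut:                                # mutate one gene
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        P = elite + children
    return max(P, key=fitness)

# Toy stand-in for a segmentation-IoU objective with its peak at (0.3, 11).
iou = lambda p: 1.0 - (p[0] - 0.3) ** 2 - ((p[1] - 11) / 20) ** 2
best = genetic_search(iou, [(0.0, 1.0), (3, 31)])
```

A real run would replace `iou` with an evaluation of the full unsupervised segmentation pipeline against reference masks, which is what makes the GA expensive but parallelizable.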
30. Current Status of Remote Sensing for Studying the Impacts of Hurricanes on Mangrove Forests in the Coastal United States.
- Author
-
Dutta Roy, Abhilash, Karpowicz, Daria Agnieszka, Hendy, Ian, Rog, Stefanie M., Watt, Michael S., Reef, Ruth, Broadbent, Eben North, Asbridge, Emma F., Gebrie, Amare, Ali, Tarig, and Mohan, Midhun
- Subjects
- *
REMOTE sensing , *OPTICAL radar , *REMOTE-sensing images , *LIDAR , *COASTAL zone management , *MANGROVE ecology , *STORM surges , *HURRICANES - Abstract
Hurricane incidents have become increasingly frequent along the coastal United States and have had a negative impact on the mangrove forests and their ecosystem services across the southeastern region. Mangroves play a key role in providing coastal protection during hurricanes by attenuating storm surges and reducing erosion. However, their resilience is being increasingly compromised due to climate change through sea level rises and the greater intensity of storms. This article examines the role of remote sensing tools in studying the impacts of hurricanes on mangrove forests in the coastal United States. Our results show that various remote sensing tools including satellite imagery, Light detection and ranging (LiDAR) and unmanned aerial vehicles (UAVs) have been used to detect mangrove damage, monitor their recovery and analyze their 3D structural changes. Landsat 8 OLI (14%) has been particularly useful in long-term assessments, followed by Landsat 5 TM (9%) and NASA G-LiHT LiDAR (8%). Random forest (24%) and linear regression (24%) models were the most common modeling techniques, with the former being the most frequently used method for classifying satellite images. Some studies have shown significant mangrove canopy loss after major hurricanes, and damage was seen to vary spatially based on factors such as proximity to oceans, elevation and canopy structure, with taller mangroves typically experiencing greater damage. Recovery rates after hurricane-induced damage also vary, as some areas were seen to show rapid regrowth within months while others remained impacted after many years. The current challenges include capturing fine-scale changes owing to the dearth of remote sensing data with high temporal and spatial resolution. 
This review provides insights into the current remote sensing applications used in hurricane-prone mangrove habitats and is intended to guide future research directions, inform coastal management strategies and support conservation efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Automatic Correction of Time-Varying Orbit Errors for Single-Baseline Single-Polarization InSAR Data Based on Block Adjustment Model.
- Author
-
Hu, Huacan, Fu, Haiqiang, Zhu, Jianjun, Liu, Zhiwei, Wu, Kefu, Zeng, Dong, Wan, Afang, and Wang, Feng
- Subjects
- *
OPTICAL radar , *SYNTHETIC aperture radar , *ORBIT determination , *LIDAR , *STANDARD deviations - Abstract
Orbit error is one of the primary error sources of interferometric synthetic aperture radar (InSAR) and differential InSAR (D-InSAR) measurements, arising from inaccurate orbit determination of SAR platforms. Typically, orbit error in the interferogram can be estimated using polynomial models. However, correcting for orbit errors with significant time-varying characteristics presents two main challenges: (1) the complexity and variability of the azimuth time-varying orbit errors make it difficult to accurately model them using a set of polynomial coefficients; (2) existing patch-based polynomial models rely on empirical segmentation and overlook the time-varying characteristics, resulting in residual orbital error phase. To overcome these problems, this study proposes an automated block adjustment framework for estimating time-varying orbit errors, incorporating the following innovations: (1) the differential interferogram is divided into several blocks along the azimuth direction to model orbit error separately; (2) automated segmentation is achieved by extracting morphological features (i.e., peaks and troughs) from the azimuthal profile; (3) a block adjustment method combining control points and connection points is proposed to determine the model coefficients of each block for the orbital error phase estimation. The feasibility of the proposed method was verified by repeat-pass L-band spaceborne and P-band airborne InSAR data, and finally, the InSAR digital elevation model (DEM) was generated for performance evaluation. Compared with the high-precision light detection and ranging (LiDAR) elevation, the root mean square error (RMSE) of InSAR DEM was reduced from 18.27 m to 7.04 m in the spaceborne dataset and from 7.83~14.97 m to 3.36~6.02 m in the airborne dataset. Then, further analysis demonstrated that the proposed method outperforms existing algorithms under single-baseline and single-polarization conditions. 
Moreover, the proposed method is applicable to both spaceborne and airborne InSAR data, demonstrating strong versatility and potential for broader applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
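The block adjustment above fits polynomial models to azimuth blocks to capture time-varying orbit error. As a much-simplified one-dimensional analogue (fixed rather than automated segmentation, and no control or connection points), one can fit and subtract a low-order ramp per azimuth block:

```python
import numpy as np

def blockwise_deramp(phase, n_blocks=2, deg=1):
    """Fit and remove a low-order polynomial ramp independently per azimuth
    block -- a toy 1-D analogue of block adjustment, without tie points."""
    az = np.arange(phase.size, dtype=float)
    out = phase.astype(float).copy()
    for seg in np.array_split(np.arange(phase.size), n_blocks):
        coef = np.polyfit(az[seg], phase[seg], deg)
        out[seg] -= np.polyval(coef, az[seg])
    return out

az = np.arange(300, dtype=float)
# Piecewise-linear "orbit error" whose slope changes mid-scene (time-varying).
ramp = np.where(az < 150, 0.01 * az, 1.5 + 0.03 * (az - 150))
signal = 0.2 * np.sin(az / 15.0)          # small genuine phase signal
resid = blockwise_deramp(ramp + signal)   # ramp removed block by block
```

A single global polynomial would leave a large residual here precisely because the error's slope changes along azimuth, which is the motivation for the paper's per-block modeling with adjustment across block boundaries.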
32. HeLiPR: Heterogeneous LiDAR dataset for inter-LiDAR place recognition under spatiotemporal variations.
- Author
-
Jung, Minwoo, Yang, Wooseong, Lee, Dongjae, Gil, Hyeonjae, Kim, Giseop, and Kim, Ayoung
- Subjects
- *
OPTICAL radar , *LIDAR , *IMAGE sensors , *ROBOTICS , *VELOCITY - Abstract
Place recognition is crucial for robot localization and loop closure in simultaneous localization and mapping (SLAM). Light Detection and Ranging (LiDAR), known for its robust sensing capabilities and measurement consistency even under varying illumination, has become pivotal in various fields, surpassing traditional imaging sensors in certain applications. Among the various types of LiDAR, spinning LiDARs are widely used, while non-repetitive scanning patterns have recently been adopted in robotics applications. Some LiDARs provide additional measurements such as reflectivity, near-infrared (NIR) intensity, and, in the case of frequency-modulated continuous-wave (FMCW) LiDARs, velocity. Despite these advances, there is a lack of comprehensive datasets reflecting the broad spectrum of LiDAR configurations for place recognition. To tackle this issue, our paper proposes the HeLiPR dataset, curated especially for place recognition with heterogeneous LiDARs and embodying spatiotemporal variations. To the best of our knowledge, the HeLiPR dataset is the first heterogeneous LiDAR dataset supporting inter-LiDAR place recognition with both non-repetitive and spinning LiDARs, accommodating different fields of view (FOVs) and varying numbers of rays. The dataset covers diverse environments, from urban cityscapes to high-dynamic freeways, over a month, enhancing adaptability and robustness across scenarios. Notably, the HeLiPR dataset includes trajectories parallel to MulRan sequences, making it valuable for research in heterogeneous LiDAR place recognition and long-term studies. The dataset is accessible at https://sites.google.com/view/heliprdataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. MUN-FRL: A Visual-Inertial-LiDAR Dataset for Aerial Autonomous Navigation and Mapping.
- Author
-
Thalagala, Ravindu G, De Silva, Oscar, Jayasiri, Awantha, Gubbels, Arthur, Mann, George KI, and Gosine, Raymond G
- Subjects
- *
GLOBAL Positioning System , *OPTICAL radar , *OBJECT recognition (Computer vision) , *LIDAR , *NAUTICAL charts - Abstract
This paper presents a unique outdoor aerial visual-inertial-LiDAR dataset captured using a multi-sensor payload to promote the global navigation satellite system (GNSS)-denied navigation research. The dataset features flight distances ranging from 300 m to 5 km, collected using a DJI-M600 hexacopter drone and the National Research Council (NRC) Bell412 Advanced Systems Research Aircraft (ASRA). The dataset consists of hardware-synchronized monocular images, inertial measurement unit (IMU) measurements, 3D light detection and ranging (LiDAR) point-clouds, and high-precision real-time kinematic (RTK)-GNSS based ground truth. Nine data sequences were collected as robot operating system (ROS) bags over 100 mins of outdoor environment footage ranging from urban areas, highways, airports, hillsides, prairies, and waterfronts. The dataset was collected to facilitate the development of visual-inertial-LiDAR odometry and mapping algorithms, visual-inertial navigation algorithms, object detection, segmentation, and landing zone detection algorithms based on real-world drone and full-scale helicopter data. All the data sequences contain raw sensor measurements, hardware timestamps, and spatio-temporally aligned ground truth. The intrinsic and extrinsic calibrations of the sensors are also provided, along with raw calibration datasets. A performance summary of state-of-the-art methods applied on the data sequences is also provided. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Empirical uncertainty evaluation for the pose of a kinematic LiDAR-based multi-sensor system.
- Author
-
Ernst, Dominik, Vogel, Sören, Neumann, Ingo, and Alkhatib, Hamza
- Subjects
- *
OPTICAL radar , *LIDAR , *LASER measurement , *KALMAN filtering , *MULTISENSOR data fusion - Abstract
Kinematic multi-sensor systems (MSS) describe their movements through six-degree-of-freedom trajectories, which are often evaluated primarily for accuracy. However, understanding their self-reported uncertainty is crucial, especially when operating in diverse environments like urban, industrial, or natural settings. This is important so that downstream algorithms can provide correct and safe decisions, e.g. for autonomous driving. In the context of localization, light detection and ranging sensors (LiDARs) are widely applied for tasks such as generating, updating, and integrating information from maps, supporting other sensors in estimating trajectories. However, popular low-cost LiDARs deviate from other geodetic sensors in their uncertainty modeling. This paper therefore demonstrates the uncertainty evaluation of a LiDAR-based MSS localizing itself using an inertial measurement unit (IMU) and matching LiDAR observations to a known map. The necessary steps for accomplishing the sensor data fusion in a novel Error State Kalman filter (ESKF) are presented, considering the influences of the sensor uncertainties and their combination. The results provide new insights into the impact of random and systematic deviations resulting from parameters and their uncertainties established in prior calibrations. The evaluation is done using the Mahalanobis distance to consider the deviations of the trajectory from the ground truth weighted by the self-reported uncertainty, and to evaluate the consistency in hypothesis testing. The evaluation is performed using a real data set obtained from an MSS consisting of a tactical-grade IMU and a Velodyne Puck, in combination with reference data from a Laser Tracker in a laboratory environment. The data set consists of measurements for calibrations and multiple kinematic experiments. In the first step, the data set is simulated based on the Laser Tracker measurements to provide a baseline for the results under assumed perfect corrections.
In comparison, the results using a more realistic simulated data set and the real IMU and LiDAR measurements show deviations about a factor of five higher, leading to an inconsistent estimation. The results offer insights into the open challenges related to the assumptions for integrating low-cost LiDARs in MSSs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
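The Mahalanobis-distance evaluation above weights trajectory deviations by the filter's self-reported covariance, and consistency is judged by comparing the squared distance against a chi-square bound in hypothesis testing. A minimal sketch with hypothetical numbers (the deviation and covariance are illustrative, not the paper's data):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance: sqrt(d^T Sigma^-1 d) for deviation d = x - mean."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# Hypothetical 2-D pose deviation and self-reported covariance (illustrative).
est, truth = [1.2, 0.9], [1.0, 1.0]
cov = np.array([[0.04, 0.0], [0.0, 0.04]])
m = mahalanobis(est, truth, cov)
# Consistency check: for 2 degrees of freedom at the 95% level, the squared
# distance should stay below the chi-square quantile 5.99.
consistent = m ** 2 < 5.99
```

An overconfident filter (covariance too small) inflates m and fails this test, which is exactly the inconsistency the paper reports for the realistic data set.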
35. Interactive transformer and CNN network for fusion classification of hyperspectral and LiDAR data.
- Author
-
Wang, Leiquan, Liu, Wenwen, Lyu, Dong, Zhang, Peiying, Guo, Fangming, Hu, Yabin, and Xu, Mingming
- Subjects
- *
OPTICAL radar , *LIDAR , *TRANSFORMER models , *CONVOLUTIONAL neural networks , *IMAGE recognition (Computer vision) - Abstract
The Transformer has become pivotal for the integrated analysis of multi-source remote-sensing (RS) data in Earth observation, particularly in applications such as the fusion classification of hyperspectral images (HSI) and Light Detection and Ranging (LiDAR) data. However, Transformers are often employed as effective feature extractors by adopting similar processing blocks for different modalities from multi-source sensors, overlooking differences in imaging principles and data characteristics. Moreover, in the feature extraction process across different sensor data, there is a lack of necessary cross-modal information interaction, leading to insufficient utilization of complementary information between different sensors and resulting in suboptimal fusion outcomes. In this paper, we propose an interactive Transformer and CNN network for the fusion classification of HSI and LiDAR data. Specifically, a heterogeneous three-branch network architecture is designed for HSI and LiDAR data, where Transformers and CNNs encapsulate global contextual spatial and spectral information for HSI and capture geometric elevation patterns for LiDAR data, respectively. Elevation-Spatial Interaction (ESI) and Spectral-Spatial Interaction (SSI) modules are then introduced for multi-stage feature interaction. ESI enables the CNN-Transformer network to focus on essential local elevation details while simultaneously modelling global contextual spatial information. SSI facilitates the Transformer-Transformer network to cyclically intertwine spectral and spatial information for long-range spectral-spatial feature fusion. Finally, the interacted elevation, spatial, and spectral features undergo the Gated Fusion module to achieve hierarchical fusion adaptively, resulting in an elevation-spatial-spectral representation. Experiments conducted on three benchmark HSI-LiDAR datasets demonstrate the effectiveness of our proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Classification of hyperspectral and LiDAR data by transformer-based enhancement.
- Author
-
Pan, Jiechen, Shuai, Xing, Xu, Qing, Dai, Mofan, Zhang, Guoping, and Wang, Guo
- Subjects
- *
OPTICAL radar , *LIDAR , *MULTISENSOR data fusion , *REMOTE sensing , *TRANSFORMER models , *DEEP learning - Abstract
The integration of multi-modal data allows for a more accurate representation of the ground characteristics. For a comprehensive interpretation of remote sensing data, existing multi-modal data fusion research mainly focuses on the joint utilization of 3D Light Detection and Ranging (LiDAR) and 2D Hyperspectral Image (HSI) data. However, existing algorithms do not pay much attention to the interaction of high-level semantic information between different modal data before fusion. This paper proposes a novel multi-modal data fusion deep learning network with the Cross-Modal Self-Attentive Feature Fusion Transformer (SAFFT). The framework employs a multi-head self-attention layer to fuse various attention information from multiple heads, effectively enhancing advanced feature information from different modalities for comprehensive integration. Experimental results on the Houston 2013 dataset demonstrate the effectiveness of the proposed method, which achieves an overall accuracy (OA) of 94.3757% in classifying 15 semantic classes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
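Both abstracts above build on multi-head self-attention, whose core is scaled dot-product attention, softmax(QKᵀ/√d)V. A single-head numpy sketch with a cross-modal toy in which HSI tokens attend to LiDAR tokens; the shapes and random data are illustrative, not either paper's architecture.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Cross-modal toy: 4 HSI tokens attend to 6 LiDAR tokens (8-dim features).
rng = np.random.default_rng(0)
hsi = rng.normal(size=(4, 8))
lidar = rng.normal(size=(6, 8))
fused = attention(hsi, lidar, lidar)   # shape (4, 8)
```

A multi-head layer runs several such maps in parallel on learned projections of Q, K, and V and concatenates the results; cross-modal fusion blocks like those described above differ mainly in where Q comes from versus K and V.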
37. Multi-scale analysis and paleoseismic investigations along the Geumwang Fault: an example of integrated approach in paleoseismology in slow tectonic region.
- Author
-
Kim, Chang-Min, Lee, Tae-Ho, Choi, Jin-Hyuck, Lee, Hoil, and Kim, Dong-Eun
- Subjects
- *
OPTICALLY stimulated luminescence , *OPTICAL radar , *LIDAR , *FAULT zones , *FLUID injection , *PALEOSEISMOLOGY - Abstract
Paleoseismological research on a slowly deforming intraplate fault can provide essential information for understanding not only the spatiotemporal characteristics of past earthquakes but also seismic behavior in the case of long recurrence intervals. To reveal the paleoseismological properties and faulting processes of the intraplate fault, the Geumwang Fault Zone in the central Korean Peninsula, we conducted comprehensive paleoseismological investigations along the fault zone, incorporating geomorphological mapping with airborne light detection and ranging (LiDAR), electrical resistivity tomography (ERT), borehole drilling, trench excavation, optically stimulated luminescence (OSL) dating, and microstructural analysis. Along NE-SW-striking lineaments of the Geumwang Fault Zone, surface deformation is weakly recognized in LiDAR imagery in a damage zone along the northern section of the fault zone (Suha site). Results of ERT and borehole logging at the Suha site suggest, respectively, a localized zone of low resistivity and a separation of the unconformity level in the sedimentary layers. A trench section excavated along the ERT traverse and borehole sites exposes a fault contact between granite and unconsolidated Quaternary strata comprising boulders (47 ± 3 ka), clayey sand (24 ± 2 ka), pebbly cobbles, coarse sand, and artificial layers from bottom to top. The < 5-cm-wide slip zone is oriented N09°E/85°NW and cuts the granite to the west and the boulder layer to the east. This slip zone, covered by the clayey sand stratum, records an apparent vertical offset of ∼1.5 m and has sub-horizontal striations indicating dextral movement. Microstructures at the contact between the granite and the boulder layer support the occurrence of seismic slip propagation along the contact and include injected sedimentary materials, clay-clast aggregates, and fresh, open fractures in quartz and feldspar grains in the boulder layer.
The slip zone consists of a < 4.5-cm-wide zone of cataclasite and a < 5-mm-wide principal slip zone (PSZ). Microstructures in the slip zone and sediments near the slip zone include seismic-slip indicators of pressurized gouge materials and fluid injection within the PSZ, as well as deformed sediments. These reveal that the slip zone underwent repeated seismic slip events during uplift to the surface. Our paleoseismological analyses with microstructures show that the boulder layer was cut by strike-slip faulting with a minor vertical component between 47 and 24 ka, following which the overlying sediments were deposited along the exposed fault scarp as incision fill. The results show that microstructural observations can provide key information on the deformation of unconsolidated sediments and on the nature and timing of seismic faulting. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. New evidence of late Quaternary earthquake surface rupturing along the Gongju Fault, central Korea.
- Author
-
Kim, Dong-Eun, Kim, Chang-Min, Choi, Han-Woo, and Lee, Hoil
- Subjects
- *
OPTICALLY stimulated luminescence , *OPTICAL radar , *LIDAR , *DEFORMATION of surfaces , *LANDFORMS , *SURFACE fault ruptures , *PALEOSEISMOLOGY - Abstract
Advanced technologies such as light detection and ranging (LiDAR) and unmanned aerial vehicles (UAVs) have revolutionized the detection of subtle surface deformation and the generation of high-resolution digital elevation models, overcoming the challenges posed by low tectonic activity and climatic surface erosion on fault-generated landscapes. This study presents a new record of paleoearthquake surface rupture along a section of the central Gongju Fault, transecting the central part of the Korean Peninsula, by analyzing geomorphic, stratigraphic, and structural features. We identified a NE-SW-striking, prominent fault-generated landform derived from LiDAR analysis and surface ruptures showing a vertical offset of < 15 cm by trench excavation. We also constrained the depositional ages to ∼94 ka using optically stimulated luminescence (OSL). Our comprehensive findings suggest that the seismic activity along the main trace of the Gongju Fault resulted in a distributed deformation within the fault zone, likely from multiple seismic events rather than a single occurrence. Structural features of the surface ruptures exposed on the trench wall show systematic changes in slip zone geometry, influenced by the pre-existing fault geometry, the imposed regional tectonic stress, and the physical properties of unconsolidated materials. This study enhances our understanding of seismic activity and fault dynamics in the central part of the Korean Peninsula, highlighting the significant influence of geological and climatic factors over tens of thousands of years in the intraplate regions with similar tectonic and climate settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Minimal configuration point cloud odometry and mapping.
- Author
-
Bhandari, Vedant, Phillips, Tyson Govan, and McAree, Peter Ross
- Subjects
- *
OPTICAL radar , *LIDAR , *POINT cloud , *FEATURE extraction , *PROBLEM solving - Abstract
Simultaneous Localization and Mapping (SLAM) refers to the common requirement for autonomous platforms to estimate their pose and map their surroundings. There are many robust and real-time methods available for solving the SLAM problem. Most are divided into a front-end, which performs incremental pose estimation, and a back-end, which smooths and corrects the results. A low-drift front-end odometry solution is needed for robust and accurate back-end performance. Front-end methods employ various techniques, such as point cloud-to-point cloud (PC2PC) registration, key feature extraction and matching, and deep learning-based approaches. The front-end algorithms have become increasingly complex in the search for low-drift solutions and many now have large configuration parameter sets. It is desirable that the front-end algorithm should be inherently robust so that it does not need to be tuned by several, perhaps many, configuration parameters to achieve low drift in various environments. To address this issue, we propose Simple Mapping and Localization Estimation (SiMpLE), a front-end LiDAR-only odometry method that requires five low-sensitivity configurable parameters. SiMpLE is a scan-to-map point cloud registration algorithm that is straightforward to understand, configure, and implement. We evaluate SiMpLE using the KITTI, MulRan, UrbanNav, and a dataset created at the University of Queensland. SiMpLE performs among the top-ranked algorithms in the KITTI dataset and outperformed all prominent open-source approaches in the MulRan dataset whilst having the smallest configuration set. The UQ dataset also demonstrated accurate odometry with low-density point clouds using Velodyne VLP-16 and Livox Horizon LiDARs. SiMpLE is a front-end odometry solution that can be integrated with other sensing modalities and pose graph-based back-end methods for increased accuracy and long-term mapping. 
The lightweight and portable code for SiMpLE is available at: https://github.com/vb44/SiMpLE. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
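The SiMpLE abstract above hinges on scan-to-map point cloud registration. As a hedged illustration of the core alignment step such front ends build on (this is not the authors' implementation, and all names here are ours), a minimal rigid alignment via the Kabsch/SVD method with known correspondences:

```python
import numpy as np

def kabsch_align(scan, target):
    """Estimate the rigid transform (R, t) that maps `scan` onto `target`,
    given one-to-one correspondences (matching rows of two N x 3 arrays)."""
    cs, ct = scan.mean(axis=0), target.mean(axis=0)
    H = (scan - cs).T @ (target - ct)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t

# toy check: transform a scan by a known pose and recover that pose
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R_est, t_est = kabsch_align(pts, pts @ R_true.T + t_true)
```

In a real front end this step is iterated with nearest-neighbour correspondence search against the accumulated map (ICP-style); SiMpLE's exact formulation is in the repository linked in the abstract.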
40. Reconstruction of Coal Mining Subsidence Field by Fusion of SAR and UAV LiDAR Deformation Data.
- Author
-
Yang, Bin, Du, Weibing, Zou, Youfeng, Zhang, Hebing, Chai, Huabin, Wang, Wei, Song, Xiangyang, and Zhang, Wenzhi
- Subjects
- *
MINE subsidences , *OPTICAL radar , *LIDAR , *SYNTHETIC aperture radar , *COAL mining - Abstract
The geological environment damage caused by coal mining subsidence has become an important factor affecting the sustainable development of mining areas. Reconstruction of the Coal Mining Subsidence Field (CMSF) is the key to preventing geological disasters, and the needs of CMSF reconstruction cannot be met by relying solely on a single remote sensing technology. The combination of Unmanned Aerial Vehicle (UAV) and Synthetic Aperture Radar (SAR) has complementary advantages; however, the data fusion strategy of refining the SAR deformation field with UAV data still needs continual updating. This paper proposed a Prior Weighting (PW) method based on Satellite Aerial (SA) heterogeneous remote sensing. The method can be used to fuse SAR and UAV Light Detection and Ranging (LiDAR) data for ground subsidence parameter inversion. Firstly, the subsidence boundary from Differential Interferometric SAR (DInSAR) was combined with the large-gradient subsidence from Pixel Offset Tracking (POT) to initialize the SAR preliminary CMSF. Secondly, the SAR preliminary CMSF was refined with UAV LiDAR data, weighting the SAR and UAV LiDAR data at 0.4 and 0.6, respectively. After the data fusion, the subsidence field was reconstructed. The results showed that the overall CMSF accuracy improved from ±144 mm to ±51 mm. The relative errors of the surface subsidence factor and main influence angle tangent calculated from the physical model and in situ measured data are 1.3% and 1.7%, respectively. This shows that the proposed SAR/UAV fusion method has significant advantages in the reconstruction of CMSF, and the PW method contributes to the prevention and control of mining subsidence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
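The PW fusion described above combines SAR and UAV LiDAR deformation estimates with weights of 0.4 and 0.6. A minimal per-pixel sketch of such a weighted fusion (our simplification for illustration, not the paper's full iterative scheme; the gap-handling rule is our assumption):

```python
import numpy as np

def fuse_subsidence(sar_grid, lidar_grid, w_sar=0.4, w_lidar=0.6):
    """Per-pixel weighted fusion of co-registered subsidence grids (mm).
    Where the UAV LiDAR grid has gaps (NaN), fall back to SAR alone."""
    fused = w_sar * sar_grid + w_lidar * lidar_grid
    return np.where(np.isnan(lidar_grid), sar_grid, fused)

sar   = np.array([[100.0, 150.0], [200.0, 250.0]])
lidar = np.array([[110.0, np.nan], [220.0, 240.0]])
fused = fuse_subsidence(sar, lidar)
print(fused)  # e.g. 0.4*100 + 0.6*110 = 106.0; the NaN gap falls back to 150.0
```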
41. Recent Progresses on Hybrid Lithium Niobate External Cavity Semiconductor Lasers.
- Author
-
Wang, Min, Fang, Zhiwei, Zhang, Haisu, Lin, Jintian, Zhou, Junxia, Huang, Ting, Zhu, Yiran, Li, Chuntao, Yu, Shupeng, Fu, Botao, Qiao, Lingling, and Cheng, Ya
- Subjects
- *
OPTICAL feedback , *OPTICAL radar , *TUNABLE lasers , *LIDAR , *OPTICAL resonators - Abstract
Thin film lithium niobate (TFLN) has become a promising material platform for large scale photonic integrated circuits (PICs). As an indispensable component in PICs, on-chip electrically tunable narrow-linewidth lasers have attracted widespread attention in recent years due to their significant applications in high-speed optical communication, coherent detection, precision metrology, laser cooling, coherent transmission systems, and light detection and ranging (LiDAR). However, research on electrically driven, high-power, and narrow-linewidth laser sources on TFLN platforms is still in its infancy. This review summarizes the recent progress on the narrow-linewidth compact laser sources boosted by hybrid TFLN/III-V semiconductor integration techniques, which will offer an alternative solution for on-chip high performance lasers for the future TFLN PIC industry and cutting-edge sciences. The review begins with a brief introduction of the current status of compact external cavity semiconductor lasers (ECSLs) and recently developed TFLN photonics. The following section presents various ECSLs based on TFLN photonic chips with different photonic structures to construct external cavity for on-chip optical feedback. Some conclusions and future perspectives are provided. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Active Region Mode Control for High-Power, Low-Linewidth Broadened Semiconductor Optical Amplifiers for Light Detection and Ranging.
- Author
-
Tang, Hui, Zhang, Meng, Liang, Lei, Zhang, Tianyi, Qin, Li, Song, Yue, Lei, Yuxin, Jia, Peng, Wang, Yubing, Qiu, Cheng, Zheng, Chuantao, Li, Xin, Chen, Yongyi, Li, Dan, Ning, Yongqiang, and Wang, Lijun
- Subjects
- *
OPTICAL radar , *LIDAR , *LASERS , *BANDWIDTHS , *WAVELENGTHS - Abstract
This paper introduces a semiconductor optical amplifier (SOA) that achieves high power with minimal linewidth broadening through active region mode control. By integrating mode control with broad-spectrum epitaxial material design, the device achieves high gain, high power, and wide-band output. At a wavelength of 1550 nm and an ambient temperature of 20 °C, the output power reaches 757 mW when the input power is 25 mW, and the gain is 21.92 dB when the input power is 4 mW. The 3 dB gain bandwidth is 88 nm, and the linewidth expansion of the input laser after amplification through the SOA is only 1.031 times. The device strikes a balance between high gain and high power, offering a new amplifier option for long-range light detection and ranging (LiDAR). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
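The SOA figures quoted above can be cross-checked with the standard decibel gain relation, gain_dB = 10·log10(P_out/P_in). The 25 mW → 757 mW operating point corresponds to roughly 14.8 dB, i.e. strong gain compression relative to the 21.92 dB small-signal gain at 4 mW input (the function name below is ours):

```python
import math

def gain_db(p_out_mw, p_in_mw):
    """Optical gain in decibels from input/output powers in mW."""
    return 10.0 * math.log10(p_out_mw / p_in_mw)

# Saturated operating point from the abstract: 25 mW in -> 757 mW out
print(round(gain_db(757, 25), 2))   # 14.81 dB (compressed vs. small-signal gain)

# The 21.92 dB small-signal gain at 4 mW input implies an output of:
p_out = 4 * 10 ** (21.92 / 10)
print(round(p_out))                 # 622 mW
```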
43. A LiDAR-Camera Joint Calibration Algorithm Based on Deep Learning.
- Author
-
Ren, Fujie, Liu, Haibin, and Wang, Huanjie
- Subjects
- *
OPTICAL radar , *LIDAR , *GEOGRAPHICAL perception , *FEATURE extraction , *POSITION sensors , *DEEP learning - Abstract
Multisensor (MS) data fusion is important for improving the stability of vehicle environmental perception systems. MS joint calibration is a prerequisite for the fusion of multimodality sensors. Traditional calibration methods based on calibration boards require the manual extraction of many features and manual registration, resulting in a cumbersome calibration process and significant errors. A deep-learning-based joint calibration algorithm for a Light Detection and Ranging (LiDAR) sensor and a camera is proposed that requires no special calibration objects. A network model constructed based on deep learning can automatically capture object features in the environment and complete the calibration by matching and calculating object features. A mathematical model was constructed for joint LiDAR-camera calibration, and the process of sensor joint calibration was analyzed in detail. By constructing a deep-learning-based network model to determine the parameters of the rotation matrix and translation matrix, the relative spatial positions of the two sensors were determined to complete the joint calibration. The network model consists of three parts: a feature extraction module, a feature-matching module, and a feature aggregation module. The feature extraction module extracts the image features of color and depth images, the feature-matching module calculates the correlation between the two, and the feature aggregation module determines the calibration matrix parameters. The proposed algorithm was validated and tested on the KITTI-odometry dataset and compared with other advanced algorithms. The experimental results show that the average translation error of the calibration algorithm is 0.26 cm, and the average rotation error is 0.02°. The calibration error is lower than those of other advanced algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
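The joint calibration described above estimates a rotation matrix and a translation vector between the LiDAR and the camera. Once the extrinsics (R, t) and the camera intrinsic matrix K are known, LiDAR points project into the image via the standard pinhole model; a minimal sketch (the intrinsic values are hypothetical, and this is not tied to the paper's network):

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project N x 3 LiDAR points into pixel coordinates using the
    extrinsic calibration (R, t) and the camera intrinsic matrix K."""
    pts_cam = points_lidar @ R.T + t          # LiDAR frame -> camera frame
    uvw = pts_cam @ K.T                       # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]           # perspective divide

# hypothetical intrinsics, identity extrinsics for illustration
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pix = project_lidar_to_image(np.array([[1.0, 0.5, 10.0]]), R, t, K)
print(pix)  # a point 1 m right, 0.5 m down at 10 m depth -> [[390. 275.]]
```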
44. Research on the Depth Image Reconstruction Algorithm Using the Two-Dimensional Kaniadakis Entropy Threshold.
- Author
-
Yang, Xianhui, Sun, Jianfeng, Ma, Le, Zhou, Xin, Lu, Wei, and Li, Sining
- Subjects
- *
IMAGE reconstruction algorithms , *OPTICAL radar , *LIDAR , *AVALANCHE diodes , *THREE-dimensional imaging , *IMAGE denoising - Abstract
Photon-counting light detection and ranging (LiDAR), especially Geiger-mode avalanche photodiode (Gm-APD) LiDAR, can obtain three-dimensional images of the scene with single-photon sensitivity, but background noise limits the imaging quality of the laser radar. In order to solve this problem, a depth image estimation method based on a two-dimensional (2D) Kaniadakis entropy thresholding method is proposed, which transforms a weak-signal extraction problem into a denoising problem for point cloud data. The characteristics of signal peak aggregation in the data and the spatio-temporal correlation features between target image elements in the point cloud-intensity data are exploited. Through adequate simulations and outdoor target-imaging experiments under different signal-to-background ratios (SBRs), the effectiveness of the method under low signal-to-background ratio conditions is demonstrated. When the SBR is 0.025, the proposed method reaches a target recovery rate of 91.7%, which is better than existing typical methods such as the Peak-picking method, the Cross-Correlation method, and the sparse Poisson intensity reconstruction algorithm (SPIRAL), which achieve target recovery rates of 15.7%, 7.0%, and 18.4%, respectively. Additionally, compared with SPIRAL, the reconstruction recovery ratio is improved by 73.3%. The proposed method greatly improves the integrity of the target under high-background-noise environments and finally provides a basis for feature extraction and target recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
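The method above builds on the Kaniadakis κ-entropy, S_κ = −Σ p_i ln_κ(p_i) with ln_κ(x) = (x^κ − x^(−κ))/(2κ), which recovers the Shannon entropy in the limit κ → 0. A minimal 1D sketch of the entropy itself (the paper's 2D thresholding over the point cloud-intensity histogram is considerably more involved):

```python
import numpy as np

def kaniadakis_entropy(p, kappa=0.5):
    """S_kappa = -sum_i p_i * ln_kappa(p_i), where
    ln_kappa(x) = (x**kappa - x**(-kappa)) / (2*kappa);
    approaches the Shannon entropy as kappa -> 0."""
    p = p[p > 0]                               # ignore empty histogram bins
    ln_k = (p**kappa - p**(-kappa)) / (2.0 * kappa)
    return -np.sum(p * ln_k)

p = np.array([0.25, 0.25, 0.25, 0.25])         # uniform 4-bin histogram
print(round(kaniadakis_entropy(p, kappa=1e-6), 4))  # ~1.3863 = ln 4 (Shannon limit)
print(round(kaniadakis_entropy(p, kappa=0.5), 4))   # 1.5
```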
45. Tackling the Thorny Dilemma of Mapping Southeastern Sicily's Coastal Archaeology Beneath Dense Mediterranean Vegetation: A Drone‐Based LiDAR Approach.
- Author
-
Calderone, Dario, Lercari, Nicola, Tanasi, Davide, Busch, Dennis, Hom, Ryan, and Lanteri, Rosa
- Subjects
- *
OPTICAL radar , *LIDAR , *OPTICAL scanners , *COASTAL archaeology , *AIRBORNE lasers , *LANDSCAPE archaeology - Abstract
ABSTRACT Airborne laser scanning (ALS), commonly known as Light Detection and Ranging (LiDAR), is a remote sensing technique that enables transformative archaeological research by providing high‐density 3D representations of landscapes and sites covered by vegetation, whose analysis reveals hidden features and structures. ALS can detect targets under trees and grasslands, making it an ideal archaeological survey and mapping tool. ALS instruments are usually mounted on piloted aircraft. However, since the mid‐2010s, smaller laser scanners can be mounted on uncrewed aerial vehicles or drones. In this article, we examined the viability of drone‐based ALS for archaeological applications by utilizing a RIEGL VUX‐UAV22 sensor to capture point clouds with high spatial resolution at the archaeological site of Heloros in Southeastern Sicily, founded by the Greeks in the late eighth century BCE. Using this laser scanner, we surveyed over 1.6 km2 of the archaeological landscape, producing datasets that outperformed noncommercial airborne ALS data for the region made available by the Italian government. We produced derivative imagery free of vegetation, which we visualized in GIS using a modified Local Relief Model technique to aid our archaeological analyses. Our findings demonstrate that drone‐based ALS can penetrate the dense Mediterranean canopy of coastal Sicily with sufficient point density to enable more efficient mapping of underlying archaeological features such as stone quarries, cart tracks, defensive towers and fortification walls. Our study proved that drone‐based ALS sensors can be easily transported to remote locations and that in‐house lab staff can safely operate them, which enables multiple on‐demand surveys and opportunistic collections to be conducted on the fly when environmental conditions are ideal. We conclude that these capabilities further increase the benefits of utilizing ALS for surveying the archaeological landscape under the Mediterranean canopy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
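The "modified Local Relief Model" mentioned above is, in its basic form, the difference between a DEM and a low-pass-smoothed copy of itself, which suppresses large-scale topography and leaves small features (walls, quarries, cart tracks) visible. A basic, unmodified sketch, assuming a simple moving-average smoother (the study's modified variant is not reproduced here):

```python
import numpy as np

def local_relief_model(dem, radius=2):
    """Subtract a moving-average-smoothed copy of the DEM from itself,
    highlighting small-scale relief hidden on larger slopes."""
    k = 2 * radius + 1
    pad = np.pad(dem, radius, mode="edge")     # replicate edges
    smooth = np.zeros_like(dem, dtype=float)
    h, w = dem.shape
    for i in range(h):
        for j in range(w):
            smooth[i, j] = pad[i:i + k, j:j + k].mean()
    return dem - smooth

yy, xx = np.mgrid[0:9, 0:9]
dem = 0.5 * xx.astype(float)        # a uniform regional slope
dem[4, 4] += 2.0                    # small mound sitting on the slope
lrm = local_relief_model(dem, radius=2)
print(lrm[4, 4])  # the mound dominates the local relief (~1.92)
```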
46. Large‐Range Beam Steering through Dynamic Manipulation of Topological Charges.
- Author
-
Zhang, Kong, Zhang, Guanjie, Chen, Xinghong, Sang, Yungang, and Mao, Yifei
- Subjects
- *
OPTICAL radar , *BEAM steering , *LIDAR , *TUNABLE lasers , *PHASE change materials - Abstract
Dynamic tuning of light properties by external stimuli is at the core of various applications such as electro‐optical modulators, beam steering, and spatial light modulators. The conventional mechanism involves fine‐tuning the eigenmode of an optical system through adjusting the effective refractive index. However, the weak nonlinearity results in low modulation efficiency, leading to devices with poor performance and large size. Polarization topological charge is a significant concept that has facilitated the development of innovative optical devices like low‐threshold lasers and vortex generators. However, the devices reported so far are static in nature. Here, a method for dynamically controlling light by actively manipulating the evolution of topological charges in momentum space is presented for the first time. By switching between integer and half‐integer states of topological charges, the device's radiation properties undergo a significant transformation. The beam direction can be tuned up to 160°, which, to the best of the authors' knowledge, is the largest tuning angle among similar beam steering devices. Furthermore, the device demonstrates high radiation efficiency while maintaining a compact device size. This light controlling method can be applied in various fields, including optical communication, tunable lasers, and light detection and ranging. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Sensitive Filter‐Free Narrowband Infrared Photodetectors for Weak Light Detection and Ranging.
- Author
-
He, Yin, Han, Zeyao, Li, Junyu, Gu, Yu, Zou, Yousheng, Wang, Yao, Yang, Jialin, Lei, Wei, and Xu, Xiaobao
- Subjects
- *
LIDAR , *OPTICAL radar , *REMOTE sensing , *LIGHT filters , *DETECTORS , *PHOTODETECTORS - Abstract
Light detection and ranging (Lidar), which utilizes scattered near‐infrared (NIR) light to identify objects, has shown great potential in remote sensing, autonomous driving, robotic vision, etc. However, the intensity of the scattered NIR light is significantly reduced, placing stringent demands on the selectivity and response speed of the sensor. Even though optical filters can be used to select weak signals, the photon losses arising from additional interfaces are inevitable, thus reducing the signal‐to‐noise ratio (SNR). In this work, a series of organic narrowband photodetectors are constructed with tunable and selective responses from 700 to 950 nm with a minimum full‐width‐at‐half‐maximum (FWHM) of ≈30 nm. To enhance response sensitivity and speed, a delicate and compact photodiode architecture is applied through comprehensive device engineering. As a result, the detectors possess a sensitive response to weak NIR light with a detection limit of 3 nW, and response rise time (τr) and decay time (τd) of ≈24 ns and ≈1.3 µs, respectively. Moreover, a proof‐of‐concept Lidar based on these organic photodetectors is demonstrated with a distance resolution of ≈10 cm, which further confirms the great potential in practical application. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. The coordinated impact of forest internal structural complexity and tree species diversity on forest productivity across forest biomes.
- Author
-
Qin Ma, Yanjun Su, Tianyu Hu, Lin Jiang, Xiangcheng Mi, Luxiang Lin, Min Cao, Xugao Wang, Zhenhua Sun, Jin Wu, Keping Ma, and Qinghua Guo
- Subjects
- *
CARBON sequestration in forests , *OPTICAL radar , *LIDAR , *STRUCTURAL frame models , *FOREST biodiversity , *FOREST productivity - Abstract
Forest structural complexity can mediate the light and water distribution within forest canopies, and has a direct impact on forest biodiversity and carbon storage capability. It is believed that increases in forest structural complexity can enhance tree species diversity and forest productivity, but inconsistent relationships among them have been reported. Here, we quantified forest structural complexity in three aspects (i.e., horizontal, vertical, and internal structural complexity) from unmanned aerial vehicle light detection and ranging data, and investigated their correlations with tree species diversity and forest productivity by incorporating field measurements in three forest biomes with large latitude gradients in China. Our results show that internal structural complexity had a stronger correlation (correlation coefficient = 0.85) with tree species richness than horizontal structural complexity (correlation coefficient = -0.16) and vertical structural complexity (correlation coefficient = 0.61), and it was the only forest structural complexity attribute having significant correlations with both tree species richness and tree species evenness. A strong scale effect was observed in the correlations among forest structural complexity, tree species diversity, and forest productivity. Moreover, forest internal structural complexity had a tight positive coordinated contribution with tree species diversity to forest productivity through structural equation model analysis, while horizontal and vertical structural complexity attributes had insignificant or weaker coordinated effects than internal structural complexity, which indicated that the neglect of forest internal structural complexity might partially lead to the current inconsistent observations among forest structural complexity, tree species diversity, and forest productivity.
The results of this study can provide a new angle to understand the observed inconsistent correlations among forest structural complexity, tree species diversity, and forest productivity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Comparability of radar and optical methods in identifying surface water in a semi‐arid protected area.
- Author
-
Dzinotizei, Zorodzai, Ndagurwa, Hilton G. T., Ndaimani, Henry, and Chichinye, Angella
- Subjects
- *
OPTICAL radar , *SPECTRAL reflectance , *BODIES of water , *REMOTE sensing , *OPTICAL images - Abstract
Surface water assumes a pivotal role in sustaining a wide range of wildlife species in semi‐arid protected areas. However, differences in surface water body typology, underlying soil type, wildlife activity, and the presence of phytoplankton, amongst other factors, result in high variability of surface water spectral reflectance and detection accuracy. In this study, the performance of radar and optical methods was evaluated in detecting surface water of variable spectral reflectance in Hwange National Park, Zimbabwe, using Sentinel‐1 radar and Sentinel‐2 optical images for the period 2016–2023. Results demonstrated that, compared with optical methods, radar methods had low and highly variable surface water detection accuracy, as shown by overall accuracy and kappa statistic measures that changed continuously over time. The overall best‐performing method was the optical AWEInsh (sharpened), which showed high surface water detection accuracy and consistency (OA: 94%–100%; κ: 0.88–1.00) from 2016 to 2023. Therefore, optical methods present a stable and robust way for surface water monitoring in heterogeneous semi‐arid protected areas. However, radar‐based methods should be continually explored where optical‐based technologies are impeded as a result of vegetation cover and cloud conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
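The best-performing index above, AWEInsh, is conventionally computed in its no-shadow formulation as AWEInsh = 4(ρ_green − ρ_SWIR1) − (0.25 ρ_NIR + 2.75 ρ_SWIR2), with positive values flagging water; the "sharpened" variant in the study applies it to sharpened bands. A sketch with illustrative reflectance values (the example values are ours, not the study's data):

```python
def awei_nsh(green, nir, swir1, swir2):
    """Automated Water Extraction Index, no-shadow variant; in the commonly
    used formulation, positive values indicate water."""
    return 4.0 * (green - swir1) - (0.25 * nir + 2.75 * swir2)

# illustrative surface reflectances (unitless, 0-1)
water = awei_nsh(green=0.06, nir=0.03, swir1=0.01, swir2=0.01)
soil  = awei_nsh(green=0.10, nir=0.25, swir1=0.30, swir2=0.25)
print(water > 0, soil < 0)  # water scores positive, dry soil negative
```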
50. Characterisation of the Atmosphere in Very High Energy Gamma-Astronomy for Imaging Atmospheric Cherenkov Telescopes.
- Author
-
Dominis Prester, Dijana, Ebr, Jan, Gaug, Markus, Hahn, Alexander, Babić, Ana, Eliášek, Jiří, Janeček, Petr, Karpov, Sergey, Kolarek, Marta, Manganaro, Marina, and Mirzoyan, Razmik
- Subjects
- *
CHERENKOV radiation , *OPTICAL radar , *LIDAR , *LIGHT transmission , *GAMMA rays - Abstract
Ground-based observations of Very High Energy (VHE) gamma rays from extreme astrophysical sources are significantly influenced by atmospheric conditions. This is because the atmosphere is an integral part of the detector when utilizing Imaging Atmospheric Cherenkov Telescopes (IACTs). Clouds and dust particles diminish the atmospheric transmission of Cherenkov light, thereby impacting the reconstruction of the air showers and consequently the reconstructed gamma-ray spectra. Precise measurements of atmospheric transmission above Cherenkov observatories play a pivotal role in the accuracy of the analysed data, including corrections to the reconstructed energies and fluxes of incoming gamma rays, and in establishing observation strategies for different types of gamma-ray emitting sources. The Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescopes and the Cherenkov Telescope Array Observatory (CTAO), both located on the Observatorio del Roque de los Muchachos (ORM), La Palma, Canary Islands, use different sets of auxiliary instruments for real-time characterisation of the atmosphere. In this paper, historical data taken by the MAGIC LIDAR (LIght Detection And Ranging) and the CTAO FRAM (F/(Ph)otometric Robotic Atmospheric Monitor) are presented. From the atmospheric aerosol transmission profiles measured by the MAGIC LIDAR and CTAO FRAM aerosol optical depth maps, we obtain the characterisation of the clouds above the ORM at La Palma needed for data correction and optimal observation scheduling. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
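The aerosol transmission profiles and optical depth maps discussed above relate through the Beer–Lambert law, T = exp(−τ·X), where τ is the vertical optical depth and X ≈ 1/cos(zenith) is the air mass in a plane-parallel approximation. A minimal sketch with illustrative τ values (not the instruments' actual data):

```python
import math

def transmission(tau_vertical, zenith_deg):
    """Atmospheric transmission for a given vertical optical depth,
    scaled by the air-mass factor 1/cos(zenith) (plane-parallel approx.)."""
    airmass = 1.0 / math.cos(math.radians(zenith_deg))
    return math.exp(-tau_vertical * airmass)

# illustrative clear night (tau ~ 0.05) vs. dusty night (tau ~ 0.5) at 30 deg
print(round(transmission(0.05, 30.0), 3))  # 0.944
print(round(transmission(0.5, 30.0), 3))   # 0.561
```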