286 results for '"lidar sensor"'
Search Results
2. Deep Unrolled Weighted Graph Laplacian Regularization for Depth Completion.
- Author
-
Zeng, Jin, Zhu, Qingpeng, Tian, Tongxuan, Sun, Wenxiu, Zhang, Lin, and Zhao, Shengjie
- Subjects
- *ARTIFICIAL neural networks, *DEPTH maps (Digital image processing), *WEIGHTED graphs, *BATHYMETRY, *LIDAR
- Abstract
Depth completion aims to estimate dense depth images from sparse depth measurements with RGB image guidance. However, previous approaches have not fully considered sparse input fidelity, resulting in inconsistency with sparse input and poor robustness to input corruption. In this paper, we propose the deep unrolled Weighted Graph Laplacian Regularization (WGLR) for depth completion which enhances input fidelity and noise robustness by enforcing input constraints in the network design. Specifically, we assume graph Laplacian regularization as the prior for depth completion optimization and derive the WGLR solution by interpreting the depth map as the discrete counterpart of continuous manifold, enabling analysis in continuous domain and enforcing input consistency. Based on its anisotropic diffusion interpretation, we unroll the WGLR solution into iterative filtering for efficient implementation. Furthermore, we integrate the unrolled WGLR into deep learning framework to develop high-performance yet interpretable network, which diffuses the depth in a hierarchical manner to ensure global smoothness while preserving visually salient details. Experimental results demonstrate that the proposed scheme improves consistency with depth measurements and robustness to input corruption for depth completion, outperforming competing schemes on the NYUv2, KITTI-DC and TetrasRGBD datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
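The unrolled-diffusion idea in the abstract above (iterative filtering that re-imposes the sparse measurements each step so the output stays consistent with the input) can be sketched in NumPy. This is a toy isotropic version for intuition only, not the authors' weighted graph Laplacian network; `diffuse_depth` and its parameters are invented for illustration.

```python
import numpy as np

def diffuse_depth(sparse, mask, iters=50):
    """Fill a sparse depth map by repeated neighbor averaging, resetting
    the measured pixels after every step (hard input-consistency)."""
    d = sparse.copy()
    filled = mask.copy()
    for _ in range(iters):
        # average over the 4-neighborhood, counting only already-filled pixels
        acc = np.zeros_like(d)
        cnt = np.zeros_like(d)
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            acc += np.roll(d * filled, shift, axis=axis)
            cnt += np.roll(filled.astype(float), shift, axis=axis)
        d = np.divide(acc, cnt, out=d.copy(), where=cnt > 0)
        filled = filled | (cnt > 0)
        d[mask] = sparse[mask]   # enforce consistency with the sparse input
    return d

# tiny demo: a single known depth propagates outward over the grid
sparse = np.zeros((5, 5))
sparse[2, 2] = 2.0
mask = sparse > 0
dense = diffuse_depth(sparse, mask)
```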
3. Geometric Feature Characterization of Apple Trees from 3D LiDAR Point Cloud Data.
- Author
-
Karim, Md Rejaul, Ahmed, Shahriar, Reza, Md Nasim, Lee, Kyu-Ho, Sung, Joonjea, and Chung, Sun-Ok
- Subjects
- OPTICAL radar, OBJECT recognition (Computer vision), LIDAR, STANDARD deviations, ORCHARD management, PYTHON programming language
- Abstract
The geometric feature characterization of fruit trees plays a role in effective management in orchards. LiDAR (light detection and ranging) technology for object detection enables the rapid and precise evaluation of geometric features. This study aimed to quantify the height, canopy volume, tree spacing, and row spacing in an apple orchard using a three-dimensional (3D) LiDAR sensor. A LiDAR sensor was used to collect 3D point cloud data from the apple orchard. Six samples of apple trees, representing a variety of shapes and sizes, were selected for data collection and validation. Commercial software and the Python programming language were utilized to process the collected data. The data processing steps involved data conversion, radius outlier removal, voxel grid downsampling, denoising through filtering of erroneous points, segmentation of the region of interest (ROI), clustering using the density-based spatial clustering (DBSCAN) algorithm, data transformation, and the removal of ground points. Accuracy was assessed by comparing the estimated outputs from the point cloud with the corresponding measured values. The sensor-estimated and measured tree heights were 3.05 ± 0.34 m and 3.13 ± 0.33 m, respectively, with a mean absolute error (MAE) of 0.08 m, a root mean squared error (RMSE) of 0.09 m, a linear coefficient of determination (r2) of 0.98, a confidence interval (CI) of −0.14 to −0.02 m, and a high concordance correlation coefficient (CCC) of 0.96, indicating strong agreement and high accuracy. The sensor-estimated and measured canopy volumes were 13.76 ± 2.46 m3 and 14.09 ± 2.10 m3, respectively, with an MAE of 0.57 m3, an RMSE of 0.61 m3, an r2 value of 0.97, and a CI of −0.92 to 0.26, demonstrating high precision. For tree and row spacing, the sensor-estimated distances and measured distances were 3.04 ± 0.17 and 3.18 ± 0.24 m, and 3.35 ± 0.08 and 3.40 ± 0.05 m, respectively, with RMSE and r2 values of 0.12 m and 0.92 for tree spacing, and 0.07 m and 0.94 for row spacing, respectively. The MAE and CI values were 0.09 m, 0.05 m, and −0.18 for tree spacing and 0.01, −0.1, and 0.002 for row spacing, respectively. Although minor differences were observed, the sensor estimates were efficient, though specific measurements require further refinement. The results are based on a limited dataset of six measured values, providing initial insights into geometric feature characterization performance. However, a larger dataset would offer a more reliable accuracy assessment. The small sample size (six apple trees) limits the generalizability of the findings and necessitates caution in interpreting the results. Future studies should incorporate a broader and more diverse dataset to validate and refine the characterization, enhancing management practices in apple orchards. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
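Two of the preprocessing steps listed in this abstract, radius outlier removal and voxel grid downsampling, can be sketched in plain NumPy. The function names and thresholds below are illustrative, not the study's actual pipeline (which used commercial software alongside Python).

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Keep one centroid per occupied voxel (cubic cell of side `voxel`)."""
    keys = np.floor(points / voxel).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = np.asarray(inv).reshape(-1)
    cnt = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, 3))
    for d in range(3):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / cnt
    return out

def radius_outlier_removal(points, radius=0.2, min_neighbors=2):
    """Drop points with fewer than `min_neighbors` other points within `radius`."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    neighbors = (d2 <= radius ** 2).sum(1) - 1   # exclude the point itself
    return points[neighbors >= min_neighbors]

pts = np.array([[0, 0, 0], [0.01, 0, 0], [0.02, 0, 0], [5.0, 5.0, 5.0]], float)
clean = radius_outlier_removal(pts)          # the isolated point is removed
small = voxel_downsample(clean, voxel=0.1)   # three close points collapse to one
```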
4. Study on the Near-Distance Object-Following Performance of a 4WD Crop Transport Robot: Application of 2D LiDAR and Particle Filter.
- Author
-
Pak, Eun-Seong, Kim, Byeong-Hun, Lee, Kil-Soo, Cha, Yong-Chul, and Kim, Hwa-Young
- Subjects
- ROBOT design & construction, LIDAR, AGRICULTURE, SYSTEMS design, DETECTORS
- Abstract
In this paper, the development and performance evaluation of a 4WD robot system designed to follow near-distance moving objects using a 2D LiDAR sensor are presented. The study incorporates identifier (ID) classification and a distance-based dynamic angle of perception model to enhance the tracking capabilities of the 2D LiDAR sensor. A particle filter algorithm was utilized to verify the accuracy of object tracking. Furthermore, a proportional–derivative (PD) controller was designed and implemented to ensure the stability of the robot during operation. The experimental results demonstrate the potential applicability of these approaches in various industrial applications. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
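The particle filter used above for verifying object tracking follows a standard predict-update-resample cycle. The sketch below is a generic 2D position tracker in NumPy, not the paper's implementation; the noise parameters and measurement model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=0.05, meas_std=0.1):
    """One predict-update-resample cycle for tracking a 2D target position."""
    # predict: diffuse particles with random-walk motion noise
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # update: weight by Gaussian likelihood of the observed position fix
    d2 = ((particles - measurement) ** 2).sum(axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # resample: multinomial resampling to fight weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(-1, 1, (500, 2))
weights = np.full(500, 1.0 / 500)
for z in [np.array([0.40, 0.40]), np.array([0.45, 0.42]), np.array([0.50, 0.45])]:
    particles, weights = particle_filter_step(particles, weights, z)
estimate = particles.mean(axis=0)   # posterior mean tracks the moving target
```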
5. Application of LiDAR Sensors for Crop and Working Environment Recognition in Agriculture: A Review.
- Author
-
Karim, Md Rejaul, Reza, Md Nasim, Jin, Hongbin, Haque, Md Asrakul, Lee, Kyu-Ho, Sung, Joonjea, and Chung, Sun-Ok
- Subjects
- *AGRICULTURAL technology, *OBJECT recognition (Computer vision), *PLANT identification, *CROPS, *AGRICULTURAL equipment
- Abstract
LiDAR sensors have great potential for enabling crop recognition (e.g., plant height, canopy area, plant spacing, and intra-row spacing measurements) and the recognition of agricultural working environments (e.g., field boundaries, ridges, and obstacles) using agricultural field machinery. The objective of this study was to review the use of LiDAR sensors in the agricultural field for the recognition of crops and agricultural working environments. This study also highlights LiDAR sensor testing procedures, focusing on critical parameters, industry standards, and accuracy benchmarks; it evaluates the specifications of various commercially available LiDAR sensors with applications for plant feature characterization and highlights the importance of mounting LiDAR technology on agricultural machinery for effective recognition of crops and working environments. Different studies have shown promising results of crop feature characterization using an airborne LiDAR, such as coefficient of determination (R2) and root-mean-square error (RMSE) values of 0.97 and 0.05 m for wheat, 0.88 and 5.2 cm for sugar beet, and 0.50 and 12 cm for potato plant height estimation, respectively. A relative error of 11.83% was observed between sensor and manual measurements, with the highest distribution correlation at 0.675 and an average relative error of 5.14% during soybean canopy estimation using LiDAR. An object detection accuracy of 100% was found for plant identification using three LiDAR scanning methods: center of the cluster, lowest point, and stem–ground intersection. LiDAR was also shown to effectively detect ridges, field boundaries, and obstacles, which is necessary for precision agriculture and autonomous agricultural machinery navigation. 
Future directions for LiDAR applications in agriculture emphasize the need for continuous advancements in sensor technology, along with the integration of complementary systems and algorithms, such as machine learning, to improve performance and accuracy in agricultural field applications. A strategic framework for implementing LiDAR technology in agriculture includes recommendations for precise testing, solutions for current limitations, and guidance on integrating LiDAR with other technologies to enhance digital agriculture. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
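The review above compares studies through R2 and RMSE values. For reference, those two metrics are straightforward to compute; the plant-height readings below are hypothetical numbers for illustration.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error, in the units of the measurement."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 minus residual over total variance."""
    ss_res = float(np.sum((y_true - y_pred) ** 2))
    ss_tot = float(np.sum((y_true - np.mean(y_true)) ** 2))
    return 1.0 - ss_res / ss_tot

# hypothetical plant heights (m): field measurement vs. LiDAR estimate
measured = np.array([3.1, 2.8, 3.4, 3.0])
estimated = np.array([3.0, 2.9, 3.3, 3.1])
```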
6. A LiDAR-Based Backfill Monitoring System.
- Author
-
Xu, Xingliang, Huang, Pengli, He, Zhengxiang, Zhao, Ziyu, and Bi, Lin
- Subjects
- MINES & mineral resources, POINT cloud, SURFACE reconstruction, AEROSPACE planes, TUNNELS
- Abstract
A backfill system in underground mines supports the walls and roofs of mined-out areas and improves the structural integrity of mines. However, there has been a significant gap in the visualization and monitoring of the backfill progress. To better observe the process of the paste backfill material filling the tunnels, a LiDAR-based backfill monitoring system is proposed. As long as the rising top surface of the backfill material enters the LiDAR range, the proposed system can compute the plane coefficient of this surface. The intersection boundary of the tunnel and the backfill material can be obtained by substituting the plane coefficient into the space where the initial tunnel is located. A surface point generation and slurry point determination algorithm are proposed to obtain the point cloud of the backfill body based on the intersection boundary. After Poisson surface reconstruction and volume computation, the point cloud model is reconstructed into a 3D mesh, and the backfill progress is digitized as the ratio of the backfill body volume to the initial tunnel volume. The volumes of the meshes are compared with the results computed by two other algorithms; the error is less than 1%. The time to compute a set of data increases with the amount of data, ranging from 8 to 20 s, which is sufficient to update a set of data with a tiny increase in progress. As the digitized results update, the visualization progress is transmitted to the mining control center, allowing unexpected problems inside the tunnel to be monitored and addressed based on the messages provided by the proposed system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
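Two pieces of the pipeline above are easy to illustrate: fitting the plane coefficient of the rising top surface, and digitizing progress as a volume ratio. This is a minimal least-squares sketch, not the paper's algorithm; the function names are invented.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through a point cloud slice,
    e.g. the top surface of the backfill material inside the LiDAR range."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coef  # (a, b, c)

def backfill_progress(filled_volume, tunnel_volume):
    """Progress digitized as the ratio of backfill volume to tunnel volume."""
    return filled_volume / tunnel_volume

# noiseless horizontal surface at z = 2.5 should yield plane (0, 0, 2.5)
pts = np.array([[0, 0, 2.5], [1, 0, 2.5], [0, 1, 2.5], [1, 1, 2.5]], float)
a, b, c = fit_plane(pts)
```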
7. Detecting and tracking a road-drivable area with three-dimensional point clouds and IoT for autonomous applications.
- Author
-
Enad, Mahmood H., Bashi, Omar I. Dallal, Jameel, Shymaa Mohammed, Alhasoon, Asaad A., Al Kubaisi, Yasir Mahmood, and Hameed, Husamuldeen K.
- Abstract
This work presents a Light Detection and Ranging (LiDAR)-based point cloud method for detecting and tracking road edges. Initially, this work explores the progress in detecting road curb issues. A dataset (called PandaSet) with a Pandar64 sensor to capture different city scenes is used. LiDAR point cloud, as part of an IoT ecosystem, detects the road curb and requires distinguishing the right and left road curbs with regard to the ego car. The curb point's features use Random Sample Consensus (RANSAC)-based polynomial quadratic approximation to obtain the prospect curb points to eliminate false positive ones. Through extensive experiments, we demonstrate the effectiveness and reliability of our method under various traffic and environmental conditions. Our results showcase a maximum drift of 1.62 m for left curb points and 0.87 m for right curb points, highlighting the superior accuracy and stability of our approach. This LiDAR-based curb detection framework paves the way for enhanced lane recognition and path planning in autonomous driving applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
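The RANSAC-based quadratic approximation of curb points mentioned above can be sketched as follows: repeatedly fit y = ax^2 + bx + c to three random samples and keep the model with the most inliers, which rejects false positive curb points. The data and thresholds are illustrative, not drawn from PandaSet.

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_quadratic(x, y, iters=200, tol=0.1):
    """RANSAC fit of a quadratic: sample 3 points, fit, keep the model with
    the largest inlier set, then refit on those inliers."""
    best_inliers = np.zeros(len(x), bool)
    for _ in range(iters):
        idx = rng.choice(len(x), 3, replace=False)
        coef = np.polyfit(x[idx], y[idx], 2)
        inliers = np.abs(np.polyval(coef, x) - y) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return np.polyfit(x[best_inliers], y[best_inliers], 2), best_inliers

x = np.linspace(0, 10, 50)
y = 0.02 * x ** 2 + 0.1 * x + 1.0   # a smooth synthetic curb line
y[5] += 3.0                          # one false positive curb point
coef, inliers = ransac_quadratic(x, y)
```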
8. MAPPING AND COMPARING THE QUALITY OF 2D AND 3D MAPS CREATED BY A MOBILE ROBOT.
- Author
-
Lưu Trọng Hiếu and Nguyễn Hữu Cường
- Abstract
The study introduces a mapping method and compares the quality of 2D and 3D maps created by a mobile robot in an indoor environment. A mobile robot integrating an embedded computer, lidar sensors, and an RGBD camera was developed in this study. The SLAM process is conducted through ROS2, which helps the controller observe the robot's working status. The experiment was conducted on two separate maps with different properties, including hidden corners and open spaces. The results showed differences in both experiments: map creation using lidar was highly accurate, with an accuracy rate of 97%. For the 3D maps, the study shows a large difference between straight-line and circular motion, with accuracy rates of 89% and 64%, respectively. The results of this study support more effective use of sensors for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
9. Driving Assistance System with Obstacle Avoidance for Electric Wheelchairs.
- Author
-
Erturk, Esranur, Kim, Soonkyum, and Lee, Dongyoung
- Subjects
- *ELECTRIC wheelchairs, *ASSISTIVE technology, *TRAFFIC safety, *VISUAL fields, *WHEELCHAIRS, *LIDAR
- Abstract
A system has been developed to convert manual wheelchairs into electric wheelchairs, providing assistance to users through the implemented algorithm, which ensures safe driving and obstacle avoidance. While manual wheelchairs are typically controlled indoors based on user preferences, they do not guarantee safe driving in areas outside the user's field of vision. The proposed model utilizes the dynamic window approach specifically designed for wheelchair use, allowing for obstacle avoidance. This method evaluates potential movements within a defined velocity space to calculate the optimal path, providing seamless and safe driving assistance in real time. This innovative approach enhances user assistance and safety by integrating state-of-the-art algorithms developed using the dynamic window approach alongside advanced sensor technology. With the assistance of LiDAR sensors, the system perceives the wheelchair's surroundings, generating real-time speed values within the algorithm framework to ensure secure driving. The model's ability to adapt to indoor environments and its robust performance in real-world scenarios underscore its potential for widespread application. This study has undergone various tests, conclusively proving that the system aids users in avoiding obstacles and ensures safe driving. These tests demonstrate significant improvements in maneuverability and user safety, highlighting a noteworthy advancement in assistive technology for individuals with limited mobility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
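The dynamic window approach described above evaluates candidate commands in a velocity space and scores them for goal progress and clearance. Below is a deliberately simplified sketch (grid search over linear and angular velocity, one-step lookahead); the scoring weights and admissibility threshold are assumptions, not the paper's values.

```python
import numpy as np

def dwa_choose_velocity(pos, heading, goal, obstacles,
                        v_range=(0.0, 1.0), w_range=(-1.0, 1.0), dt=1.0):
    """Score a grid of (v, w) commands by goal progress and obstacle
    clearance after dt seconds; return the best admissible pair."""
    best, best_score = (0.0, 0.0), -np.inf
    for v in np.linspace(*v_range, 11):
        for w in np.linspace(*w_range, 11):
            h = heading + w * dt
            nxt = pos + v * dt * np.array([np.cos(h), np.sin(h)])
            clearance = min((np.linalg.norm(nxt - o) for o in obstacles),
                            default=np.inf)
            if clearance < 0.3:      # inadmissible: too close to an obstacle
                continue
            score = -np.linalg.norm(nxt - goal) + 0.2 * min(clearance, 1.0)
            if score > best_score:
                best, best_score = (float(v), float(w)), score
    return best

# an obstacle sits straight ahead, so the best command steers around it
v, w = dwa_choose_velocity(pos=np.array([0.0, 0.0]), heading=0.0,
                           goal=np.array([2.0, 0.0]),
                           obstacles=[np.array([1.0, 0.0])])
```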
10. Enhancing Deep Learning-Based Segmentation Accuracy through Intensity Rendering and 3D Point Interpolation Techniques to Mitigate Sensor Variability.
- Author
-
Kim, Myeong-Jun, Kim, Suyeon, Lee, Banghyon, and Kim, Jungha
- Subjects
- *INTERPOLATION, *DETECTORS, *DEEP learning, *OBJECT recognition (Computer vision), *LIDAR
- Abstract
In the context of LiDAR sensor-based autonomous vehicles, segmentation networks play a crucial role in accurately identifying and classifying objects. However, discrepancies between the types of LiDAR sensors used for training the network and those deployed in real-world driving environments can lead to performance degradation due to differences in the input tensor attributes, such as x, y, and z coordinates, and intensity. To address this issue, we propose novel intensity rendering and data interpolation techniques. Our study evaluates the effectiveness of these methods by applying them to object tracking in real-world scenarios. The proposed solutions aim to harmonize the differences between sensor data, thereby enhancing the performance and reliability of deep learning networks for autonomous vehicle perception systems. Additionally, our algorithms prevent performance degradation, even when different types of sensors are used for the training data and real-world applications. This approach allows for the use of publicly available open datasets without the need to spend extensive time on dataset construction and annotation using the actual sensors deployed, thus significantly saving time and resources. When applying the proposed methods, we observed an approximate 20% improvement in mIoU performance compared to scenarios without these enhancements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
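The idea of harmonizing sensors by rendering one sensor's intensity statistics into another's, and densifying sparser beam patterns by interpolation, can be sketched very simply. These two helpers are illustrative stand-ins, not the paper's rendering and interpolation techniques.

```python
import numpy as np

def normalize_intensity(intensity, target_mean, target_std):
    """Map one sensor's intensity channel onto another sensor's first- and
    second-order statistics so training and deployment inputs match."""
    z = (intensity - intensity.mean()) / (intensity.std() + 1e-8)
    return z * target_std + target_mean

def interpolate_rings(ring_a, ring_b):
    """Insert a synthetic beam ring midway between two real rings, raising a
    sparse sensor's point density toward the training sensor's."""
    return (ring_a + ring_b) / 2.0

src = np.array([10.0, 20.0, 30.0])
out = normalize_intensity(src, target_mean=0.5, target_std=0.1)
```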
11. Integrating LiDAR Sensor Data into Microsimulation Model Calibration for Proactive Safety Analysis.
- Author
-
Igene, Morris, Luo, Qiyang, Jimee, Keshav, Soltanirad, Mohammad, Bataineh, Tamer, and Liu, Hongchao
- Subjects
- *LIDAR, *OPTICAL radar, *REAL-time computing, *MICROSIMULATION modeling (Statistics), *SIGNALIZED intersections, *DATA modeling, *ROAD interchanges & intersections, *PEDESTRIANS
- Abstract
Studies have shown that vehicle trajectory data are effective for calibrating microsimulation models. Light Detection and Ranging (LiDAR) technology offers high-resolution 3D data, allowing for detailed mapping of the surrounding environment, including road geometry, roadside infrastructures, and moving objects such as vehicles, cyclists, and pedestrians. Unlike other traditional methods of trajectory data collection, LiDAR's high-speed data processing, fine angular resolution, high measurement accuracy, and high performance in adverse weather and low-light conditions make it well suited for applications requiring real-time response, such as autonomous vehicles. This research presents a comprehensive framework for integrating LiDAR sensor data into simulation models and their accurate calibration strategies for proactive safety analysis. Vehicle trajectory data were extracted from LiDAR point clouds collected at six urban signalized intersections in Lubbock, Texas, in the USA. Each study intersection was modeled with PTV VISSIM and calibrated to replicate the observed field scenarios. The Directed Brute Force method was used to calibrate two car-following and two lane-change parameters of the Wiedemann 1999 model in VISSIM, resulting in an average accuracy of 92.7%. Rear-end conflicts extracted from the calibrated models combined with a ten-year historical crash dataset were fitted into a Negative Binomial (NB) model to estimate the model's parameters. In all the six intersections, rear-end conflict count is a statistically significant predictor (p-value < 0.05) of observed rear-end crash frequency. The outcome of this study provides a framework for the combined use of LiDAR-based vehicle trajectory data, microsimulation, and surrogate safety assessment tools to transportation professionals. 
This integration allows for more accurate and proactive safety evaluations, which are essential for designing safer transportation systems, effective traffic control strategies, and predicting future congestion problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
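Rear-end conflicts like those extracted above are typically flagged with a surrogate measure such as time-to-collision (TTC). The sketch below shows the basic TTC computation and a threshold-based conflict count; the threshold and sample values are illustrative, not from the Lubbock dataset.

```python
def rear_end_ttc(lead_pos, lead_speed, follow_pos, follow_speed):
    """Time-to-collision for a follower closing on a lead vehicle in the
    same lane; infinite when the gap is not closing."""
    gap = lead_pos - follow_pos
    closing = follow_speed - lead_speed
    return gap / closing if closing > 0 else float("inf")

def count_conflicts(samples, threshold=1.5):
    """Surrogate-safety count: observations with TTC below the threshold."""
    return sum(1 for s in samples if rear_end_ttc(*s) < threshold)

# (lead_pos, lead_speed, follow_pos, follow_speed) per observation
obs = [(30.0, 10.0, 10.0, 15.0),   # 20 m gap closing at 5 m/s -> TTC 4 s
       (12.0, 8.0, 10.0, 12.0),    # 2 m gap closing at 4 m/s  -> TTC 0.5 s
       (50.0, 14.0, 10.0, 12.0)]   # not closing -> TTC inf
```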
12. Analysis of the usability of daily LiDAR measurements performed using Apple devices.
- Author
-
Porzuc, Rafał and Kopniak, Piotr
- Subjects
- MEASUREMENT errors, MEASURING instruments, LIDAR, EVERYDAY life, DETECTORS
- Abstract
Copyright of Journal of Computer Sciences Institute is the property of Lublin University of Technology and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
13. NSVDNet: Normalized Spatial-Variant Diffusion Network for Robust Image-Guided Depth Completion.
- Author
-
Zeng, Jin and Zhu, Qingpeng
- Subjects
- THREE-dimensional imaging, ARTIFICIAL neural networks, BATHYMETRY, SIGNAL processing
- Abstract
Depth images captured by low-cost three-dimensional (3D) cameras are subject to low spatial density, requiring depth completion to improve 3D imaging quality. Image-guided depth completion aims at predicting dense depth images from extremely sparse depth measurements captured by depth sensors with the guidance of aligned Red–Green–Blue (RGB) images. Recent approaches have achieved a remarkable improvement, but the performance will degrade severely due to the corruption in input sparse depth. To enhance robustness to input corruption, we propose a novel depth completion scheme based on a normalized spatial-variant diffusion network incorporating measurement uncertainty, which introduces the following contributions. First, we design a normalized spatial-variant diffusion (NSVD) scheme to apply spatially varying filters iteratively on the sparse depth conditioned on its certainty measure for excluding depth corruption in the diffusion. In addition, we integrate the NSVD module into the network design to enable end-to-end training of filter kernels and depth reliability, which further improves the structural detail preservation via the guidance of RGB semantic features. Furthermore, we apply the NSVD module hierarchically at multiple scales, which ensures global smoothness while preserving visually salient details. The experimental results validate the advantages of the proposed network over existing approaches with enhanced performance and noise robustness for depth completion in real-use scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. A Basic Optical Lidar-Based FPV Drone Sport Configuration
- Author
-
Dash, Saumik, Wahap, Ahmad Ridhwan, Abdullah, Fatin Aliah Phang, Wan Mohd Fuaad, Wan Mohd Farhan Bin, Omar, Syed Faris Syed, Pusppanathan, Jaysuman, Lovell, Nigel H., Advisory Editor, Oneto, Luca, Advisory Editor, Piotto, Stefano, Advisory Editor, Rossi, Federico, Advisory Editor, Samsonovich, Alexei V., Advisory Editor, Babiloni, Fabio, Advisory Editor, Liwo, Adam, Advisory Editor, Magjarevic, Ratko, Advisory Editor, Mohamed, Zulkifli, editor, Ngali, Mohd Zamani, editor, Sudin, Suhizaz, editor, Ibrahim, Mohamad Fauzi, editor, and Casson, Alexander, editor
- Published
- 2024
- Full Text
- View/download PDF
15. YOLOMM – You Only Look Once for Multi-modal Multi-tasking
- Author
-
Campos, Filipe, Cerqueira, Francisco Gonçalves, Cruz, Ricardo P. M., Cardoso, Jaime S., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Vasconcelos, Verónica, editor, Domingues, Inês, editor, and Paredes, Simão, editor
- Published
- 2024
- Full Text
- View/download PDF
16. Mapping System Autonomous Electric Vehicle Based on Lidar-Sensor Using Hector SLAM Algorithm
- Author
-
SUPRAPTO Bhakti Yudho, DWIJAYANTI Suci, and WIJAYA Patrick Kesuma
- Subjects
- autonomous electric vehicle, hector slam algorithm, lidar sensor, mapping, route, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Autonomous electric vehicles (EVs) need to recognize the surrounding environment through mapping. The mapping provides directions for driving in new locations and uncharted areas. However, few studies have discussed the mapping of unknown outdoor areas using light detection and ranging (LiDAR) with simultaneous localization and mapping (SLAM). LiDAR can reduce the limitations of GPS, which cannot track the current location, and it covers a limited area. Hence, this study used the Hector SLAM algorithm, which maps based on the data generated by LiDAR sensors. This study was conducted at the Universitas Sriwijaya using two routes: the Palembang and Inderalaya campuses. A comparison is made with the map on Google Maps to determine the accuracy of the algorithm. The route of the Palembang campus was divided into four points: A-B-C-D; route AB exhibits the highest accuracy of 85.7%. In contrast, the route of the Inderalaya campus was established by adding routes with buildings closer to the road. A marker point was allocated on the route: A-B-C-D-E; route CE exhibits the highest accuracy of 83.6%. Overall, this study shows that the Hector SLAM algorithm and LiDAR can be used to map the unknown environment of autonomous EVs.
- Published
- 2024
20. Refining ICESAT-2 ATL13 Altimetry Data for Improving Water Surface Elevation Accuracy on Rivers.
- Author
-
Chen, Yun, Liu, Qihang, Ticehurst, Catherine, Sarker, Chandrama, Karim, Fazlul, Penton, Dave, and Sengupta, Ashmita
- Subjects
- *ALTIMETRY, *ALTITUDES, *WATER levels, *HYDROLOGY, *PHOTONS
- Abstract
The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS) where satellite observations intersect solely with water. It is acknowledged that the ATL13 product has noise elevations of the adjacent land, resulting in biased high mean WSEs at VSs. Earlier studies have relied on human intervention or water masks to resolve this. Both approaches are unsatisfactory solutions for large river basins where the issue becomes pronounced due to many tributaries and meanders. There is no automated procedure to partition the truly representative water height from the totality of the along-track ICESAT-2 photon segments (portions of photon points along a beam) for increasing precision of the mean WSE at VSs. We have developed an automated approach called "auto-segmentation". The accuracy of our method was assessed by comparing the ATL13-derived WSEs with direct water level observations at 10 different gauging stations on 37 different dates along the Lower Murray River, Australia. The concordance between the two datasets is significantly high and without detectable bias. In addition, we evaluated the effects of four methods for calculating the mean WSEs at VSs after auto-segmentation processing. Our results reveal that all methods perform almost equally well, with the same R2 value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). We also found that the R2, RMSE and MAE are better under the high flow condition (0.999, 0.124 and 0.111 m) than those under the normal-low flow condition (0.997, 0.208 and 0.160 m). Overall, our auto-segmentation method is an effective and efficient approach for deriving accurate mean WSEs at river VSs. It will contribute to the improvement of ICESAT-2 ATL13 altimetry data utility on rivers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
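The auto-segmentation idea above, isolating the flat water surface from rougher land returns in an along-track height profile, can be sketched as below. This is an illustrative simplification, not the authors' algorithm; the tolerance and minimum-run-length thresholds are assumed values.

```python
import numpy as np

def auto_segment_water(heights, tol=0.15, min_len=5):
    """Return a boolean mask for the longest run of near-constant heights.

    A flat water surface shows little along-track height variation, while
    adjacent land is rougher, so the longest low-variation run is kept as
    water. `tol` (metres) and `min_len` are illustrative thresholds, not
    values from the paper.
    """
    heights = np.asarray(heights, dtype=float)
    best = (0, 0)  # (start, length) of the best run found so far
    start = 0
    for i in range(1, len(heights) + 1):
        # close the current run when the step exceeds the tolerance
        if i == len(heights) or abs(heights[i] - heights[i - 1]) > tol:
            if i - start >= max(min_len, best[1]):
                best = (start, i - start)
            start = i
    mask = np.zeros(len(heights), dtype=bool)
    mask[best[0]:best[0] + best[1]] = True
    return mask

# Noisy, higher land returns followed by a flat water surface:
h = [12.1, 11.4, 10.2, 5.02, 5.05, 5.01, 5.04, 5.03, 5.06, 9.8]
mask = auto_segment_water(h)
mean_wse = float(np.mean(np.asarray(h)[mask]))  # mean WSE over water only
```

The mean over the masked heights then plays the role of the mean WSE at the virtual station.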
21. ENHANCING PEDESTRIAN SAFETY BY PROVIDING A LIDAR-BASED ANALYSIS OF JAYWALKING CONFLICTS AT SIGNALIZED INTERSECTIONS.
- Author
-
Ansariyar, Alireza, Taherpour, Abolfazl, Di Yang, and Jeihani, Mansoureh
- Subjects
INFRASTRUCTURE (Economics) ,SIGNALIZED intersections ,POISSON regression ,REGRESSION analysis ,TRAFFIC signs & signals - Abstract
Motives: In response to the inherent vulnerability of pedestrians in urban settings, this paper is driven by a commitment to enhancing their mobility and safety. Recognizing the prevalence of jaywalking as a significant concern, the study seeks practical solutions through the application of LiDAR sensors at signalized intersections. By delving into the complexities of jaywalking events and their contributing factors, the research aims to provide valuable insights that extend beyond mere statistical analysis. The motivations behind this endeavour lie in the imperative to comprehensively understand and address the risks associated with jaywalking, ultimately fostering a safer environment for pedestrians navigating urban crossroads. Aim: The primary aim of this paper is to assess and analyse the diverse factors influencing the frequency of jaywalking at signalized intersections, leveraging the capabilities of LiDAR sensors for safety applications. Through a meticulous examination of 1000 jaywalking events detected over a six-month period, the study aims to pinpoint the independent variables most highly correlated with the frequency of jaywalking events. These variables include traffic signal controller patterns, signal phases, vehicle-pedestrian conflicts, weather conditions, vehicle volume, walking patterns toward the median, pedestrian volume, and the unique jaywalker's ratio. Employing advanced statistical regression models, the research seeks to identify optimal models and unravel key insights into the nuanced dynamics of jaywalking behaviour. The overarching goal is to equip decision-makers and transportation specialists with data-driven knowledge, enabling them to implement targeted safety measures that mitigate pedestrian risks and enhance safety infrastructure at critical urban crossroads.
Results: The outcomes of the study, derived from the optimal Poisson regression model, yield crucial insights into the multifaceted nature of jaywalking events at signalized intersections. The morning and mid-day signal controller patterns exhibit substantial decreases in jaywalking frequency of 44.7% and 34.4%, respectively, compared to the evening (PM) pattern, shedding light on temporal nuances in jaywalking behaviour. Additionally, the severity of vehicle-pedestrian conflicts escalates proportionally with the number of jaywalkers, emphasizing the importance of addressing pedestrian flow in mitigating potential conflicts. Notably, the presence of vegetation in the median emerges as a significant factor, markedly increasing the frequency of jaywalking. These results contribute to a nuanced understanding of the intricate interplay between environmental, temporal, and behavioural factors in jaywalking incidents. Decision-makers and transportation specialists can leverage these findings to formulate targeted safety interventions, fostering a safer pedestrian experience at crucial urban crossroads. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
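The Poisson regression underlying the optimal model above can be illustrated with a minimal numpy-only IRLS fit of a count model log(E[y]) = b0 + b1*x. The pedestrian-volume covariate and the coefficients (0.3, 0.5) are hypothetical, and the response is set to the exact Poisson mean so the recovery is deterministic; real data would use observed counts.

```python
import numpy as np

def fit_poisson(x, y, n_iter=50):
    """Fit log(E[y]) = b0 + b1*x by IRLS (a numpy-only Poisson GLM sketch)."""
    X = np.column_stack([np.ones(len(x)), x])
    # rough starting point: ordinary regression on log(y + 1)
    beta = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)[0]
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                    # current mean estimate
        z = X @ beta + (y - mu) / mu             # working response
        # weighted least squares step with Poisson working weights mu
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

# Hypothetical relationship: expected jaywalking count grows with
# pedestrian volume (in hundreds); y is the exact Poisson mean here,
# so the fit should recover the coefficients (0.3, 0.5).
x = np.linspace(0.0, 5.0, 50)
y = np.exp(0.3 + 0.5 * x)
b0, b1 = fit_poisson(x, y)
```

With observed (noisy) counts the same loop applies; statsmodels or scikit-learn would normally be used instead of hand-rolled IRLS.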
22. 3D Lidar Target Detection Method at the Edge for the Cloud Continuum.
- Author
-
Li, Xuemei, Liu, Xuelian, Xie, Da, and Chen, Chong
- Abstract
In the Internet of Things, machine learning at the edge of the cloud continuum is developing rapidly, providing more convenient services for developers. This paper proposes a lidar target detection method based on a scene density-awareness network for the cloud continuum. The density-awareness network architecture is designed, and the context column feature network is proposed. The BEV density attention feature network is designed by cascading the density feature map with the spatial attention mechanism, and is then connected with the BEV column feature network to generate the ablation BEV map. A multi-head detector is designed to regress the object center point, scale, and direction, and a loss function is used for active supervision. The experiment is conducted on Alibaba Cloud services. On the KITTI validation dataset, 3D and BEV objects are detected and evaluated for three object classes. The results show that most of the AP values of the proposed density-awareness model are higher than those of other methods, and the detection time is 0.09 s, which meets the accuracy and real-time requirements of vehicle-borne lidar target detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
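The density feature at the heart of the BEV density attention network above can be illustrated by rasterising a point cloud into a bird's-eye-view density map. The ranges and cell size are illustrative, and this shows only the input-feature step, not the paper's network.

```python
import numpy as np

def bev_density_map(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=1.0):
    """Rasterise a LiDAR point cloud into a bird's-eye-view density map.

    Each cell counts the points falling into it; this is the kind of
    density feature a density-aware BEV network could attend over
    (an illustrative sketch, not the paper's architecture).
    """
    pts = np.asarray(points, dtype=float)
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    ix = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)  # clip to the grid
    grid = np.zeros((nx, ny))
    np.add.at(grid, (ix[keep], iy[keep]), 1.0)  # unbuffered accumulation
    return grid

# Three points in one cell, one point elsewhere:
pts = [[10.2, 0.5, 0.1], [10.4, 0.6, 0.3], [10.9, 0.1, 0.2], [30.5, -5.5, 0.0]]
grid = bev_density_map(pts)
```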
23. Modeling of Motion Distortion Effect of Scanning LiDAR Sensors for Simulation-Based Testing
- Author
-
Arsalan Haider, Lukas Haas, Shotaro Koyama, Lukas Elster, Michael H. Kohler, Michael Schardt, Thomas Zeh, Hideo Inoue, Martin Jakobi, and Alexander W. Koch
- Subjects
LiDAR sensor ,motion distortion ,point cloud distortion ,false positive ,false negative ,open simulation interface ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Automated vehicles use light detection and ranging (LiDAR) sensors for environmental scanning. However, the relative motion between the scanning LiDAR sensor and objects leads to a distortion of the point cloud. This phenomenon is known as the motion distortion effect, which significantly degrades the sensor’s object detection capabilities and generates false negative or false positive errors. In this work, we introduce ray tracing-based deterministic and analytical approaches to model the motion distortion effect on the scanning LiDAR sensor’s performance for simulation-based testing. In addition, we performed dynamic test drives at a proving ground to compare real LiDAR data with the motion distortion effect simulation data. The real-world scenarios, the environmental conditions, the digital twin of the scenery, and the object of interest (OOI) are replicated in the virtual environment of commercial software to obtain the synthetic LiDAR data. The real and the virtual test drives are compared frame by frame to validate the motion distortion effect modeling. The mean absolute percentage error (MAPE), the occupied cell ratio (OCR), and Baron’s cross-correlation coefficient (BCC) are used to quantify the correlation between the virtual and the real LiDAR point cloud data. The results show that the deterministic approach matches the real measurements better than the analytical approach for scenarios in which the yaw rate of the ego vehicle changes rapidly.
- Published
- 2024
- Full Text
- View/download PDF
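A first-order sketch of the motion distortion effect described above: a rotating scanner measures each azimuth at a different instant, so a moving object's points are displaced by the object velocity times the per-point measurement time. The scan rate and geometry are assumed values; this is a simplified illustration, not the paper's ray-tracing model.

```python
import numpy as np

def distort(points, obj_velocity, scan_rate=10.0, fov=360.0):
    """Apply scanning-LiDAR motion distortion to an object's 2D points.

    A point at azimuth a (degrees) is measured t = (a / fov) / scan_rate
    seconds into the frame, so an object moving at `obj_velocity` (m/s)
    is displaced by v*t at that point. First-order sketch only; occlusion,
    beam divergence, and ego motion are ignored.
    """
    pts = np.asarray(points, dtype=float)
    az = np.degrees(np.arctan2(pts[:, 1], pts[:, 0])) % 360.0
    t = (az / fov) / scan_rate            # per-point measurement time [s]
    return pts + np.outer(t, obj_velocity)

# An object near 90 deg azimuth moving at 20 m/s along +x gets smeared:
pts = np.array([[0.0, 10.0], [0.5, 10.0]])
v = np.array([20.0, 0.0])
out = distort(pts, v)
```

The smear grows with object speed and with the per-frame scan time, which is why the effect matters most for fast relative motion.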
24. Radio Environment Map Construction Based on Privacy-Centric Federated Learning
- Author
-
Shafi Ullah Khan, Carla E. Garcia, Taewoong Hwang, and Insoo Koo
- Subjects
Radio environment map (REM) ,coverage prediction ,received signal strength indicator (RSSI) ,LiDAR sensor ,federated learning ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
In today’s digital age, coverage prediction is essential for optimizing wireless networks and improving user experience. While numerous path loss models and advanced machine learning algorithms have been developed to achieve high prediction performance, they predominantly operate within a centralized learning paradigm. Although effective, this conventional approach often suffers from scalability and privacy limitations that are critical to the successful deployment of wireless maps. In this paper, by contrast, we propose a novel decentralized approach based on a federated learning long short-term memory (LSTM) model to accurately predict network coverage in indoor environments. The proposed FedLSTM allows multiple users, or clients, to train the model without sharing their personal data directly with a central server. In an experimental setup, we used real data collected from numerous clients moving along different paths. The FedLSTM model is evaluated in terms of root mean square error (RMSE), mean absolute error (MAE), and R2. Compared to its centralized counterpart, FedLSTM shows a slight increase in RMSE from 2.4 dBm to 2.5 dBm and in MAE from 1.7 dBm to 1.9 dBm. In addition, we evaluate the proposed FedLSTM under variations in the number of participating clients and the number of local training epochs. The results show that even devices with limited computational power can meaningfully contribute to the training of the federated model, with fewer epochs achieving competitive results. Graphical analyses of the radio environment maps (REMs) generated by both FedLSTM and the centralized LSTM highlight their similarities. However, FedLSTM preserves client privacy while reducing communication overhead and server strain.
- Published
- 2024
- Full Text
- View/download PDF
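The federated learning scheme above can be illustrated with a generic federated averaging (FedAvg) step, in which clients upload only model weights and the server forms a size-weighted average. This is a standard FedAvg sketch, not the paper's exact FedLSTM training loop.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client model weights without sharing data.

    Each client trains locally (e.g., an LSTM on its own RSSI traces) and
    uploads only its weight tensors; the server returns the size-weighted
    average, layer by layer.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    frac = sizes / sizes.sum()                 # each client's data fraction
    return [
        sum(f * w[i] for f, w in zip(frac, client_weights))
        for i in range(len(client_weights[0]))
    ]

# Two clients, each with one weight matrix and one bias:
w_a = [np.array([[1.0, 2.0]]), np.array([0.0])]
w_b = [np.array([[3.0, 4.0]]), np.array([1.0])]
global_w = fed_avg([w_a, w_b], client_sizes=[100, 300])
```

Only the averaged global weights are redistributed to clients for the next local-training round, which is what keeps the raw measurements on-device.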
25. ContextNet: Leveraging Comprehensive Contextual Information for Enhanced 3D Object Detection
- Author
-
Caiyan Pei, Shuai Zhang, Lijun Cao, and Liqiang Zhao
- Subjects
Autonomous driving ,3D object detection ,LiDAR sensor ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
The progress in object detection for autonomous driving using LiDAR point cloud data has been remarkable. However, current voxel-based two-stage detectors have not fully capitalized on the wealth of contextual information present in the point cloud data. Typically, Voxel Feature Encoding (VFE) layers tend to focus exclusively on internal voxel information, neglecting the broader context. Additionally, the process of extracting 3D proposal features through Region of Interest (RoI) spatial quantization and pooling downsampling results in a loss of spatial detail within the proposed regions. This limitation in capturing contextual details presents challenges for accurate object detection and positioning, particularly over long distances. In this paper, we propose ContextNet, which leverages comprehensive contextual information for enhanced 3D object detection. Specifically, it comprises two modules: the Voxel Self-Attention Encoding module (VSAE) and the Joint Channel Self-Attention Re-weight module (JCSR). VSAE establishes dependencies between voxels through self-attention, expanding the receptive field and introducing substantial contextual information. JCSR employs joint attention to extract both local channel information and global context information from the raw point cloud within the RoI region. By integrating these two sets of information and re-weighting the point features, the 3D proposal is refined, enabling a more accurate estimation of the object’s position and confidence. Extensive experiments conducted on the KITTI and nuScenes datasets demonstrate that our approach outperforms voxel-based two-stage methods, with a 9.5% improvement in mAP over the baseline on the nuScenes test dataset and a 1.61% improvement in hard AP over the baseline on the KITTI benchmark.
- Published
- 2024
- Full Text
- View/download PDF
26. Body and Head Orientation Estimation From Low-Resolution Point Clouds in Surveillance Settings
- Author
-
Onur N. Tepencelik, Wenchuan Wei, Pamela C. Cosman, and Sujit Dey
- Subjects
Autism spectrum disorder ,body orientation ,head orientation ,LiDAR sensor ,point cloud ,triadic conversation ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
We propose a system that estimates people’s body and head orientations using low-resolution point cloud data from two LiDAR sensors. Our models make accurate estimations in real-world conversation settings where subjects move naturally with varying head and body poses, while seated around a table. The body orientation estimation model uses ellipse fitting while the head orientation estimation model combines geometric feature extraction with an ensemble of neural network regressors. Our models achieve a mean absolute estimation error of 5.2 degrees for body orientation and 13.7 degrees for head orientation. Compared to other body/head orientation estimation systems that use RGB cameras, our proposed system uses LiDAR sensors to preserve user privacy, while achieving comparable accuracy. Unlike other body/head orientation estimation systems, our sensors do not require a specified close-range placement in front of the subject, enabling estimation from a surveillance viewpoint which produces low-resolution data. This work is the first to attempt head orientation estimation using point clouds in a low-resolution surveillance setting. We compare our model to two state-of-the-art head orientation estimation models that are designed for high-resolution point clouds, which yield higher estimation errors on our low-resolution dataset. We also present an application of head orientation estimation by quantifying behavioral differences between neurotypical and autistic individuals in triadic (three-way) conversations. Significance tests show that autistic individuals display significantly different behavior compared to neurotypical individuals in distributing attention between conversational parties, suggesting that the approach could be a component of a behavioral analysis or coaching system.
- Published
- 2024
- Full Text
- View/download PDF
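The ellipse-fitting step of the body orientation model above can be sketched with PCA on a top-down 2D slice of the point cloud: the major axis of the fitted ellipse follows the shoulder line, and the facing direction is its perpendicular. This is an illustrative sketch of the idea, not the authors' implementation.

```python
import numpy as np

def body_orientation_deg(points_xy):
    """Estimate body orientation from a top-down 2D point cloud slice.

    Fits the principal axes of the points (equivalent to an ellipse fit
    via PCA): the major axis follows the shoulder line. Returns the
    major-axis angle in degrees in [0, 180).
    """
    pts = np.asarray(points_xy, dtype=float)
    centred = pts - pts.mean(axis=0)
    cov = np.cov(centred.T)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    major = eigvecs[:, -1]                     # largest-variance direction
    return float(np.degrees(np.arctan2(major[1], major[0])) % 180.0)

# Synthetic points spread mostly along the 45-degree diagonal
# (a "shoulder line" seen from above):
rng = np.random.default_rng(1)
s = rng.normal(0.0, 1.0, 200)                  # spread along the major axis
n = rng.normal(0.0, 0.05, 200)                 # thin minor-axis noise
pts = np.column_stack([s + n, s - n])          # roughly a 45-degree line
angle = body_orientation_deg(pts)
```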
27. STATISTICAL ANALYSIS OF JAYWALKING CONFLICTS BY A LIDAR SENSOR
- Author
-
Alireza ANSARIYAR and Mansoureh JEIHANI
- Subjects
lidar sensor ,jaywalking ,post encroachment time threshold (pet) ,vehicle-pedestrian conflicts ,safety ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Transportation engineering ,TA1001-1280 - Abstract
The light detection and ranging (Lidar) sensor is a remote sensing technology that can be used to monitor pedestrians who cross an intersection outside of a designated crosswalk or crossing area, which is a key safety application of lidar sensors at signalized intersections. Accordingly, a Lidar sensor was installed at the Hillen Rd - E 33rd St. intersection in Baltimore city to collect real-time jaywalkers’ traffic data. In order to propose safety improvement considerations for pedestrians, among the most vulnerable road users, the paper investigates the reasons for jaywalking and its potential to increase the frequency and severity of vehicle-pedestrian conflicts. In a three-month interval from December 2022 to February 2023, a total of 585 jaywalkers were detected. By developing a generalized linear regression model and using K-means clustering, the independent variables most highly correlated with the frequency of jaywalking were identified, including the speed of jaywalkers, the average PET of vehicle-pedestrian conflicts, the frequency of vehicle-pedestrian conflicts, and the weather condition. The volume of vehicles and pedestrians and road infrastructure characteristics such as medians, building entrances, vegetation on medians, and bus/taxi stops were investigated, and the results showed that as the frequency of jaywalking increases, vehicle-pedestrian conflicts occur more frequently and with greater severity. In addition, jaywalking speed increases the likelihood of severe vehicle-pedestrian conflicts. Also, 397 pedestrians (68% of all jaywalkers) jaywalked during cloudy and rainy days, making weather a significant factor in the increase in jaywalking.
- Published
- 2023
- Full Text
- View/download PDF
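The PET measure central to the abstract above is simple to compute from timestamped conflict-point events; a minimal sketch follows, with illustrative severity thresholds rather than any classification proposed in the papers listed here.

```python
def pet_seconds(first_exit_t, second_arrival_t):
    """Post-encroachment time: the gap between the first road user leaving
    the conflict point and the second one arriving at it (seconds)."""
    return second_arrival_t - first_exit_t

def classify_pet(pet, severe=1.0, moderate=3.0):
    """Bucket a PET value into severity classes.

    The thresholds (1 s severe, 3 s moderate) are illustrative assumptions,
    not values from the paper.
    """
    if pet < severe:
        return "severe"
    if pet < moderate:
        return "moderate"
    return "low"

# A pedestrian clears the conflict point at t=12.4 s;
# a vehicle arrives at the same point at t=13.1 s:
pet = pet_seconds(12.4, 13.1)
label = classify_pet(pet)
```

Smaller PET values mean the two road users missed each other by less time, hence a more severe conflict.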
28. A LiDAR-based methodology for monitoring and collecting microscopic bicycle flow parameters on bicycle facilities.
- Author
-
Nateghinia, Ehsan, Beitel, David, Lesani, Asad, and Miranda-Moreno, Luis F.
- Subjects
CYCLING ,BICYCLES ,LIDAR ,COMPUTER systems ,TEST methods - Abstract
Research on microscopic bicycle flow parameters (speed, headway, spacing, and density) is limited given the lack of methods to collect data in large quantities automatically. This paper introduces a novel methodology to compute bicycle flow parameters based on a LiDAR system composed of two single-beam sensors. Instantaneous mid-block raw speed for each cyclist in the traffic stream is measured using LiDAR sensor signals at seven bidirectional and three unidirectional cycling facilities. A Multilayer Perceptron Neural Network is proposed to improve the accuracy of speed measures. The LiDAR system computes the headway and spacing between consecutive cyclists using time-stamped detections and speed values. Density is estimated from spacing. For model calibration and testing, 101 hours of video data collected at ten mid-block sites are used. The performance of the cyclist speed estimation is evaluated by comparing it to ground truth video. When the dataset is randomly split into training and test sets, the RMSE and MAPE of the speed estimation method on the test set are 0.61 m/s and 7.1%, respectively. In another scenario, when the model is trained with nine of the ten sites and tested on data from the remaining site, the RMSE and MAPE are 0.69 m/s and 8.2%, respectively. Lastly, the relationships governing hourly flow rate, average speed, and estimated density are studied. The data were collected during the peak cycling season at high-flow sites in Montreal, Canada; however, none of the facilities reached or neared capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
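The measurement principle behind the two single-beam setup above can be sketched directly: speed from the crossing-time difference over the sensor spacing, headway from consecutive arrivals, spacing as speed times headway, and density as the inverse of spacing. This omits the paper's neural-network speed correction; the sensor spacing is an assumed value.

```python
def flow_parameters(t_sensor_a, t_sensor_b, sensor_spacing=1.0):
    """Derive microscopic bicycle-flow parameters from two single-beam sensors.

    Each cyclist triggers sensor A then sensor B (`sensor_spacing` metres
    apart). Simplified sketch of the measurement principle only.
    """
    speeds, headways, spacings = [], [], []
    for i, (ta, tb) in enumerate(zip(t_sensor_a, t_sensor_b)):
        v = sensor_spacing / (tb - ta)           # raw speed [m/s]
        speeds.append(v)
        if i > 0:
            h = ta - t_sensor_a[i - 1]           # time headway [s]
            headways.append(h)
            spacings.append(v * h)               # distance spacing [m]
    densities = [1.0 / s for s in spacings]      # cyclists per metre
    return speeds, headways, spacings, densities

# Two cyclists at 5 m/s, 4 s apart:
speeds, headways, spacings, densities = flow_parameters(
    t_sensor_a=[10.0, 14.0], t_sensor_b=[10.2, 14.2])
```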
29. Velocity Estimation from LiDAR Sensors Motion Distortion Effect.
- Author
-
Haas, Lukas, Haider, Arsalan, Kastner, Ludwig, Zeh, Thomas, Poguntke, Tim, Kuba, Matthias, Schardt, Michael, Jakobi, Martin, and Koch, Alexander W.
- Subjects
- *
MOTION detectors , *OBJECT recognition (Computer vision) , *MOTION , *OPTICAL radar , *LIDAR , *RELATIVE velocity - Abstract
Many modern automated vehicle sensor systems use light detection and ranging (LiDAR) sensors. The prevailing technology is scanning LiDAR, where a collimated laser beam illuminates objects sequentially point-by-point to capture 3D range data. In current systems, the point clouds from the LiDAR sensors are mainly used for object detection. To estimate the velocity of an object of interest (OoI) in the point cloud, object tracking or sensor data fusion is needed. Scanning LiDAR sensors show the motion distortion effect, which occurs when objects have a relative velocity to the sensor. This effect is often filtered out using sensor data fusion so that an undistorted point cloud can be used for object detection. In this study, we developed a method using an artificial neural network to estimate an object's velocity and direction of motion in the sensor's field of view (FoV) based on the motion distortion effect, without any sensor data fusion. This network was trained and evaluated with a synthetic dataset featuring the motion distortion effect. With the method presented in this paper, one can estimate the velocity and direction of an OoI that moves independently from the sensor from a single point cloud using only one sensor. The method achieves a root mean squared error (RMSE) of 0.1187 m s−1 and a two-sigma confidence interval of [ − 0.0008 m s−1, 0.0017 m s−1] for the axis-wise estimation of an object's relative velocity, and an RMSE of 0.0815 m s−1 and a two-sigma confidence interval of [ 0.0138 m s−1, 0.0170 m s−1] for the estimation of the resultant velocity. The extracted velocity information (4D-LiDAR) is available for motion prediction and object tracking and can lead to more reliable velocity data due to greater redundancy for sensor data fusion. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Improved semantic segmentation network using normal vector guidance for LiDAR point clouds.
- Author
-
Minsung Kim, Inyoung Oh, Dongho Yun, and Kwanghee Ko
- Subjects
POINT cloud ,LIDAR ,OPTICAL radar ,FEATURE extraction ,DRIVERLESS cars - Abstract
As Light Detection and Ranging (LiDAR) sensors become increasingly prevalent in the field of autonomous driving, the need for accurate semantic segmentation of three-dimensional points grows accordingly. To address this challenge, we propose a novel network model that enhances segmentation performance by utilizing normal vector information. Firstly, we present a method to improve the accuracy of normal estimation by using the intensity and reflection angles of the light emitted from the LiDAR sensor. Secondly, we introduce a novel local feature aggregation module that integrates normal vector information into the network to improve the performance of local feature extraction. The normal information is closely related to the local structure of the shape of an object, which helps the network to associate unique features with corresponding objects. We propose four different structures for local feature aggregation, evaluate them, and choose the one that shows the best performance. Experiments using the SemanticKITTI dataset demonstrate that the proposed architecture outperforms both the baseline model, RandLA-Net, and other existing methods, achieving a mean intersection over union of 57.9%. Furthermore, it shows highly competitive performance compared with RandLA-Net for small and dynamic objects in a real road environment. For example, it yielded 95.2% for cars, 47.4% for bicycles, 41.0% for motorcycles, 57.4% for bicyclists, and 53.2% for pedestrians. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
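A standard baseline for the normal estimation discussed above is PCA on a point's neighbourhood: the normal is the smallest-eigenvalue direction of the local covariance, oriented toward the sensor. The paper improves on this baseline using intensity and reflection angles; the sketch below shows only the PCA step.

```python
import numpy as np

def estimate_normal(neighbors, sensor_origin=(0.0, 0.0, 0.0)):
    """Estimate a surface normal from a point's nearest neighbours.

    The normal is the eigenvector of the local scatter matrix with the
    smallest eigenvalue, flipped to face the sensor. A generic PCA
    baseline, not the authors' intensity-refined method.
    """
    pts = np.asarray(neighbors, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, eigvecs = np.linalg.eigh(centred.T @ centred)  # ascending eigenvalues
    normal = eigvecs[:, 0]                            # smallest-variance axis
    to_sensor = np.asarray(sensor_origin) - pts.mean(axis=0)
    if np.dot(normal, to_sensor) < 0:                 # orient toward sensor
        normal = -normal
    return normal

# Neighbours lying on the plane z = 5; the normal should be the z-axis,
# flipped to point back toward the sensor at the origin:
nb = [[1, 0, 5], [0, 1, 5], [-1, 0, 5], [0, -1, 5], [0.5, 0.5, 5]]
n = estimate_normal(nb)
```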
31. Industrial Application Use Cases of LiDAR Sensors Beyond Autonomous Driving
- Author
-
Poenicke, Olaf, Groneberg, Maik, Sopauschke, Daniel, Richter, Klaus, Treuheit, Nils, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Terzi, Sergio, editor, Madani, Kurosh, editor, Gusikhin, Oleg, editor, and Panetto, Hervé, editor
- Published
- 2023
- Full Text
- View/download PDF
32. Bayesian Approach for Static Object Detection and Localization in Unmanned Ground Vehicles
- Author
-
Doan, Luan Cong, Tran, Hoa Thi, Nguyen, Dung, Xhafa, Fatos, Series Editor, Dao, Nhu-Ngoc, editor, Thinh, Tran Ngoc, editor, and Nguyen, Ngoc Thanh, editor
- Published
- 2023
- Full Text
- View/download PDF
33. Ldetect, IOT Based Pothole Detector
- Author
-
Balakrishnan, Sumathi, Guan, Low Jun, Peng, Lee Yun, Juin, Tan Vern, Hussain, Manzoor, Sagaladinov, Sultan, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Peng, Sheng-Lung, editor, Jhanjhi, Noor Zaman, editor, Pal, Souvik, editor, and Amsaad, Fathi, editor
- Published
- 2023
- Full Text
- View/download PDF
34. Lidar Sensor for the Enhancement of the Architectural Heritage
- Author
-
Perticarini, Maurizio, Marzocchella, Valeria, Basso, Alessandro, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Villa, Daniele, editor, and Zuccoli, Franca, editor
- Published
- 2023
- Full Text
- View/download PDF
35. Validating Adverse Weather Influence on LiDAR with an Outdoor Rain Simulator
- Author
-
Brzozowski, Michał, Parczewski, Krzysztof, Kacprzyk, Janusz, Series Editor, Prentkovskis, Olegas, editor, Yatskiv (Jackiva), Irina, editor, Skačkauskas, Paulius, editor, Maruschak, Pavlo, editor, and Karpenko, Mykola, editor
- Published
- 2023
- Full Text
- View/download PDF
36. A Robust Vehicle Detection Model for LiDAR Sensor Using Simulation Data and Transfer Learning Methods
- Author
-
Kayal Lakshmanan, Matt Roach, Cinzia Giannetti, Shubham Bhoite, David George, Tim Mortensen, Manduhu Manduhu, Behzad Heravi, Sharadha Kariyawasam, and Xianghua Xie
- Subjects
transfer learning ,vehicle detection ,LiDAR sensor ,faster-RCNN ,synthetic LiDAR data generation ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Vehicle detection in parking areas provides the spatial and temporal utilisation of parking spaces. Parking observations are typically performed manually, limiting the temporal resolution due to the high labour cost. This paper uses simulated data and transfer learning to build a robust real-world model for vehicle detection and classification from single-beam LiDAR of a roadside parking scenario. The paper presents a synthetically augmented transfer learning approach for LiDAR-based vehicle detection and the generation of synthetic LiDAR data. The synthetic augmentation supplements the small real-world data set and allows the development of data-handling techniques, while also increasing the robustness and overall accuracy of the model. Experiments show that the method can be used for fast deployment of a vehicle detection model using a LiDAR sensor.
- Published
- 2023
- Full Text
- View/download PDF
37. Detection and Navigation of Unmanned Vehicles in Wooded Environments Using Light Detection and Ranging Sensors.
- Author
-
Zhiwei Zhang, Jyun-Yu Jhang, and Cheng-Jian Lin
- Abstract
With the advancement of automatic navigation, navigation control has become an indispensable core technology in the movement of unmanned vehicles. In particular, research on navigation control in outdoor wooded environments, which are more complex, less controlled, and more unpredictable than indoor environments, has received widespread attention. To realize movement control and obstacle avoidance for unmanned vehicles in unknown environments, in this study we use light detection and ranging (LiDAR) sensors to sense the surrounding environment. By meshing the point cloud returned by the LiDAR into planes, feasible regions can be established in real time. At the same time, a stable obstacle-avoiding navigation path is planned in the unknown environment using the artificial potential field algorithm. To evaluate the proposed LiDAR detection method, we conducted a real-world wooded-environment navigation experiment with an independently developed unmanned vehicle with Ackermann steering geometry. Experimental results indicate that the proposed method can effectively detect obstacles. The accuracy requirement is to arrive within 30 cm of the navigation target, and the experimental results show that the average navigation success rate of the proposed method is as high as 85%. The experimental results demonstrate that the system can navigate stably and safely in scenarios with different unknown environments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
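The artificial potential field planner mentioned above can be sketched with the classic attractive/repulsive formulation; the gains, influence distance, and step size below are illustrative, not tuned values from the paper.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=5.0, d0=3.0, step=0.1):
    """Take one artificial-potential-field step toward a goal.

    The attractive force pulls toward the goal; each obstacle within the
    influence distance d0 pushes away with magnitude growing as distance
    shrinks. The robot moves a fixed step along the net force direction.
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                       # attraction
    for obs in np.asarray(obstacles, float):
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                               # repulsion in range
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    norm = np.linalg.norm(force)
    return pos + step * force / norm if norm > 0 else pos

# Walk from the origin toward (10, 0) past an obstacle (a tree) at (5, 0.5):
pos = np.array([0.0, 0.0])
for _ in range(200):
    pos = apf_step(pos, goal=[10.0, 0.0], obstacles=[[5.0, 0.5]])
reached = np.linalg.norm(pos - [10.0, 0.0]) < 0.2
```

The known APF weakness (local minima between symmetric obstacles) is why practical systems pair it with a global planner or perturbation scheme.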
38. Investigating the collected vehicle-pedestrian conflicts by a LIDAR sensor based on a new Post Encroachment Time Threshold (PET) classification at signalized intersections.
- Author
-
Ansariyar, A., Ardeshiri, A., and Jeihani, M.
- Subjects
- *
SIGNALIZED intersections , *LIDAR , *PEDESTRIANS , *OBJECT recognition (Computer vision) , *ROAD users , *DETECTORS - Abstract
LIDAR is a recent technology that highlights objects, detects movement, and manages object recognition of both stationary and non-stationary objects. A LIDAR sensor is capable of recording traffic data including the number of passing vehicles, pedestrians, and bicyclists, the speed of vehicles, and the number of conflicts among different road users. A LIDAR sensor collects high-density point clouds at high frequency, offering huge potential to capture details that would otherwise not be possible. Accordingly, as part of an intelligent mobility initiative, a Velodyne LIDAR sensor was installed at the Cold Spring Ln - Hillen Rd intersection in Baltimore city to collect real-time traffic data that would be helpful to current road users. The data obtained by the LIDAR sensor was analyzed over the time interval from May 1st to August 31st, 2022. The daily vehicle volumes, including passenger cars, buses, and trucks, and also vulnerable road users, including pedestrians and bicyclists, were analyzed. The installed LIDAR sensor at the Cold Spring Ln - Hillen Rd intersection records the Post Encroachment Time Threshold (PET) between two cars, vehicle-pedestrians, and car-bicyclists. PET is the time between the departure of the encroaching vehicle/pedestrian from the conflict point and the arrival of the vehicle/pedestrian with the right-of-way at the conflict point. The frequency of vehicle-pedestrian conflicts and the severity of conflicts were analyzed, and the results showed that 848 conflicts occurred during a period of four months. A new methodology for classifying PET values was proposed. Finally, a new risk index was suggested that simultaneously considers the frequency and severity of conflicts, vehicle and pedestrian volumes, and the trajectories of all road users. The safety MOE confirmed that safety considerations should be proposed for the western and southern approaches to the intersection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. Investigating the accuracy rate of vehicle-vehicle conflicts by LIDAR technology and microsimulation in VISSIM and AIMSUN.
- Author
-
Ansariyar, A. and Taherpour, A.
- Subjects
- *
MICRO air vehicles , *LIDAR , *STANDARD deviations , *OPTICAL radar , *REMOTE sensing , *SIGNALIZED intersections - Abstract
Light Detection and Ranging (LiDAR) technology is a remote sensing technique that can be applied to determine the spectral signature and differential position of objects emitting radiation. To collect real-time traffic data at a signalized intersection, a LIDAR sensor was installed at the Cold Spring Ln - Hillen Rd intersection in Baltimore city, USA. The installed LIDAR sensor can record the Post Encroachment Time Threshold (PET) and Time-to-Collision (TTC) indicators as two principal safety measurements between two motorized vehicles (including car-car, car-bus, car-truck, and bus-truck). PET implies a potential danger, while TTC describes an imminent danger. The study investigates the accuracy of the PET and TTC obtained from the Surrogate Safety Assessment Model (SSAM), a software application developed by the FHWA, compared with the results obtained using LIDAR technology. SSAM is free, open-source software for performing statistical analysis of vehicle trajectory data output from microscopic traffic simulation models. Accordingly, the intersection was modeled in VISSIM and AIMSUN, and the vehicle trajectories output by the microsimulations were imported into SSAM to compute a number of surrogate measures of safety for each conflict. The results highlighted that 857, 966, and 959 conflicts were obtained by the LIDAR sensor, VISSIM, and AIMSUN, respectively, over the same time interval. The Root Mean Square Error (RMSE) measure was used to evaluate the accuracy rate, and the results showed that the TTC and PET values from the AIMSUN trajectories are 34% and 26% more accurate, respectively, than those from the VISSIM trajectories. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Efficient Object Detection Using Semantic Region of Interest Generation with Light-Weighted LiDAR Clustering in Embedded Processors.
- Author
-
Jung, Dongkyu, Chong, Taewon, and Park, Daejin
- Subjects
- *
OBJECT recognition (Computer vision) , *LIDAR , *CONVOLUTIONAL neural networks , *POINT cloud , *MACHINE learning - Abstract
Many fields are currently investigating the use of convolutional neural networks to detect specific objects in three-dimensional data. While algorithms based on three-dimensional data are more stable and less sensitive to lighting conditions than algorithms based on two-dimensional image data, they require more computation, making it difficult to run CNN algorithms on three-dimensional data in lightweight embedded systems. In this paper, we propose a method to process three-dimensional data through a simple algorithm instead of complex operations such as convolution in a CNN, and utilize its physical characteristics to generate ROIs for a CNN object detection algorithm based on two-dimensional image data. After preprocessing the LiDAR point cloud data, it is separated into individual objects through clustering, and semantic detection is performed through a classifier trained by machine learning on extracted physical characteristics. The final object recognition is performed through a 2D-based object detection algorithm that bypasses the process of tracking bounding boxes by generating individual 2D image regions from the location and size of objects initially detected by semantic detection. This allows us to utilize the physical characteristics of 3D data to improve the accuracy of 2D image-based object detection algorithms, even in environments where it is difficult to collect data from camera sensors, resulting in a lighter system than 3D data-based object detection algorithms. The proposed model achieved an accuracy of 81.84% with the YOLO v5 algorithm on an embedded board, which is 1.92% higher than the typical model. The proposed model achieves 47.41% accuracy in an environment with 40% higher brightness and 54.12% accuracy in an environment with 40% lower brightness, which are 8.97% and 13.58% higher than the typical model, respectively, so it can achieve high accuracy even in non-optimal brightness environments. The proposed technique also has the advantage of reducing the execution time depending on the operating environment of the detection model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
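The light-weight clustering-then-ROI stage this abstract describes could be sketched roughly as follows; the grid-connectivity clustering, the 2D point projection, and the cell size are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: cluster a pre-processed 2D projection of LiDAR points
# by grid connectivity, then use each cluster's extent as a candidate ROI
# for a 2D image-based detector.

def cluster_points(points, cell=0.5):
    """Group (x, y) points whose occupied grid cells touch (8-neighbourhood)."""
    occupied = {}
    for p in points:
        occupied.setdefault((int(p[0] // cell), int(p[1] // cell)), []).append(p)
    clusters, seen = [], set()
    for start in occupied:
        if start in seen:
            continue
        stack, members = [start], []
        seen.add(start)
        while stack:  # flood fill over neighbouring occupied cells
            cx, cy = stack.pop()
            members.extend(occupied[(cx, cy)])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in occupied and nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
        clusters.append(members)
    return clusters

def cluster_roi(cluster):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) of one cluster."""
    xs = [p[0] for p in cluster]
    ys = [p[1] for p in cluster]
    return (min(xs), min(ys), max(xs), max(ys))
```

The boxes returned by `cluster_roi` would then be mapped into image coordinates before being handed to the 2D detector.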
41. Exploring the Use of Particle and Kalman Filters for Obstacle Detection in Mobile Robots.
- Author
-
Gyenes, Zoltán, Bölöni, Ladislau, and Szádeczky-Kardoss, Emese Gincsainé
- Subjects
KALMAN filtering ,MOBILE robots ,LIDAR ,VELOCITY - Abstract
The present study aims to explore the adaptation of estimation methodologies, specifically Particle filters and Kalman filters, for the purpose of determining the position and velocity vector of obstacles within the operational workspace of mobile robots. These algorithms are commonly employed in the motion planning tasks of mobile robots for the estimation of their own position. The proposed methodology utilizes LiDAR sensor data to estimate the position vectors and calculate the velocity vectors of obstacles. Additionally, an uncertainty parameter can be determined using the introduced perception method. The performance of the newly adapted algorithms is evaluated through comparison of the absolute error in position and velocity vector estimations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
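A minimal version of the Kalman-filter adaptation the abstract describes, run independently per axis with a constant-velocity model; the time step and noise settings are illustrative assumptions, not the paper's values:

```python
# Per-axis constant-velocity Kalman filter: estimate an obstacle's position
# and velocity from noisy LiDAR position measurements.

class KalmanCV1D:
    def __init__(self, p0, dt=0.1, q=0.01, r=0.1):
        self.x = [p0, 0.0]                    # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]     # state covariance
        self.dt, self.q, self.r = dt, q, r    # step, process/measurement noise

    def step(self, z):
        dt, q, r = self.dt, self.q, self.r
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        p, v = self.x
        xp = [p + dt * v, v]
        P = self.P
        Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
               P[0][1] + dt * P[1][1]],
              [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with scalar position measurement z (H = [1, 0])
        S = Pp[0][0] + r
        K = [Pp[0][0] / S, Pp[1][0] / S]
        y = z - xp[0]
        self.x = [xp[0] + K[0] * y, xp[1] + K[1] * y]
        self.P = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
                  [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
        return self.x
```

Feeding the filter a track of position measurements yields the velocity vector the obstacle-avoidance layer needs; a particle filter would replace the predict/update equations with sampling and weighting.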
42. Remarks on Geomatics Measurement Methods Focused on Forestry Inventory.
- Author
-
Pavelka, Karel, Matoušková, Eva, and Pavelka Jr., Karel
- Subjects
- *
FOREST management , *FORESTS & forestry , *OPTICAL scanners , *GEOMATICS , *AERIAL photogrammetry , *DRONE aircraft , *SECURE Sockets Layer (Computer network protocol) , *EMISSION inventories , *LOGGING - Abstract
This contribution focuses on a comparison of modern geomatics technologies for the derivation of growth parameters in forest management. The present text summarizes the results of our measurements over the last five years. As a case project, a mountain spruce forest with planned forest logging was selected. In this locality, terrestrial laser scanning (TLS) and terrestrial and drone close-range photogrammetry were used experimentally, as were PLS mobile technology (personal laser scanning) and ALS (aerial laser scanning). Results concerning data joining, usability, and the economics of all technologies for forest management and ecology are discussed. ALS is expensive for small areas, and its results were not suitable for detailed parameter derivation. The RPAS (remotely piloted aircraft systems, known as "drones") method of data acquisition combines the benefits of close-range and aerial photogrammetry. If the approximate height and number of the trees are known, one can approximately calculate the extracted cubage of wood mass before forest logging. The use of conventional terrestrial close-range photogrammetry and TLS proved inappropriate and practically unusable in our case, and also in standard forestry practice after consultation with forestry workers. On the other hand, PLS is very simple to use and allows ordered parameters to be defined quickly and further quantities, such as the cubic volume of wood stockpiles, to be calculated. The results from our research into forestry show that drones can be used to estimate quantities (wood cubature) and inspect the health status of spruce forests. However, PLS currently seems to be the best solution in forest management for deriving forest parameters. Our results are mainly oriented toward practice and in no way diminish the general research in this area. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
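The abstract's remark that wood cubage can be estimated from approximate tree height and count can be illustrated with a standard stand-volume formula; the form factor and mean diameter are hypothetical inputs, not values from the study:

```python
import math

def stand_volume(n_trees, mean_height_m, mean_dbh_m, form_factor=0.5):
    """Rough extracted wood cubage (m^3) before logging, from tree count,
    mean height, and mean diameter at breast height (DBH). The form factor
    correcting for stem taper is an illustrative assumption."""
    basal_area = math.pi / 4.0 * mean_dbh_m ** 2   # cross-section per stem, m^2
    return n_trees * form_factor * basal_area * mean_height_m
```

For example, 100 spruces of 25 m mean height and 0.3 m mean DBH give a cubage on the order of 90 m^3 under these assumptions.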
43. Integrating unmanned and manned UAVs data network based on combined Bayesian belief network and multi-objective reinforcement learning algorithm
- Author
-
Richard C. Millar, Leila Hashemi, Armin Mahmoodi, Robert Walter Meyer, and Jeremy Laliberte
- Subjects
trajectory optimization ,multi-objective reinforcement algorithm ,Bayesian belief network ,unmanned aerial vehicle (UAV) ,LIDAR sensor ,Motor vehicles. Aeronautics. Astronautics ,TL1-4050 - Abstract
This paper presents and assesses the feasibility and potential of a novel concept: the operation of multiple Unmanned Aerial Vehicles (UAVs) commanded and supported by a manned "Tender" air vehicle carrying a pilot and flight manager(s). The "Tender" is equipped to flexibly and economically monitor and manage multiple diverse UAVs over otherwise inaccessible terrain through wireless communication. The proposed architecture enables operations and analysis supported by the means to detect, assess, and accommodate change and hazards on the spot with effective human observation and coordination. Further, this paper seeks the optimal trajectories for UAVs to collect data from sensors in a predefined continuous space. We formulate the path-planning problem for a cooperative, diverse swarm of UAVs tasked with optimizing multiple objectives simultaneously: maximizing the data accumulated within a given flight time, subject to cloud data-processing constraints, while minimizing the probable risk imposed during the UAVs' mission. The risk assessment model determines risk indicators using an integrated Specific Operation Risk Assessment and Bayesian belief network approach, and the resulting analysis is weighted through the analytic hierarchy process (AHP) ranking model. Because the problem is formulated as a convex optimization model, we propose a low-complexity multi-objective reinforcement learning (MORL) algorithm with a provable performance guarantee to solve it efficiently. We show that the MORL architecture can be successfully trained and allows each UAV to map each observation of the network state to an action to make optimal movement decisions. This network architecture enables the UAVs to balance multiple objectives. Estimated MSE measures show that the algorithm produced decreasing errors in the learning process with increasing epoch number.
- Published
- 2023
- Full Text
- View/download PDF
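The AHP weighting step mentioned in the abstract can be sketched with the row geometric-mean approximation of the priority vector; the example pairwise comparison matrix is hypothetical, not data from the paper:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the row geometric-mean method.
    pairwise[i][j] states how much more important criterion i is than j
    (reciprocal matrix: pairwise[j][i] == 1 / pairwise[i][j])."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]
```

These normalized weights would then scale the risk indicators produced by the Bayesian belief network before they enter the MORL reward.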
44. A Methodology to Model the Rain and Fog Effect on the Performance of Automotive LiDAR Sensors.
- Author
-
Haider, Arsalan, Pigniczki, Marcell, Koyama, Shotaro, Köhler, Michael H., Haas, Lukas, Fink, Maximilian, Schardt, Michael, Nagase, Koji, Zeh, Thomas, Eryildirim, Abdulkadir, Poguntke, Tim, Inoue, Hideo, Jakobi, Martin, and Koch, Alexander W.
- Subjects
- *
AUTOMOTIVE sensors , *OPTICAL radar , *LIDAR , *TIME-domain analysis , *MIE scattering , *FOG - Abstract
In this work, we introduce a novel approach to model the rain and fog effect on the light detection and ranging (LiDAR) sensor performance for the simulation-based testing of LiDAR systems. The proposed methodology allows for the simulation of the rain and fog effect using the rigorous applications of the Mie scattering theory on the time domain for transient and point cloud levels for spatial analyses. The time domain analysis permits us to benchmark the virtual LiDAR signal attenuation and signal-to-noise ratio (SNR) caused by rain and fog droplets. In addition, the detection rate (DR), false detection rate (FDR), and distance error d_error of the virtual LiDAR sensor due to rain and fog droplets are evaluated on the point cloud level. The mean absolute percentage error (MAPE) is used to quantify the simulation and real measurement results on the time domain and point cloud levels for the rain and fog droplets. The results of the simulation and real measurements match well on the time domain and point cloud levels if the simulated and real rain distributions are the same. The real and virtual LiDAR sensor performance degrades more under the influence of fog droplets than in rain. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
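The attenuation that such a model benchmarks follows the two-way Beer-Lambert law; a minimal sketch, where the extinction coefficient (which the paper derives from Mie theory and the drop-size distribution) is simply given as an input, and geometric spreading and target reflectivity are omitted:

```python
import math

def received_power(p0, distance_m, alpha_per_m):
    """Two-way Beer-Lambert attenuation of a LiDAR return through rain or fog.
    alpha_per_m is the medium's extinction coefficient; in the paper's setting
    it would come from Mie scattering theory, here it is just a parameter."""
    return p0 * math.exp(-2.0 * alpha_per_m * distance_m)
```

Because fog typically yields a larger extinction coefficient than rain, the returned power (and hence SNR) drops faster with range in fog, consistent with the abstract's conclusion.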
45. Statistical analysis of vehicle-vehicle conflicts with a LIDAR sensor in a signalized intersection.
- Author
-
Ansariyar, A. and Taherpour, A.
- Subjects
- *
SIGNALIZED intersections , *ROAD interchanges & intersections , *LIDAR , *K-means clustering , *STATISTICS , *ROAD users - Abstract
The LIDAR sensor is capable of recording traffic data including the number of passing vehicles, pedestrians, and bicyclists, the speed of vehicles, and the number of conflicts among different road users. As part of intelligent mobility, and in order to collect real-time traffic data and investigate the safety of different road users, particularly motorized vehicles and pedestrians, on different approaches of the intersection, a LIDAR sensor was installed at the Cold Spring Ln - Hillen Rd intersection in Baltimore city. The intersection was chosen based on substantial vehicle traffic, proximity to Morgan State University, and the history of recent crashes. One of the efficient capabilities of the installed LIDAR sensor is recording the "Post Encroachment Threshold (PET)" between two vehicles, vehicle-pedestrian, and vehicle-bicyclist pairs. The paper concentrates on vehicle-vehicle (including car-car, car-bus, car-truck, and bus-truck) conflicts. The frequency and severity of conflicts were analyzed, and the results highlighted that 857 conflicts were recorded during a 5-month time interval. The frequency and severity of conflicts were investigated by three methods: leading-following vehicles, right-turn and left-turn movements, and different phases of the traffic signal. The critical zones were recognized, and K-means clustering was performed on the severity-of-conflict subcategories while selecting the optimal number of clusters. The significant error of each subcategory was specified, and the error values were compared with Bonferroni test error values. The results demonstrated that the error values from K-means clustering are consistent with the error values from the Bonferroni test. Furthermore, the statistical analysis showed that: 1) in terms of conflict frequency and severity, the eastern and middle zones are crucial; 2) all errors are consistent with a 95% confidence interval; and 3) the severity of conflicts by different movements has a significant correlation with speed at the conflict point, hourly time interval, weather, and the vehicle volume entering each zone. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
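The K-means step applied to conflict-severity subcategories can be sketched for one-dimensional severity values (e.g. PET in seconds); the initialization and the choice of k are illustrative, not the paper's setup:

```python
def kmeans_1d(values, k, iters=100):
    """Plain 1D k-means, e.g. to group conflicts by a PET-based severity
    measure. Assumes k >= 2; centers start spread across the data range."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[nearest].append(v)
        # Keep the old center if a cluster empties out.
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return sorted(centers)
```

A small PET value (the two vehicles nearly co-occupied the conflict point) would land in the low-center, high-severity cluster.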
46. Design of a wind speed and direction measurement device using laser-based sensors.
- Author
-
Işıklı, İbrahim, Köse, Bayram, and Sağbaş, Mehmet
- Subjects
- *
NOISE control , *LASERS , *WIND speed , *ANEMOMETER , *ERROR analysis in mathematics - Abstract
Different types of anemometers and direction-determining devices are used to measure wind speed and direction. In particular, low-cost and accurate measuring devices are required for the production and estimation of electricity generated from wind energy. Therefore, an anemometer was designed using appropriate low-cost laser distance sensors, microcontrollers, and integrated electronic circuit equipment. The designed anemometer operates on the distance from the measurement center, which changes according to wind speed and direction; the wind speed and direction data are obtained by digitally calculating this distance. Following a different approach from existing wind speed and direction measurement techniques, the data obtained with this method were compared against a real anemometer and an error analysis was performed. The relative error value was found to be 0.01685 as a result of the error analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
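The error analysis against a reference anemometer amounts to a relative-error comparison; a minimal sketch (the exact formula the authors used is not stated in the abstract, so this is one plausible reading):

```python
def mean_relative_error(measured, reference):
    """Mean relative error of the laser-based device against a reference
    anemometer, averaged over paired wind-speed readings."""
    pairs = list(zip(measured, reference))
    return sum(abs(m - r) / r for m, r in pairs) / len(pairs)
```

A result of 0.01685, as reported, would correspond to readings deviating from the reference by about 1.7% on average.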
47. Adaptive path planning for unknown environment monitoring.
- Author
-
Gomathi, Nandhagopal and Rajathi, Krishnamoorthi
- Subjects
MOBILE robots ,OPTICAL radar ,LIDAR ,ROBOT design & construction - Abstract
The purpose of this paper is to offer a unique adaptive path planning framework to address a new challenge known as the unknown environment Persistent Monitoring Problem (PMP). To identify the occurrence location and likelihood of unknown events, an unmanned ground vehicle (UGV) equipped with Light Detection and Ranging (LiDAR) and a camera is used to record such events in agricultural land. A certain level of detection capability must be the distinct monitoring priority in order to keep track of them to a certain distance. First, to formulate a model, we developed an event-oriented modelling strategy for unknown environment perception, in which the effect is quantified by uncertainty, taking into account the sensor's detection capability, the detection interval, and the monitoring weight. A mobile robot scheme utilizing LiDAR in an integrative approach was created, and experiments were carried out to address the high equipment budget of Simultaneous Localization and Mapping (SLAM) for robotic systems. To map an unfamiliar location using the Robot Operating System (ROS), the 3D visualization tool RVIZ was utilized, and the GMapping software package was used for SLAM. The experimental results suggest that the mobile robot design pattern is viable for producing a high-precision map while lowering the cost of the mobile robot SLAM hardware. From a decision-making standpoint, we built a hybrid HSAStar (Hybrid SLAM & A*) algorithm for path planning based on the event-oriented modelling, allowing a UGV to continually monitor the perspectives of a path. The simulation results and analyses show that the proposed strategy is feasible and superior. The proposed hyb SLAM-A Star-APP method provides 34.95%, 27.38%, 33.21% and 29.68% lower execution time, and 26.36%, 29.64% and 29.67% lower map duration, compared with existing methods such as ACO-APF-APP, APFA-APP, GWO-APP and PSO-APP. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
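The A* component of the hybrid planner is the textbook algorithm on an occupancy grid; a self-contained sketch standing in for that component only, with the SLAM coupling and event-oriented costs omitted:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns the cell path from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]   # (f, g, node, parent)
    came, gbest = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came:
            continue                           # already expanded with lower f
        came[node] = parent
        if node == goal:                       # reconstruct path via parents
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < gbest.get((nr, nc), float("inf")):
                    gbest[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None
```

In the framework described above, the grid would come from the GMapping occupancy map, and the edge cost would be augmented with the event-oriented monitoring uncertainty.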
48. A Robust Vehicle Detection Model for LiDAR Sensor Using Simulation Data and Transfer Learning Methods.
- Author
-
Lakshmanan, Kayal, Roach, Matt, Giannetti, Cinzia, Bhoite, Shubham, George, David, Mortensen, Tim, Manduhu, Manduhu, Heravi, Behzad, Kariyawasam, Sharadha, and Xie, Xianghua
- Subjects
VEHICLE models ,LIDAR ,DOPPLER lidar ,DETECTORS ,AUTOMOBILE parking - Abstract
Vehicle detection in parking areas provides the spatial and temporal utilisation of parking spaces. Parking observations are typically performed manually, limiting the temporal resolution due to the high labour cost. This paper uses simulated data and transfer learning to build a robust real-world model for vehicle detection and classification from single-beam LiDAR in a roadside parking scenario. The paper presents a synthetically augmented transfer learning approach for LiDAR-based vehicle detection and the implementation of synthetic LiDAR data. The synthetically augmented transfer learning method was used to supplement the small real-world data set and allow the development of data-handling techniques; it also increases the robustness and overall accuracy of the model. Experiments show that the method can be used for fast deployment of a vehicle detection model using a LiDAR sensor. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
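Closing the gap between ideal simulated LiDAR and a real sensor usually means degrading the synthetic scans before training; a hypothetical sketch of such a step (the noise level and dropout rate are assumptions, not the paper's values):

```python
import random

def degrade_synthetic_scan(ranges, noise_sigma=0.02, dropout=0.1, seed=0):
    """Perturb ideal simulated LiDAR ranges toward real-sensor behaviour:
    additive Gaussian range noise plus random point dropout (missed returns).
    A fixed seed keeps the augmentation reproducible across training runs."""
    rng = random.Random(seed)
    out = []
    for r in ranges:
        if rng.random() < dropout:
            continue                      # simulate a dropped return
        out.append(r + rng.gauss(0.0, noise_sigma))
    return out
```

The degraded synthetic scans would then pre-train the detector, with the small real-world set used for fine-tuning, in the spirit of the transfer learning approach described above.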
49. STATISTICAL ANALYSIS OF JAYWALKING CONFLICTS BY A LIDAR SENSOR.
- Author
-
ANSARIYAR, Alireza and JEIHANI, Mansoureh
- Subjects
PEDESTRIANS ,LIDAR ,OPTICAL radar ,REMOTE sensing ,DETECTORS ,SIGNALIZED intersections - Abstract
The light detection and ranging (LiDAR) sensor is a remote sensing technology that can be used to monitor pedestrians who cross an intersection outside of a designated crosswalk or crossing area, which is a key safety application of LiDAR sensors at signalized intersections. Accordingly, a LiDAR sensor was installed at the Hillen Rd - E 33rd St intersection in Baltimore city to collect real-time jaywalking traffic data. In order to propose safety improvements for pedestrians, among the most vulnerable road users, the paper investigates the reasons for jaywalking and its potential to increase the frequency and severity of vehicle-pedestrian conflicts. In a three-month interval from December 2022 to February 2023, a total of 585 jaywalkers were detected. By developing a generalized linear regression model and using K-means clustering, the independent variables highly correlated with the frequency of jaywalking were identified, including the speed of jaywalkers, the average PET of vehicle-pedestrian conflicts, the frequency of vehicle-pedestrian conflicts, and the weather condition. The volume of vehicles and pedestrians and road infrastructure characteristics such as medians, building entrances, vegetation on medians, and bus/taxi stops were investigated, and the results showed that as the frequency of jaywalking increases, vehicle-pedestrian conflicts occur more frequently and with greater severity. In addition, jaywalking speed increases the likelihood of severe vehicle-pedestrian conflicts. Also, 397 pedestrians (68% of the total) jaywalked during cloudy and rainy days, making weather a significant factor in the increase in jaywalking. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
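The correlation screening behind the variable selection can be illustrated with a plain Pearson coefficient; this is a sketch of the screening idea only, not the paper's generalized linear regression model:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, e.g. between hourly jaywalking
    counts and vehicle-pedestrian conflict counts (illustrative pairing)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Variables with high absolute correlation to jaywalking frequency would then be carried into the regression model as candidate predictors.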
50. The Design and Development of a Foot-Detection Approach Based on Seven-Foot Dimensions: A Case Study of a Virtual Try-On Shoe System Using Augmented Reality Techniques.
- Author
-
Kaewrat, Charlee, Boonbrahm, Poonpong, and Sahoh, Bukhoree
- Subjects
FOOT ,AUGMENTED reality ,OPTICAL radar ,LIDAR ,FOOT injuries ,DIABETIC foot ,SHOES - Abstract
Unsuitable shoe shapes and sizes are a critical cause of unhealthy feet and may severely contribute to chronic injuries such as foot ulcers in susceptible people (e.g., diabetes patients); they therefore require accurate measurements in the manner of expert-based procedures. However, manually measuring such accurate shapes and sizes is labor-intensive, time-consuming, and impractical for a real-time system. This research proposes a foot-detection approach using expert-like measurements to address this concern. It combines the seven-foot-dimensions model and the light detection and ranging sensor to encode foot shapes and sizes and detect the dimension surfaces. Graph-based algorithms are developed to present the seven foot dimensions and visualize the shoe's model based on the augmented reality (AR) technique. The results show that our approach can detect shapes and sizes more effectively than the traditional approach, helps the system imitate expert-like measurements accurately, and can be employed in intelligent applications for foot measurement in susceptible people. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
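Two of the seven foot dimensions (length and ball width) reduce to extents of the scanned outline; a toy stand-in for the paper's graph-based measurement, assuming the scan has already been aligned with the foot's long axis:

```python
def foot_extents(points):
    """Foot length and width (same units as the input, e.g. cm) as
    axis-aligned extents of 2D outline points from the depth sensor.
    Assumes the x axis runs heel-to-toe after alignment."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return max(xs) - min(xs), max(ys) - min(ys)
```

The remaining dimensions (instep height, heel width, and so on) would need the full 3D surface rather than a flat outline.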