306 results for "Laser radar"
Search Results
2. Mapping of Pavement Conditions Using Smartphone/Tablet LiDAR Case Study: Sensor Performance Comparison [Research Brief]
- Abstract
This project explores the application of the Apple iPad Pro and Apple iPhone, both equipped with LiDAR sensors, using free apps that employ the sensor in a more traditional geomatics engineering workflow. The approach relies heavily on the IMU (inertial measurement unit), the integrated camera, and the single-frequency GNSS (global navigation satellite system) receiver to position the device and control the resulting data sets. A terrestrial LiDAR scanner, the Leica P20, is used to produce the base surface model for comparison.
- Published
- 2024
3. Mapping of Pavement Conditions Using Smartphone/Tablet LiDAR Case Study: Sensor Performance Comparison
- Abstract
Poor road conditions affect millions of drivers, and assessing the condition of paved surfaces is a critical step towards repairing them. This project explores the feasibility of using the Apple iPad Pro LiDAR sensor as a cost-effective tool for assessing the damage and condition of paved surfaces. Our research aims to provide accurate and precise measurements using readily available consumer devices and compare the results to state-of-the-art equipment. This investigation involved visual inspection, identification, and classification of pavement distresses, followed by a comparison of the iPad and iPhone LiDAR data with a survey-grade terrestrial laser scanner. The project revealed several limitations of the iPad Pro-based LiDAR approach. The level of detail captured in the scans was relatively low, with a best-case resolution of 1 cm and an inability to detect smaller cracks and shallow potholes. Longer scans (in terms of both time and distance) led to geometric anomalies in the surface models. Colorized scans provided some visual contrast, aiding in the identification of damage, particularly on moderately damaged concrete surfaces. The potential sources of error were identified, including the performance of the Inertial Measurement Unit (IMU), the limitations of the LiDAR sensor itself, and the opaque nature of onboard data processing within the 3D Scanner App. Suggestions for improvement included the use of gimbal stabilizers to enhance scan quality and the exploration of more intensive PC-based processing for raw data analysis. Hardware advancements by Apple and software enhancements by app developers were also highlighted as potential areas for future improvement. While the project revealed limitations and challenges, the authors acknowledge the possibility of future hardware upgrades, augmented reality advancements, and improvements in sensor accuracy and processing. However, based on this project’s findings, the iPad Pro LiDAR approach currently falls short.
- Published
- 2024
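The comparison described in the abstract above amounts to differencing a consumer-device scan against a reference terrestrial-laser-scanner surface on a common grid. A minimal NumPy sketch of that idea follows; the function names, the 1 cm cell size, and the input arrays (`tls_pts`, `ipad_pts`, assumed to be co-registered (N, 3) arrays in metres) are illustrative assumptions, not the report's actual processing chain.

```python
import numpy as np

def grid_mean_elevation(points, cell=0.01, origin=None):
    """Average z per (x, y) cell; points is an (N, 3) array in metres."""
    if origin is None:
        origin = points[:, 0].min(), points[:, 1].min()
    ix = ((points[:, 0] - origin[0]) / cell).astype(int)
    iy = ((points[:, 1] - origin[1]) / cell).astype(int)
    grid = {}
    for i, j, z in zip(ix, iy, points[:, 2]):
        grid.setdefault((i, j), []).append(z)
    return {k: float(np.mean(v)) for k, v in grid.items()}, origin

def compare_surfaces(tls_pts, ipad_pts, cell=0.01):
    """RMSE of elevation differences over cells seen by both scanners."""
    ref, origin = grid_mean_elevation(tls_pts, cell)
    test, _ = grid_mean_elevation(ipad_pts, cell, origin=origin)
    common = test.keys() & ref.keys()
    dz = np.array([test[k] - ref[k] for k in common])
    return float(np.sqrt(np.mean(dz ** 2)))
```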
4. Traffic Sign Extraction From Mobile LiDAR Point Cloud
- Abstract
The extraction of traffic signs from Mobile Light Detection and Ranging (LiDAR) point cloud data has become a focal point in transportation research due to the increasing integration of LiDAR technologies. LiDAR, a remote sensing technology, captures detailed three-dimensional point cloud data, offering a comprehensive view of the surrounding environment. Mobile LiDAR systems mounted on vehicles enable efficient data collection, particularly for large-scale road networks. This study aims to develop and refine techniques for extracting traffic signs from Mobile LiDAR point cloud data, essential for enhancing road safety, navigation systems, and intelligent transportation solutions. By leveraging LiDAR technology, new possibilities for automating traffic sign recognition and mapping emerge. The research focuses on detecting traffic signs using Mobile LiDAR point cloud data, employing an intensity-based sign extraction method to identify traffic signs, traffic signals, and other retro-reflective objects. The workflow involves managing LAS-format LiDAR datasets, including tasks such as merging/splitting, gridding, and detecting high-intensity features. Identified signs are visualized in Google Earth Pro, facilitating their display in Geographic Information Systems (GIS). Furthermore, the study explores point density analysis, establishing connections with potential grid resolutions for additional extraction or analysis, such as road condition assessments or crack detection.
- Published
- 2024
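A minimal sketch of the intensity-based extraction idea in the abstract above: retro-reflective objects return unusually bright pulses, so thresholding intensity and keeping dense plan-view clusters yields candidate sign locations. The threshold choice, cell size, and input arrays are assumptions for illustration, not the study's actual parameters.

```python
import numpy as np

def extract_retroreflective(points, intensity, percentile=99.5, cell=0.5, min_pts=10):
    """Flag clusters of unusually bright LiDAR returns as candidate signs.

    points: (N, 3) float array; intensity: (N,) array from the LAS records.
    A global percentile threshold is a simplification; per-range intensity
    normalization would be closer to production workflows.
    """
    thr = np.percentile(intensity, percentile)
    bright = points[intensity >= thr]
    # Grid the bright points in plan view and keep only dense cells.
    keys = np.floor(bright[:, :2] / cell).astype(int)
    uniq, counts = np.unique(keys, axis=0, return_counts=True)
    centers = (uniq[counts >= min_pts] + 0.5) * cell
    return centers  # (M, 2) candidate sign locations in map coordinates
```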
5. Enhancing Vehicle Sensing for Traffic Safety and Mobility Performance Improvements Using Roadside LiDAR Sensor Data
- Published
- 2024
6. Incorporating Use Inspired Design in Providing Safe Transportation Infrastructure for RITI Communities
- Abstract
In this study, we focus on automating road marking extraction from the HDOT MLS point cloud database, managed by Mandli, a company specializing in highway data collection, including LiDAR, that has cooperated with various Departments of Transportation throughout the United States. Here, we focus on infrastructure elements related to non-motorized travel modes, supporting the ongoing Complete Streets efforts in Hawaii. The point cloud data include different colors that represent differences in elevation and intensity values. Based on a visual inspection, road markings can be observed within these point clouds. The long-term objective of this study is to develop a framework and approach for automating the detection of these infrastructure elements based on deep learning. For this project, a YOLOv5 (You Only Look Once version 5) image object detection model was trained with the HDOT point cloud data. YOLO is a family of deep learning models designed for fast object detection; version 5 is the latest published at the time of writing. The focus here is on markings for non-motorized traffic, such as crosswalks, bike lanes, and bike boxes. The same approach can be extended to other markings as well, which we plan to address in subsequent studies.
- Published
- 2024
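Training an image detector like YOLOv5 on point cloud data presupposes a rasterization step that turns each tile of the cloud into an image. The sketch below shows one common way to do that (a top-down intensity raster); it is an assumed preprocessing step, not HDOT's or Mandli's actual pipeline, and the resolution value is illustrative.

```python
import numpy as np

def rasterize_topdown(points, intensity, res=0.05):
    """Render a point cloud tile as an 8-bit top-down intensity image,
    the kind of input a YOLOv5-style detector can be trained on.

    points: (N, 3) array; intensity: (N,) array; res is metres per pixel.
    """
    x = ((points[:, 0] - points[:, 0].min()) / res).astype(int)
    y = ((points[:, 1] - points[:, 1].min()) / res).astype(int)
    img = np.zeros((y.max() + 1, x.max() + 1), dtype=np.float32)
    np.maximum.at(img, (y, x), intensity)        # keep the brightest return per pixel
    img *= 255.0 / max(float(img.max()), 1e-6)   # normalize to 8-bit range
    return img.astype(np.uint8)
```

Road markings, like signs, are retro-reflective, so they stand out in such intensity rasters, which is what makes 2D detectors applicable at all.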
7. Domain Adaptive LiDAR Point Cloud Segmentation via Density-Aware Self-Training
- Author
-
Xiao, Aoran, Huang, Jiaxing, Liu, Kangcheng, Guan, Dayan, Zhang, Xiaoqin, and Lu, Shijian
- Abstract
Domain adaptive LiDAR point cloud segmentation aims to learn a target segmentation model from labeled source point clouds and unlabelled target point clouds; it has recently attracted increasing attention due to the various challenges of point cloud annotation. However, its performance is still very constrained, as most existing studies do not adequately capture the data-specific characteristics of LiDAR point clouds. Inspired by the observation that the domain discrepancy of LiDAR point clouds is highly correlated with point density, we design a density-aware self-training (DAST) technique that introduces point density into the self-training framework for domain adaptive point cloud segmentation. DAST consists of two novel and complementary designs. The first is density-aware pseudo labelling, which introduces point density for accurate pseudo labelling of target data and effective self-supervised network retraining. The second is density-aware consistency regularization, which encourages the network to learn density-invariant representations by enforcing target predictions to be consistent across points of different densities. Extensive experiments over multiple large-scale public datasets show that DAST achieves superior domain adaptation performance as compared with the state-of-the-art.
- Published
- 2024
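The core of density-aware pseudo-labelling, as described above, is that confidence thresholds should not be global: far-range (sparse) points are systematically less confident and would otherwise receive no pseudo-labels. A sketch of that selection step follows; it is an illustration of the binning idea under assumed inputs (`points`, `probs` from a source-trained model), not the published DAST implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_aware_pseudo_labels(points, probs, k=16, n_bins=4, keep_frac=0.5):
    """Keep the most confident predictions within each point-density bin,
    so sparse (far-range) regions are not starved of pseudo-labels.

    points: (N, 3) target cloud; probs: (N, C) softmax scores.
    """
    d, _ = cKDTree(points).query(points, k=k + 1)  # first neighbor is the point itself
    density = 1.0 / (d[:, 1:].mean(axis=1) + 1e-9)
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = np.zeros(len(points), dtype=bool)
    edges = np.quantile(density, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    bins = np.digitize(density, edges)
    for b in range(n_bins):
        idx = np.where(bins == b)[0]
        if idx.size:
            thr = np.quantile(conf[idx], 1.0 - keep_frac)
            mask[idx[conf[idx] >= thr]] = True
    return labels, mask  # retrain on labels[mask] only
```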
8. TAIL: A Terrain-Aware Multi-Modal SLAM Dataset for Robot Locomotion in Deformable Granular Environments
- Author
-
Yao, Chen, Ge, Yangtao, Shi, Guowei, Wang, Zirui, Yang, Ningbo, Zhu, Zheng, Wei, Hexiang, Zhao, Yuntian, Wu, Jing, and Jia, Zhenzhong
- Abstract
Terrain-aware perception holds the potential to improve the robustness and accuracy of autonomous robot navigation in the wild, thereby facilitating effective off-road traversal. However, the lack of multi-modal perception across various motion patterns hinders Simultaneous Localization And Mapping (SLAM) solutions, especially when confronting non-geometric hazards in demanding landscapes. In this paper, we propose a Terrain-Aware multI-modaL (TAIL) dataset tailored to deformable and sandy terrains. It incorporates various types of robotic proprioception and distinct ground interactions for the unique challenges and benchmarking of multi-sensor fusion SLAM. The versatile sensor suite comprises stereo frame cameras, multiple ground-pointing RGB-D cameras, a rotating 3D LiDAR, an IMU, and an RTK device. This ensemble is hardware-synchronized, well-calibrated, and self-contained. Utilizing both wheeled and quadrupedal locomotion, we efficiently collect comprehensive sequences to capture rich unstructured scenarios spanning a spectrum of scope, terrain interactions, scene changes, ground-level properties, and dynamic robot characteristics. We benchmark several state-of-the-art SLAM methods against ground truth and provide performance validations. Corresponding challenges and limitations are also reported. All associated resources are accessible upon request at https://tailrobot.github.io/.
- Published
- 2024
9. LIV-GaussMap: LiDAR-Inertial-Visual Fusion for Real-time 3D Radiance Field Map Rendering
- Author
-
Hong, Sheng, He, Junjie, Zheng, Xinhu, Zheng, Chunran, and Shen, Shaojie
- Abstract
We introduce an integrated, precise LiDAR-Inertial-Visual (LIV) multimodal sensor-fused mapping system that builds on differentiable Gaussians to improve mapping fidelity, quality, and structural accuracy. Notably, this is also a novel form of tightly coupled map for LiDAR-visual-inertial sensor fusion. The system leverages the complementary characteristics of LiDAR and visual data to capture the geometric structures of large-scale 3D scenes and restore their visual surface information with high fidelity. The initialization of the scene's surface Gaussians and of the sensor poses for each frame is obtained using a LiDAR-inertial system with size-adaptive voxels. The Gaussians are then refined using visually derived photometric gradients to optimize their quality and density. Our method is compatible with various types of LiDAR, including solid-state and mechanical LiDAR, supporting both repetitive and non-repetitive scanning modes. It bolsters structure construction through LiDAR and facilitates real-time generation of photorealistic renderings across diverse LIV datasets. The system shows notable resilience and versatility in generating real-time photorealistic scenes, potentially for digital twins and virtual reality, while also holding potential applicability in real-time SLAM and robotics. We release our software, hardware, and self-collected datasets on GitHub to benefit the community.
- Published
- 2024
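Initializing surface Gaussians from a LiDAR scan, as the abstract describes, can be pictured as fitting one anisotropic Gaussian per occupied voxel. The sketch below shows that seeding step under assumed fixed-size voxels (the paper's voxels are size-adaptive) and assumed `points`/`colors` inputs; it is not the released LIV-GaussMap code.

```python
import numpy as np

def init_gaussians_from_voxels(points, colors, voxel=0.4, min_pts=5):
    """Fit one Gaussian (mean + covariance + color) per occupied voxel,
    a common way to seed a radiance-field map from a registered scan."""
    keys = np.floor(points / voxel).astype(int)
    order = np.lexsort(keys.T)                       # make equal voxels contiguous
    keys, points, colors = keys[order], points[order], colors[order]
    _, starts = np.unique(keys, axis=0, return_index=True)
    starts = np.sort(starts)
    gaussians = []
    for s, e in zip(starts, list(starts[1:]) + [len(points)]):
        if e - s < min_pts:
            continue
        pts = points[s:e]
        gaussians.append({
            "mean": pts.mean(axis=0),
            "cov": np.cov(pts.T) + 1e-6 * np.eye(3),  # regularize thin surfels
            "rgb": colors[s:e].mean(axis=0),
        })
    return gaussians
```

Photometric refinement would then treat each mean and covariance as an optimizable parameter, which is where the differentiable-Gaussian machinery comes in.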
10. 3D-OutDet : A Fast and Memory Efficient Outlier Detector for 3D LiDAR Point Clouds in Adverse Weather
- Author
-
Raisuddin, Abu Mohammed, Cortinhal, Tiago, Holmblad, Jesper, and Aksoy, Eren Erdal
- Abstract
Adverse weather conditions such as snow, rain, and fog are natural phenomena that can impair the performance of the perception algorithms in autonomous vehicles. Although LiDARs provide accurate and reliable scans of the surroundings, their output can be substantially degraded by precipitation (e.g., snow particles), leading to an undesired effect on downstream perception tasks. Several studies have addressed this effect by filtering out precipitation outliers; however, these approaches have large memory consumption and long execution times, which are undesirable for onboard applications. To that end, we introduce a novel outlier detector for 3D LiDAR point clouds captured under adverse weather conditions. Our proposed detector, 3D-OutDet, is based on a novel convolution operation that processes nearest neighbors only, allowing the model to capture the most relevant points. This reduces the number of layers, resulting in a model with a low memory footprint and fast execution time, while producing competitive performance compared to state-of-the-art models. We conduct extensive experiments on three different datasets (WADS, SnowyKITTI, and SemanticSpray) and show that, with a sacrifice of 0.16% mIOU performance, our model reduces memory consumption by 99.92%, the number of operations by 96.87%, and execution time by 82.84% per point cloud on the real-scanned WADS dataset. Our experimental evaluations also show that the mIOU of the downstream semantic segmentation task on WADS can be improved by up to 5.08% after applying our proposed outlier detector. We release our source code, supplementary material, and videos at https://sporsho.github.io/3DOutDet.
- Published
- 2024
- Full Text
- View/download PDF
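For intuition on why nearest-neighbor processing catches precipitation: airborne snow and rain returns are spatially isolated, so their k-NN distances are large. The sketch below is the classical statistical-outlier-removal baseline that exploits this, shown here for contrast; it is not the learned 3D-OutDet filter, and the parameter values are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_weather_outliers(points, k=4, std_ratio=1.0):
    """Drop points whose mean k-NN distance is unusually large.

    points: (N, 3) LiDAR cloud. Returns (cleaned cloud, outlier mask).
    """
    d, _ = cKDTree(points).query(points, k=k + 1)  # first neighbor is the point itself
    mean_d = d[:, 1:].mean(axis=1)
    thr = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= thr], mean_d > thr
```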
11. A Statistical and Machine Learning Approach to Assess Contextual Complexity of the Driving Environment Using Autonomous Vehicle Data— Technology Transfer Activities
- Published
- 2024
12. Assessment of Contextual Complexity and Risk Using Unsupervised Clustering Approaches with Dynamic Traffic Condition Data Obtained from Autonomous Vehicles
- Published
- 2024
13. Securing Deep Learning Against Adversarial Attacks for Connected and Automated Vehicles
- Published
- 2024
14. Rapid and Accurate Assessment of Road Damage by Integrating Data from Mobile Camera Systems (MCS) and Mobile LiDAR Systems (MLS) [supporting dataset]
- Published
- 2024
15. Develop a Methodology for Pavement Drainage System Rating: Research Project Capsule [24–2P]
- Published
- 2024
16. Rapid and Accurate Assessment of Road Damage by Integrating Data from Mobile Camera Systems (MCS) and Mobile LiDAR Systems (MLS)
- Published
- 2024
17. Culvert/Storm Drain Evaluation Technologies
- Published
- 2024
18. DCL-SLAM: A Distributed Collaborative LiDAR SLAM Framework for a Robotic Swarm
- Author
-
Zhong, Shipeng, Qi, Yuhua, Chen, Zhiqiang, Wu, Jin, Chen, Hongbo, and Liu, Ming
- Abstract
To execute collaborative tasks in unknown environments, a robotic swarm must establish a global reference frame and locate itself in a shared understanding of the environment. However, this faces many challenges in real-world scenarios, such as absent prior information about the environment and poor communication among team members. This work presents DCL-SLAM, a front-end-agnostic, fully distributed collaborative LiDAR SLAM framework for co-localization in an unknown environment with low information exchange. Based on peer-to-peer communication, DCL-SLAM adopts the lightweight LiDAR-Iris descriptor for place recognition and does not require full team connectivity. DCL-SLAM includes three main parts: a replaceable single-robot front-end LiDAR odometry; a distributed loop closure module that detects overlaps between robots; and a distributed back-end module that runs a distributed pose graph optimizer combined with rejection of spurious loop measurements. We integrate the proposed framework with diverse open-source LiDAR odometry to show its versatility. The proposed system is extensively evaluated on benchmarking datasets and in field experiments over various scales and environments. Experimental results show that DCL-SLAM achieves higher accuracy and lower bandwidth than other state-of-the-art multi-robot LiDAR SLAM systems. The source code and video demonstration are available at https://github.com/PengYu-Team/DCL-SLAM.
- Published
- 2024
19. Applying UAS LiDAR for Developing Small Project Terrain Models
- Abstract
The work described in this report assessed the accuracy of using unmanned aerial systems (UAS) LiDAR on small bridge replacement projects, compared the accuracy of UAS LiDAR and conventional surveying techniques (global navigation satellite system real time kinematics and total station), and discussed the cost savings of different surveying methods. The study considered five small bridge projects where UAS LiDAR was flown, and conventional surveying checkpoints were measured in the LiDAR survey area. The results indicate that for hard surfaces, UAS LiDAR is generally accurate to within -1.0 inch to +1.0 inch, with variations observed across the different sites. Soft surfaces, particularly grass, exhibited LiDAR overestimation ranging from 0.0 to +2.0 inches and up to +3.0 inches at the Humnoke site. Tall grass and tree checkpoints demonstrated larger errors, with variations among the sites. Overall, the root mean squared errors for the different surface types ranged from 0.5 to 7.0 inches, with asphalt having the lowest error and trees having the highest. The study concludes that UAS LiDAR provides good accuracy for hard surfaces, with expected larger errors for soft surfaces, and offers cost benefits over alternative surveying methods. Comparative cost analyses revealed that UAS LiDAR is approximately $1,195.41 less expensive per project than helicopter LiDAR and $10,539.18 less expensive per bridge project compared to conventional surveys, resulting in a 20 and 25 percent cost reduction, respectively. Despite having slightly less accuracy for soft surfaces, the cost effectiveness of UAS LiDAR makes it a favorable choice for small-area bridge projects.
- Published
- 2023
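The report's accuracy comparison reduces to computing RMSE and bias of checkpoint residuals per surface class. A worked example of that arithmetic follows; the residual values are made up for illustration and are not the report's data.

```python
import numpy as np

# Hypothetical checkpoint residuals (LiDAR minus survey elevation, inches),
# grouped by surface type in the style of the report's evaluation.
residuals = {
    "asphalt":    np.array([-0.4, 0.2, 0.6, -0.1]),
    "grass":      np.array([1.2, 0.8, 2.0, 1.5]),
    "tall_grass": np.array([3.5, -2.8, 4.1, 5.0]),
}

for surface, dz in residuals.items():
    rmse = np.sqrt(np.mean(dz ** 2))   # spread of errors
    bias = dz.mean()                   # systematic over/underestimation
    print(f"{surface:10s} RMSE = {rmse:4.1f} in, bias = {bias:+4.1f} in")
```

Note how grass shows a positive bias (LiDAR reads high on vegetation) while asphalt is nearly unbiased, which matches the qualitative pattern the abstract reports.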
20. Prototyp av en intelligent transportvagn
- Author
-
Rasooly, Mikael
- Abstract
This report presents a prototype of an electric intelligent transport carrier designed in a project at the Department of Applied Physics and Electronics at Umeå University. The project was carried out to demonstrate that it is possible to build a semi-autonomous transport carrier that can follow behind and beside a person. In the longer term, the concept is intended to streamline companies' operations by enabling automatic transport of goods. The prototype used a small electric car equipped with a Raspberry Pi 3, a laser radar, and an inertial measurement unit. Drivers and algorithms for the components and the car's motor control board were developed and implemented in the CODESYS development environment, and a user interface was created in CODESYS to enable remote control of and communication with the carrier. The result is an electric carrier that can find and follow a person from behind and from the side. Through the user interface, the carrier can be remotely controlled with functions such as manual steering, switching between manual and autonomous control, switching between following behind or beside a person, performing a transition in which the carrier moves from behind to beside a person, and calibrating the steering angle. The interface displays the carrier's speed, steering angle, collision warning lamps, distance information about the person being followed, data from the inertial measurement unit, and a 2D grid map of the carrier's surroundings built from laser radar data.
The concept has been validated through four test scenarios: following behind a person indoors, following beside a person, transitioning from behind to beside a person, and following behind a person outdoors. The project shows that it is possible to build a prototype of an electric intelligent transport carrier that can follow a person both indoors and outdoors, and that there is an opportunity for companies to invest in intelligent carriers.
- Published
- 2023
22. SLICT : Multi-Input Multi-Scale Surfel-Based Lidar-Inertial Continuous-Time Odometry and Mapping
- Author
-
Nguyen, Thien-Minh, Duberg, Daniel, Jensfelt, Patric, Yuan, Shenghai, and Xie, Lihua
- Abstract
While feature association to a global map has significant benefits, to keep the computations from growing exponentially, most lidar-based odometry and mapping methods opt to associate features with local maps at one voxel scale. Taking advantage of the fact that surfels (surface elements) at different voxel scales can be organized in a tree-like structure, we propose an octree-based global map of multi-scale surfels that can be updated incrementally. This alleviates the need for repeatedly recalculating, for example, a k-d tree of the whole map. The system can also take input from a single sensor or several sensors, reinforcing robustness in degenerate cases. We also propose a point-to-surfel (PTS) association scheme, continuous-time optimization on PTS and IMU preintegration factors, along with loop closure and bundle adjustment, making a complete framework for Lidar-Inertial continuous-time odometry and mapping. Experiments on public and in-house datasets demonstrate the advantages of our system compared to other state-of-the-art methods.
- Published
- 2023
- Full Text
- View/download PDF
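A surfel, as used above, is just a centroid plus a normal fitted to the points in a voxel; the multi-scale map repeats this at several voxel sizes. The sketch below illustrates that fitting under assumed parameters (scales, planarity threshold); it is a readable brute-force illustration, not SLICT's incremental octree.

```python
import numpy as np

def fit_surfel(pts):
    """Centroid, normal, and flatness of a voxel's points via PCA."""
    mean = pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov((pts - mean).T))  # ascending eigenvalues
    flatness = evals[0] / max(evals[2], 1e-12)             # small => planar
    return mean, evecs[:, 0], flatness                     # normal = smallest axis

def multiscale_surfels(points, scales=(0.2, 0.4, 0.8), min_pts=6, max_flatness=0.05):
    """Fit surfels at several voxel scales, keeping only planar cells."""
    out = []
    for s in scales:
        keys = np.floor(points / s).astype(int)
        for key in np.unique(keys, axis=0):
            pts = points[(keys == key).all(axis=1)]
            if len(pts) >= min_pts:
                mean, normal, flat = fit_surfel(pts)
                if flat < max_flatness:
                    out.append((s, mean, normal))
    return out
```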
23. LCE-Calib: Automatic LiDAR-Frame/Event Camera Extrinsic Calibration With a Globally Optimal Solution
- Author
-
Jiao, Jianhao, Chen, Feiyi, Wei, Hexiang, Wu, Jin, and Liu, Ming
- Abstract
The combination of light detection and ranging (LiDAR) sensors and cameras enables a mobile robot to perceive environments with multimodal data, a key factor in achieving robust perception. Traditional frame cameras are sensitive to changing illumination conditions, motivating us to introduce novel event cameras to make LiDAR-camera fusion more complete and robust. However, to jointly exploit these sensors, the challenging extrinsic calibration problem must be addressed. This article proposes an automatic checkerboard-based approach to calibrate the extrinsics between a LiDAR and a frame/event camera, with the following four contributions: 1) we present an automatic feature extraction and checkerboard tracking method for LiDAR point clouds; 2) we reconstruct realistic frame images from event streams, applying traditional corner detectors to event cameras; 3) we propose an initialization-refinement procedure to estimate extrinsics using point-to-plane and point-to-line constraints in a coarse-to-fine manner; 4) we introduce a unified and globally optimal solution to the two optimization problems in calibration. Our approach has been validated with extensive experiments on 19 simulated and real-world datasets and outperforms the state of the art.
- Published
- 2023
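The point-to-plane constraint mentioned in contribution 3) penalizes the signed distance between transformed LiDAR points and the checkerboard plane seen by the camera. A minimal least-squares sketch of that residual follows; the inputs (`pts_lidar`, `plane_n`, `plane_d`) are hypothetical correspondences and the solver setup is an assumption, not the paper's globally optimal method.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_to_plane_residuals(x, pts_lidar, plane_n, plane_d):
    """x = [rx, ry, rz, tx, ty, tz]; each plane satisfies n . p + d = 0
    in the camera frame (e.g., checkerboard planes seen by the camera)."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    p_cam = pts_lidar @ R.T + x[3:]
    return (p_cam * plane_n).sum(axis=1) + plane_d  # signed point-plane distances

# Hypothetical usage, refining from a rough initial guess:
# sol = least_squares(point_to_plane_residuals, x0=np.zeros(6),
#                     args=(pts_lidar, plane_n, plane_d))
# R_est = Rotation.from_rotvec(sol.x[:3]).as_matrix(); t_est = sol.x[3:]
```

A local solver like this depends on the initial guess, which is exactly the weakness the paper's globally optimal formulation is designed to remove.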
24. Sensor Degradation Detection Algorithm for Automated Driving Systems
- Author
-
Darab, Jonathan M. and Witcher, Christina J.
- Abstract
The project developed a sensor degradation detection algorithm for Automated Driving Systems (ADS). Weather, cyberattacks, and sensor malfunction can degrade sensor information, resulting in significant safety issues, such as leading the vehicle off the road or causing a sudden stop in the middle of an intersection. From the Virginia Tech Transportation Institute’s (VTTI’s) Naturalistic Driving Database (NDD), 100 events related to sensor perception were selected to establish baseline sensor performance. VTTI determined performance metrics using these events for comparison in simulation. A virtual framework was used to test degraded sensor states and the detection algorithm’s response. Old Dominion University developed the GPS model and collaborated with the Global Center for Automotive Performance Simulation (GCAPS) to develop the degradation detection algorithm utilizing the DeepPOSE algorithm. GCAPS created the virtual framework, developed the LiDAR and radar sensor models, and executed the simulations. The sensor degradation detection algorithm will aid ADS vehicles in decision making by identifying degraded sensor performance. The detection algorithm achieved 70% accuracy; additional training and adjustment are needed to reach the accuracy level required for vehicle system implementation. The processes of collecting sensor data, creating sensor models, and utilizing simulation for algorithm development are major outcomes of the research.
- Published
- 2023
25. You Only Label Once: 3D Box Adaptation from Point Cloud to Image with Semi-Supervised Learning
- Author
-
Shi, Jieqi, Li, Peiliang, Chen, Xiaozhi, and Shen, Shaojie
- Abstract
The image-based 3D object detection task expects that the predicted 3D bounding box has a “tightness” projection (also referred to as a cuboid) to facilitate 2D-based training, which fits the object contour well on the image while remaining reasonable in 3D space. These requirements bring significant challenges to the annotation. Projecting the Lidar-labeled 3D boxes to the image leads to non-trivial misalignment, while directly drawing a cuboid on the image cannot access the original 3D information. In this work, we propose a learning-based 3D box adaptation approach that automatically adjusts minimum parameters of the 360° Lidar 3D bounding box to fit the image appearance of panoramic cameras perfectly. With only a few 2D box annotations as guidance during the training phase, our network can produce accurate image-level cuboid annotations with 3D properties from Lidar boxes. We call our method “you only label once”, which means labeling the point cloud once and automatically adapting to all surrounding cameras. Our refinement balances accuracy and efficiency well and dramatically reduces the labeling effort for accurate cuboid annotation. Extensive experiments on the public Waymo and NuScenes datasets show that our method can produce human-level cuboid annotation on the image without manual adjustment and can accelerate monocular-3D training tasks.
- Published
- 2023
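The "non-trivial misalignment" the abstract starts from is easy to reproduce: project the eight corners of a Lidar box into the image and take the tight 2D bounds. The sketch below shows that naive initialization, which the paper's learned refinement then adjusts; the calibration inputs (`K`, `T_cam_lidar`) are assumed and the object is assumed to be in front of the camera.

```python
import numpy as np

def box_corners(center, size, yaw):
    """Eight corners of a yaw-oriented 3D box (length, width, height)."""
    l, w, h = size
    x = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * l / 2
    y = np.array([1, -1, -1, 1, 1, -1, -1, 1]) * w / 2
    z = np.array([-1, -1, -1, -1, 1, 1, 1, 1]) * h / 2
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return np.stack([x, y, z], axis=1) @ R.T + center

def project_to_cuboid_bounds(corners, K, T_cam_lidar):
    """Project Lidar-frame corners into the image; return tight 2D bounds.
    K: 3x3 intrinsics; T_cam_lidar: 4x4 extrinsics (assumed calibration)."""
    pts_cam = (np.c_[corners, np.ones(8)] @ T_cam_lidar.T)[:, :3]
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]      # perspective division (assumes z > 0)
    return uv.min(axis=0), uv.max(axis=0)  # (u_min, v_min), (u_max, v_max)
```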
26. Object-level Semantic and Velocity Feedback for Dynamic Occupancy Grids
- Author
-
Ministerio de Ciencia e Innovación (España), Comunidad de Madrid, Jiménez, Víctor [0000-0003-1197-0937], Godoy, Jorge [0000-0002-3132-5348], Villagrá, Jorge [0000-0002-3963-7952], and Artuñedo, Antonio [0000-0003-2161-9876]
- Abstract
LiDAR-based frameworks combining dynamic occupancy grids and object-level tracking are a popular approach to environment perception in autonomous driving applications. This paper presents a novel backchannel from the object-level module to the grid-level module that improves overall performance. This feedback enhances the grid representation through two new steps that allow semantic classification of the occupied space and improve the dynamic estimation. To this end, objects extracted from the grid are analyzed with respect to potential object classes and displacement. Class likelihoods are filtered over time at cell level using particles and a naive Bayesian classifier. The displacement information is computed taking semantic information into account and comparing objects in consecutive frames. It is then used to obtain velocity measurements that enhance the grid's dynamic estimation. In contrast to other approaches in the literature with similar objectives, this proposal does not rely on additional sensing technologies or neural networks. The evaluation is conducted with real sensor data in challenging urban scenarios.
- Published
- 2023
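The displacement-based feedback above boils down to associating objects across consecutive frames and converting the centroid shift into a velocity measurement. A minimal greedy-association sketch follows; the object representation (dicts with a `centroid` field) and the association gate are assumptions for illustration, not the paper's method.

```python
import numpy as np

def object_velocities(prev_objs, curr_objs, dt, max_jump=3.0):
    """Greedy nearest-centroid association between consecutive frames;
    each match yields one velocity measurement for grid feedback.

    Objects are dicts with a 'centroid' (2,) array (assumed format).
    """
    measurements = []
    used = set()
    for obj in curr_objs:
        dists = [np.linalg.norm(obj["centroid"] - p["centroid"])
                 if i not in used else np.inf
                 for i, p in enumerate(prev_objs)]
        if not dists:
            continue
        j = int(np.argmin(dists))
        if dists[j] < max_jump:          # gate out implausible jumps
            used.add(j)
            v = (obj["centroid"] - prev_objs[j]["centroid"]) / dt
            measurements.append((obj, v))
    return measurements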
27. Detecting darting out pedestrians with occlusion aware sensor fusion of radar and stereo camera
- Author
-
Palffy, A. (author), Kooij, J.F.P. (author), and Gavrila, D. (author)
- Abstract
Early and accurate detection of crossing pedestrians is crucial in automated driving in order to perform timely emergency manoeuvres. However, this is a difficult task in urban scenarios where pedestrians are often occluded (not visible) behind objects, e.g., other parked vehicles. We propose an occlusion-aware fusion of stereo camera and radar sensors to address scenarios with crossing pedestrians behind such parked vehicles. Our proposed method adapts both the expected rate and the properties of detections in different areas according to the visibility of the sensors. In our experiments on a real-world dataset, we show that the proposed occlusion-aware fusion of radar and stereo camera detects crossing pedestrians on average 0.26 seconds earlier than using the camera alone, and 0.15 seconds earlier than fusing the sensors without occlusion information. Our dataset, containing 501 relevant recordings of pedestrians behind vehicles, will be publicly available on our website for non-commercial, scientific use.
- Published
- 2023
- Full Text
- View/download PDF
28. EmPointMovSeg: Sparse Tensor-Based Moving-Object Segmentation in 3-D LiDAR Point Clouds for Autonomous Driving-Embedded System
- Author
-
He, Zhijian, Fan, Xueli, Peng, Yun, Shen, Zhaoyan, Jiao, Jianhao, and Liu, Ming
- Abstract
Object segmentation is a per-pixel label prediction task that provides context analysis for autonomous driving. Moving object segmentation (MOS) is a subbranch of object segmentation that separates the surrounding objects into two classes: dynamic and static. MOS is vital for safety-critical tasks in autonomous driving because dynamic objects are often a greater potential threat to a self-driving car than static ones. Current methods typically address the MOS problem as a mapping from category features to labels, which is not rational in practice; for example, a parked car should be considered static rather than assigned a moving-object category. There is little systematic theory to differentiate moving from non-moving characteristics in MOS. Furthermore, restricted by the limited resources of embedded systems, MOS is often performed off-line due to its huge computational requirements. An on-line, low-computational-cost MOS is an urgent demand for practical safety-critical missions where immediate reaction is compulsory. In this paper, we propose EmPointMovSeg, an efficient and practical 3D LiDAR MOS solution for autonomous driving. Leveraging the power of well-adapted autoregressive system identification (AR-SI) theory, EmPointMovSeg theoretically explains moving-object features in large-scale 3D LiDAR semantic segmentation. An end-to-end sparse-tensor-based CNN that balances segmentation accuracy and on-line processing ability is proposed. We conduct experiments on both representative dataset benchmarks and practical embedded systems. The evaluation results show the effectiveness and accuracy of our proposed solution, overcoming the bottleneck in on-line large-scale 3D LiDAR semantic segmentation.
- Published
- 2023
29. Bigdata Analytics and Artificial Intelligence for Smart Intersections [Summary]
- Published
- 2023
30. Development of Latitude/Longitude (and Route/Milepost) Positioning Traffic Management Cameras
- Published
- 2023
31. LIDAR Placement Optimization Using a Multi-Criteria Approach
- Published
- 2023
32. Investigation of AV Operational Issues Using Simulation Equipment
- Published
- 2023
33. Advancing Accelerated Testing Protocols for Safe and Reliable Deployment of Connected and Automated Vehicles Through Iterative Deployment in Physical and Digital Worlds [supporting dataset]
- Subjects
- California
- Abstract
As Automated Vehicles (AVs) diffuse through the transportation system, it is important to understand their safety performance. Although few AV-involved crashes have occurred on roads during testing, AVs pose new challenges and opportunities for improving safety. The challenges come from using complex automation technologies operating at high speeds to make lateral and longitudinal control decisions, increasing the chances of software and hardware failure. Are vehicles with lower or higher automation safe enough to drive on public roads, and more fundamentally, how do we assess their safety envelope? At the same time, there are opportunities to understand AV-involved crashes by leveraging newly available AV data. In this CSCRS project (reporting on Year 1 activities), we take steps toward developing testing procedures for connected and automated vehicles by using a novel software and physical deployment platform that allows rapid iterative development.
- Published
- 2022
34. A Novel Coding Architecture for Multi-Line LiDAR Point Clouds Based on Clustering and Convolutional LSTM Network
- Author
-
Sun, Xuebin, Wang, Sukai, and Liu, Ming
- Abstract
Light detection and ranging (LiDAR) plays an indispensable role in autonomous driving technologies, such as localization, map building, navigation and object avoidance. However, due to the vast amount of data, transmission and storage could become an important bottleneck. In this article, we propose a novel compression architecture for multi-line LiDAR point cloud sequences based on clustering and convolutional long short-term memory (LSTM) networks. LiDAR point clouds are structured, which provides an opportunity to convert the 3D data to a 2D array, represented as range images. Thus, we cast 3D point cloud compression as a range-image sequence compression problem. Inspired by the high efficiency video coding (HEVC) algorithm, we design a novel compression framework for LiDAR data that includes two main techniques: intra-prediction and inter-prediction. For intra-frames, inspired by the depth modeling modes (DMM) adopted in 3D-HEVC, we develop a clustering-based intra-prediction technique, which utilizes the spatial structure characteristics of point clouds to remove spatial redundancy. For inter-frames, we design a prediction network model using convolutional LSTM cells. The network model is capable of predicting future inter-frames using the encoded intra-frames, so temporal redundancy can be removed. Experiments on the KITTI dataset demonstrate that the proposed method achieves an impressive compression ratio (CR) of 4.10% at millimeter precision, which means the point clouds can be compressed to nearly 1/25 of their original size. Additionally, compared with the well-known octree, Google Draco, and MPEG TMC13 methods, our algorithm yields better performance in compression ratio.
- Published
- 2022
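The "structured" property the abstract exploits is that a spinning multi-line LiDAR sweeps a fixed pattern of elevation rings and azimuth steps, so each scan maps onto an H x W range image. A standard spherical-projection sketch follows; the FoV and image dimensions are sensor-specific assumptions (values typical of a 64-beam unit), not parameters from the paper.

```python
import numpy as np

def to_range_image(points, h=64, w=2048, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) spinning-LiDAR cloud to an H x W range image
    (rows = elevation rings, columns = azimuth angle)."""
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / np.maximum(r, 1e-9))
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fu - pitch) / (fu - fd) * h).clip(0, h - 1).astype(int)
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = r   # later points overwrite; keeping the nearest is also common
    return img
```

Once the cloud is a 2D image sequence, video-style intra/inter prediction, as in the paper, becomes directly applicable.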
35. Multi-class Road User Detection with 3+1D Radar in the View-of-Delft Dataset
- Author
-
Palffy, A. (author), Pool, E.A.I. (author), Baratam, Srimannarayana (author), Kooij, J.F.P. (author), and Gavrila, D. (author)
- Abstract
Next-generation automotive radars provide elevation data in addition to range, azimuth, and Doppler velocity. In this experimental study, we apply a state-of-the-art object detector (PointPillars), previously used for LiDAR 3D data, to such 3+1D radar data (where 1D refers to Doppler). In ablation studies, we first explore the benefits of the additional elevation information, together with those of Doppler, radar cross section, and temporal accumulation, in the context of multi-class road user detection. We subsequently compare object detection performance on the radar and LiDAR point clouds, object class-wise and as a function of distance. To facilitate our experimental study, we present the novel View-of-Delft (VoD) automotive dataset. It contains 8693 frames of synchronized and calibrated 64-layer LiDAR, (stereo) camera, and 3+1D radar data acquired in complex, urban traffic. It consists of 123106 3D bounding box annotations of both moving and static objects, including 26587 pedestrian, 10800 cyclist and 26949 car labels. Our results show that object detection on 64-layer LiDAR data still outperforms that on 3+1D radar data, but the addition of elevation information and integration of successive radar scans help close the gap. The VoD dataset is made freely available for scientific benchmarking.
- Published
- 2022
- Full Text
- View/download PDF
36. Situation-Aware Drivable Space Estimation for Automated Driving
- Author
-
Muñoz Sánchez, Manuel, Pogosov, Denis, Silvas, Emilia, Mocanu, Decebal C., Elfring, Jos, and van de Molengraft, M.J.G.
- Abstract
An automated vehicle (AV) must always have a correct representation of the drivable space to position itself accurately and operate safely. To determine the drivable space, current research focuses on single sources of information, either using pre-computed high-definition maps or mapping the environment online with sensors such as LiDARs or cameras. However, each of these information sources can fail, some are too costly, and maps can be outdated. In this work a new method for situation-aware drivable space (SDS) estimation combining multiple information sources is proposed, which is also suitable for AVs equipped with inexpensive sensors. Depending on the situation, semantic information about sensed objects is combined with domain knowledge to estimate the drivability of the space surrounding each object (e.g. a traffic light or another vehicle). These estimates are modeled as probabilistic graphs to account for the uncertainty of information sources, and an optimal spatial configuration of their elements is determined via graph-based simultaneous localization and mapping (SLAM). To investigate the robustness of SDS towards potentially unreliable sensors and maps, it has been tested in a simulation environment and on real-world data. Results on different use cases (e.g. straight roads, curved roads, and intersections) show considerable robustness towards unreliable inputs, and the recovered drivable space allows for accurate in-lane localization of the AV even in extreme cases where no prior knowledge of the road network is available.
- Published
- 2022
37. The Scanner of Heterogeneous Traffic Flow in Smart Cities by an Updating Model of Connected and Automated Vehicles
- Author
-
Chen, Dongliang, Huang, Hongyong, Zheng, Yuchao, Gawkowski, Piotr, Lv, Haibin, and Lv, Zhihan
- Abstract
Traditional traffic flow detection and calculation methods suffer from limited traffic scenes, high system costs, and low detection and computation efficiency. Therefore, in this paper we present an updating Connected and Automated Vehicles (CAVs) model as a scanner of heterogeneous traffic flow, which uses various sensors to detect the characteristics of traffic flow in several road traffic scenes. The model comprises the hardware platform and software algorithms of the CAV, together with an analysis of traffic flow detection and simulation using the Flow Project, where the driving of vehicles is mainly controlled by Reinforcement Learning (RL). Finally, the effectiveness of the proposed model and the corresponding swarm intelligence strategy is evaluated through simulation experiments. The results show that the traffic flow scanning, tracking, and data recording performed continuously by CAVs are effective. Increasing the penetration rate of CAVs in the overall traffic flow has a significant effect on vehicle detection and identification. In addition, the vehicle occlusion rate is independent of the CAV lane position in all cases. The complete street scanner is a new technology that realizes perception of the human settlement environment with the help of the Internet of Vehicles based on 5G communications and sensors. Although the experiment has some shortcomings, it still provides an experimental reference for the development of smart vehicles.
- Published
- 2022
- Full Text
- View/download PDF
38. Object-based Velocity Feedback for Dynamic Occupancy Grids
- Author
-
European Commission, Comunidad de Madrid, Ministerio de Ciencia e Innovación (España), Jiménez, Víctor [0000-0003-1197-0937], Godoy, Jorge [0000-0002-3132-5348], Artuñedo, Antonio [0000-0003-2161-9876], and Villagrá, Jorge [0000-0002-3963-7952]
- Abstract
Dynamic occupancy grids (DOGs) have attracted interest in recent years due to their ability to fuse information without explicit data association, to represent free space and arbitrary-shape objects, and to estimate obstacles’ dynamics. Different works have presented strategies with demonstrated good performance. Most of them rely on LiDAR sensors, and some have shown that including additional velocity measurements enhances the estimation. This work aims to show that velocity information can be inferred directly from object displacement. Thus, a strategy using velocity feedback and its inclusion in the DOG is presented. The qualitative and quantitative analysis of results obtained from real-data experiments shows very good performance, especially in dynamically changing situations.
- Published
- 2022
39. 3D-Printed Fluorescence Hyperspectral Lidar for Monitoring Tagged Insects
- Author
-
Manefjord, Hampus, Muller, Lauro, Li, Meng, Salvador, Jacobo, Blomqvist, Sofia, Runemark, Anna, Kirkeby, Carsten, Ignell, Rickard, Bood, Joakim, and Brydegaard, Mikkel
- Abstract
Insects play crucial roles in ecosystems, and how they disperse within their habitat has significant implications for our daily life. Examples include foraging ranges for pollinators, as well as the spread of disease vectors and pests. Despite technological advances with radio tags, isotopes, and genetic sequencing, insect dispersal and migration range remain challenging to study. The gold-standard method of mark-recapture is tedious and inefficient. This paper demonstrates the construction of a compact, inexpensive hyperspectral fluorescence lidar. The system is based on off-the-shelf components and 3D printing. After evaluating the performance of the instrument in the laboratory, we demonstrate efficient acquisition of range-resolved fluorescence spectra in situ. We present daytime remote ranging and fluorescent identification of auto-powder-tagged honey bees. We also showcase range-, temporally- and spectrally-resolved measurements of free-flying mosquitoes, which were tagged through feeding on fluorescent-dyed sugar water. We conclude that violet light can efficiently excite administered sugar meals imbibed by flying insects. Our field experiences provide realistic expectations of signal-to-noise levels, which can be used in future studies. The technique is generally applicable and can efficiently monitor several tagged insect groups in parallel for comparative ecological analysis. This technique opens up a range of ecological experiments that were previously unfeasible.
- Published
- 2022
42. TM3Loc: Tightly-Coupled Monocular Map Matching for High Precision Vehicle Localization
- Author
-
Wen, Tuopu, Jiang, Kun, Wijaya, Benny, Li, Hangyu, Yang, Mengmeng, and Yang, Diange
- Abstract
Vision-based map-matching with an HD map for high-precision vehicle localization has gained great attention for its low cost and ease of deployment. However, its localization performance is still unsatisfactory in accuracy and robustness in numerous real applications due to the sparsity and noise of the perceived HD map landmarks. This article proposes the tightly-coupled monocular map-matching localization algorithm (TM3Loc) for monocular vehicle localization. TM3Loc introduces semantic chamfer matching (SCM) to model the monocular map-matching problem and combines visual features with SCM in a tightly-coupled manner. By applying a sliding-window-based optimization technique, historical visual features and HD map constraints are also introduced, such that vehicle poses are estimated with an abundance of visual features and multi-frame HD map landmark features, rather than with single-frame HD map observations as in previous works. Experiments are conducted on a large-scale dataset totaling 15 km in length. The results show that TM3Loc is able to achieve high-precision localization using a low-cost monocular camera, largely exceeding the performance of previous state-of-the-art methods and thereby promoting the development of autonomous driving.
- Published
- 2022
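Chamfer matching, the building block of SCM above, scores a candidate pose by how close the projected HD-map landmarks fall to the landmarks perceived in the image; a distance transform makes this cheap to evaluate repeatedly. The sketch below illustrates the cost under assumed inputs (a binary lane-marking detection mask and projected landmark pixels); it is the generic chamfer cost, not the paper's full semantic formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_cost(detection_mask, projected_uv):
    """Mean distance from projected map-landmark pixels to the nearest
    detected landmark pixel under one candidate vehicle pose.

    detection_mask: (H, W) binary image of perceived lane markings.
    projected_uv: (M, 2) pixel coordinates of HD-map landmarks.
    """
    # EDT of the inverted mask: distance to the nearest detection pixel.
    dist = distance_transform_edt(~detection_mask.astype(bool))
    uv = np.round(projected_uv).astype(int)
    h, w = detection_mask.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return dist[uv[inside, 1], uv[inside, 0]].mean() if inside.any() else np.inf
```

In an optimizer, this cost (and its gradient through the precomputed distance image) is what ties the vehicle pose to the map, alongside the visual-feature factors in the sliding window.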
43. A novel coding scheme for large-scale point cloud sequences based on clustering and registration
- Author
-
Sun, Xuebin, Sun, Yuxiang, Zuo, Weixun, Cheng, Shing Shin, and Liu, Ming
- Abstract
Due to the huge volume of point cloud data, storing and transmitting it is currently difficult and expensive in autonomous driving. Learning from the high-efficiency video coding (HEVC) framework, we propose a novel compression scheme for large-scale point cloud sequences, in which several techniques have been developed to remove the spatial and temporal redundancy. The proposed strategy consists mainly of three parts: intracoding, intercoding, and residual data coding. For intracoding, inspired by the depth modeling modes (DMMs) in 3-D HEVC (3-D-HEVC), a cluster-based prediction method is proposed to remove the spatial redundancy. For intercoding, a point cloud registration algorithm is utilized to transform two adjacent point clouds into the same coordinate system; by calculating the residual map of their corresponding depth images, the temporal redundancy can be removed. Finally, the residual data are compressed by either lossless or lossy methods. Our approach can deal with multiple types of point cloud data, from simple to complex. The lossless method can compress the point cloud data to 3.63% of its original size by intracoding and 2.99% by intercoding without distance distortion. Experiments on the KITTI dataset also demonstrate that our method yields better performance compared with recent well-known methods.
- Published
- 2022
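The intercoding step registers adjacent frames and then encodes only what registration cannot explain. The sketch below reconstructs that idea with Open3D's ICP and a range-image residual; the correspondence threshold and the reuse of the projection helper from the earlier sketch are assumptions, not the authors' code.

```python
# Sketch of registration-based intercoding: align frame t onto frame t-1,
# then store only the depth-image residual. Illustrative, not the paper's code.
import numpy as np
import open3d as o3d

def inter_frame_residual(points_prev, points_curr, to_range_image):
    prev = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_prev))
    curr = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_curr))
    # Rigid registration of the current cloud onto the previous one.
    reg = o3d.pipelines.registration.registration_icp(
        curr, prev, max_correspondence_distance=1.0)
    curr.transform(reg.transformation)            # align in place
    aligned = np.asarray(curr.points)
    # Temporal redundancy removed: only the residual needs encoding.
    residual = to_range_image(aligned) - to_range_image(points_prev)
    return reg.transformation, residual
```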
44. Situation-Aware Drivable Space Estimation for Automated Driving
- Author
-
Muñoz Sánchez, Manuel, Pogosov, Denis, Silvas, Emilia, Mocanu, Decebal C., Elfring, Jos, and van de Molengraft, M.J.G.
- Abstract
An automated vehicle (AV) must always have a correct representation of the drivable space to position itself accurately and operate safely. To determine the drivable space, current research focuses on single sources of information, either using pre-computed high-definition maps or mapping the environment online with sensors such as LiDARs or cameras. However, each of these information sources can fail, some are too costly, and maps can be outdated. In this work, a new method for situation-aware drivable space (SDS) estimation that combines multiple information sources is proposed, which is also suitable for AVs equipped with inexpensive sensors. Depending on the situation, semantic information about sensed objects (e.g. a traffic light or another vehicle) is combined with domain knowledge to estimate the drivability of the space surrounding each object. These estimates are modeled as probabilistic graphs to account for the uncertainty of the information sources, and an optimal spatial configuration of their elements is determined via graph-based simultaneous localization and mapping (SLAM). To investigate the robustness of SDS against potentially unreliable sensors and maps, it has been tested in a simulation environment and on real-world data. Results on different use cases (e.g. straight roads, curved roads, and intersections) show considerable robustness to unreliable inputs, and the recovered drivable space allows accurate in-lane localization of the AV even in extreme cases where no prior knowledge of the road network is available. A simplified fusion sketch follows this entry.
- Published
- 2022
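The paper's machinery (probabilistic graphs optimized with graph SLAM) is more involved than any short snippet, so the sketch below stands in with only the generic uncertainty-aware fusion idea: combining per-source drivability grids in log-odds space with per-source reliability weights. All names and weights here are hypothetical.

```python
# Generic stand-in for uncertainty-aware fusion of drivability estimates:
# weighted log-odds combination of per-source probability grids.
import numpy as np

def fuse_drivability(grids, reliabilities, prior=0.5):
    """grids: list of (H, W) arrays with P(drivable) per cell, one per source.
    reliabilities: per-source weights in [0, 1] down-weighting noisy inputs."""
    fused = np.full(grids[0].shape, np.log(prior / (1 - prior)))
    for p, w in zip(grids, reliabilities):
        p = np.clip(p, 1e-6, 1 - 1e-6)          # avoid log(0)
        fused += w * np.log(p / (1 - p))        # weighted log-odds update
    return 1.0 / (1.0 + np.exp(-fused))         # back to probability
```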
45. Semantic Scene Completion using Local Deep Implicit Functions on LiDAR Data
- Author
-
Rist, C.B. (author), Emmerichs, David (author), Enzweiler, Markus (author), and Gavrila, D. (author)
- Abstract
Semantic scene completion is the task of jointly estimating the 3D geometry and semantics of objects and surfaces within a given extent. This is particularly challenging on real-world data that is sparse and occluded. We propose a scene segmentation network based on local Deep Implicit Functions as a novel learning-based method for scene completion. Unlike previous work on scene completion, our method produces a continuous scene representation that is not based on voxelization. We encode raw point clouds into a latent space locally and at multiple spatial resolutions. A global scene completion function is subsequently assembled from the localized function patches. We show that this continuous representation is suitable for encoding the geometric and semantic properties of extensive outdoor scenes without spatial discretization (thus avoiding the trade-off between the level of scene detail and the scene extent that can be covered). We train and evaluate our method on semantically annotated LiDAR scans from the Semantic KITTI dataset. Our experiments verify that our method generates a powerful representation that can be decoded into a dense 3D description of a given scene. The performance of our method surpasses the state of the art on the Semantic KITTI Scene Completion Benchmark in terms of geometric completion intersection-over-union (IoU). A minimal decoder sketch follows this entry.
- Published
- 2022
- Full Text
- View/download PDF
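At the heart of a local deep implicit function is a decoder that can be queried at continuous coordinates. The PyTorch sketch below shows such a query interface; the latent size, network depth, and output split into occupancy plus semantic logits are illustrative assumptions, not the architecture from the paper.

```python
# Minimal local-implicit-function decoder: latent patch code + continuous
# local coordinate -> occupancy and semantic logits. Sizes are assumed.
import torch
import torch.nn as nn

class LocalImplicitDecoder(nn.Module):
    def __init__(self, latent_dim=128, num_classes=20, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + num_classes),  # occupancy + class logits
        )

    def forward(self, latent, xyz_local):
        """latent: (B, latent_dim) patch codes; xyz_local: (B, 3) query
        points expressed in the local frame of their patch."""
        out = self.net(torch.cat([latent, xyz_local], dim=-1))
        return out[:, :1], out[:, 1:]            # occupancy, semantics
```

Because the decoder consumes continuous coordinates, the scene can be sampled at any resolution after training, which is what avoids the detail-versus-extent trade-off of voxel grids.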
46. Multi-class Road User Detection with 3+1D Radar in the View-of-Delft Dataset
- Author
-
Palffy, A. (author), Pool, E.A.I. (author), Baratam, Srimannarayana (author), Kooij, J.F.P. (author), and Gavrila, D. (author)
- Abstract
Next-generation automotive radars provide elevation data in addition to range, azimuth, and Doppler velocity. In this experimental study, we apply a state-of-the-art object detector (PointPillars), previously used for LiDAR 3D data, to such 3+1D radar data (where 1D refers to Doppler). In ablation studies, we first explore the benefits of the additional elevation information, together with those of Doppler, radar cross section, and temporal accumulation, in the context of multi-class road user detection. We subsequently compare object detection performance on the radar and LiDAR point clouds, per object class and as a function of distance. To facilitate our experimental study, we present the novel View-of-Delft (VoD) automotive dataset. It contains 8,693 frames of synchronized and calibrated 64-layer LiDAR, (stereo) camera, and 3+1D radar data acquired in complex urban traffic. It consists of 123,106 3D bounding box annotations of both moving and static objects, including 26,587 pedestrian, 10,800 cyclist, and 26,949 car labels. Our results show that object detection on 64-layer LiDAR data still outperforms that on 3+1D radar data, but the addition of elevation information and the integration of successive radar scans help close the gap. The VoD dataset is made freely available for scientific benchmarking. A pillar-grouping sketch follows this entry.
- Published
- 2022
- Full Text
- View/download PDF
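PointPillars-style detectors first group points into vertical pillars on an x-y grid before a 2D convolutional backbone takes over. The sketch below shows that grouping step extended with the radar channels ablated in the study (Doppler, radar cross section); the grid extents and pillar size are assumed values, not the study's configuration.

```python
# Pillar grouping for PointPillars-style detection on 3+1D radar points.
# Grid extents and 0.16 m pillar size are assumptions for illustration.
import numpy as np

def pillarize(points, x_range=(0.0, 51.2), y_range=(-25.6, 25.6), pillar=0.16):
    """points: (N, 5) columns = x, y, z, doppler, rcs. Returns a dict mapping
    (ix, iy) pillar indices to the feature rows of the points inside."""
    ix = ((points[:, 0] - x_range[0]) / pillar).astype(np.int32)
    iy = ((points[:, 1] - y_range[0]) / pillar).astype(np.int32)
    nx = int((x_range[1] - x_range[0]) / pillar)
    ny = int((y_range[1] - y_range[0]) / pillar)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    pillars = {}
    for i, j, feat in zip(ix[valid], iy[valid], points[valid]):
        pillars.setdefault((int(i), int(j)), []).append(feat)
    return pillars
```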
47. Bigdata Analytics and Artificial Intelligence for Smart Intersections
- Published
- 2022
48. Utilizing LiDAR Sensors to Detect Pedestrian Movements at Signalized Intersections
- Published
- 2022
49. Data-Importance-Aware Bandwidth-Allocation Scheme for Point-Cloud Transmission in Multiple LIDAR Sensors
- Author
-
Otsu, Ryo, Shinkuma, Ryoichi, Sato, Takehiro, and Oki, Eiji
- Abstract
This paper addresses bandwidth allocation to multiple light detection and ranging (LIDAR) sensors for smart monitoring, in which only a limited communication capacity is available to transmit a large volume of point-cloud data from the sensors to an edge server in real time. To deal with the limited capacity of the communication channel, we propose a bandwidth-allocation scheme that assigns one of multiple point-cloud compression formats to each LIDAR sensor in accordance with the spatial importance of the point-cloud data it transmits. Spatial importance is determined by estimating how likely objects, such as cars, trucks, bikes, and pedestrians, are to be present, since regions where objects are more likely to appear are more useful for smart monitoring. A numerical study using a real point-cloud dataset obtained at an intersection indicates that the proposed scheme is superior to the benchmarks in terms of the distribution of data volumes among LIDAR sensors and the quality of the point-cloud data received by the edge server. A toy allocation sketch follows this entry.
- Published
- 2021
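The core idea, ranking sensors by spatial importance and spending scarce bandwidth on the most informative ones, can be illustrated with a greedy toy allocator. The format names, bitrates, and upgrade policy below are invented placeholders, not the scheme evaluated in the paper.

```python
# Toy importance-aware allocator: every sensor starts on the coarsest
# compression format, then the most important sensors are upgraded while
# total bandwidth allows. Bitrates are made-up placeholders (Mbit/s).
FORMATS = [("coarse", 5.0), ("medium", 15.0), ("fine", 40.0)]

def allocate_formats(importance, capacity):
    """importance: dict sensor_id -> importance score; capacity in Mbit/s."""
    order = sorted(importance, key=importance.get, reverse=True)
    chosen = {s: 0 for s in order}              # index into FORMATS
    used = len(order) * FORMATS[0][1]
    for level in (1, 2):                        # try upgrades, best first
        for s in order:
            step = FORMATS[level][1] - FORMATS[level - 1][1]
            if chosen[s] == level - 1 and used + step <= capacity:
                chosen[s] = level
                used += step
    return {s: FORMATS[lvl][0] for s, lvl in chosen.items()}, used

# Example: three sensors sharing a 60 Mbit/s uplink.
formats, used = allocate_formats({"s1": 0.9, "s2": 0.5, "s3": 0.1}, 60.0)
```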
50. A Novel Coding Scheme for Large-Scale Point Cloud Sequences Based on Clustering and Registration
- Author
-
Sun, Xuebin, Sun, Yuxiang, Zuo, Weixun, Cheng, Shing Shin, and Liu, Ming
- Abstract
Due to the huge volume of point cloud data, storing and transmitting it is currently difficult and expensive in autonomous driving. Learning from the high-efficiency video coding (HEVC) framework, we propose a novel compression scheme for large-scale point cloud sequences, in which several techniques have been developed to remove spatial and temporal redundancy. The proposed strategy consists mainly of three parts: intracoding, intercoding, and residual data coding. For intracoding, inspired by the depth modeling modes (DMMs) in 3-D HEVC (3-D-HEVC), a cluster-based prediction method is proposed to remove spatial redundancy. For intercoding, a point cloud registration algorithm is used to transform two adjacent point clouds into the same coordinate system; by computing the residual map of their corresponding depth images, temporal redundancy can be removed. Finally, the residual data are compressed by either lossless or lossy methods. Our approach can handle multiple types of point cloud data, from simple to complex. The lossless method compresses the point cloud data to 3.63% of its original size with intracoding and 2.99% with intercoding, without distance distortion. Experiments on the KITTI dataset also demonstrate that our method outperforms recent well-known methods.
- Published
- 2021