
PillarGrid: Deep Learning-based Cooperative Perception for 3D Object Detection from Onboard-Roadside LiDAR

Authors:
Bai, Zhengwei
Wu, Guoyuan
Barth, Matthew J.
Liu, Yongkang
Sisbot, Emrah Akin
Oguchi, Kentaro
Publication Year:
2022

Abstract

3D object detection plays a fundamental role in enabling autonomous driving, which is widely regarded as key to relieving the safety, mobility, and sustainability bottlenecks of contemporary transportation systems. Most state-of-the-art (SOTA) point-cloud-based object detection methods rely on a single onboard LiDAR, whose performance is inevitably limited by range and occlusion, especially in dense traffic scenarios. In this paper, we propose PillarGrid, a novel cooperative perception method that fuses information from multiple 3D LiDARs (both onboard and roadside) to enhance situational awareness for connected and automated vehicles (CAVs). PillarGrid consists of four main phases: 1) cooperative preprocessing of point clouds, 2) pillar-wise voxelization and feature extraction, 3) grid-wise deep fusion of features from multiple sensors, and 4) convolutional neural network (CNN)-based augmented 3D object detection. A novel cooperative perception platform was developed for model training and testing. Extensive experimentation shows that PillarGrid outperforms SOTA single-LiDAR-based 3D object detection methods in both accuracy and range by a large margin.

Comment: Submitted to The 25th IEEE International Conference on Intelligent Transportation Systems (IEEE ITSC 2022)
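The pillar-and-grid idea in the abstract can be sketched in simplified form. The snippet below is a minimal NumPy illustration, not the paper's implementation: it assumes the cooperative preprocessing step has already aligned both point clouds in a shared frame, voxelizes each into an x-y pillar grid with a hand-crafted max-height feature (standing in for the learned pillar encoder), and fuses the two sensor grids elementwise (standing in for the learned grid-wise deep fusion). The grid bounds and cell size are assumed values, not taken from the paper.

```python
import numpy as np

def pillar_features(points, grid_range=(-40.0, 40.0), cell=0.5):
    """Pillar-wise voxelization: bin (x, y, z) points into an x-y grid
    and compute a simple per-pillar feature (max z). A hand-crafted
    stand-in for a learned pillar feature encoder."""
    lo, hi = grid_range
    n = int((hi - lo) / cell)
    grid = np.full((n, n), -np.inf)
    # Keep only points that fall inside the grid extent.
    mask = (points[:, 0] >= lo) & (points[:, 0] < hi) & \
           (points[:, 1] >= lo) & (points[:, 1] < hi)
    pts = points[mask]
    ix = ((pts[:, 0] - lo) / cell).astype(int)
    iy = ((pts[:, 1] - lo) / cell).astype(int)
    # Unbuffered scatter-max into the pillar grid.
    np.maximum.at(grid, (ix, iy), pts[:, 2])
    grid[grid == -np.inf] = 0.0  # empty pillars get a zero feature
    return grid

def grid_fusion(feat_onboard, feat_roadside):
    """Grid-wise fusion: combine per-pillar features from the two
    sensors by elementwise max, a crude surrogate for deep fusion."""
    return np.maximum(feat_onboard, feat_roadside)
```

A roadside LiDAR sees past occlusions that blind the onboard sensor, so a pillar empty in one grid can still carry a feature in the fused grid; the fused feature map would then feed the CNN detection head.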

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2203.06319
Document Type:
Working Paper
Full Text:
https://doi.org/10.1109/ITSC55140.2022.9921947