
PanoSSC: Exploring Monocular Panoptic 3D Scene Reconstruction for Autonomous Driving

Authors:
Shi, Yining
Li, Jiusi
Jiang, Kun
Wang, Ke
Wang, Yunlong
Yang, Mengmeng
Yang, Diange
Publication Year:
2024

Abstract

Vision-centric occupancy networks, which represent the surrounding environment with uniform voxels carrying semantics, have become a new trend for safe driving in camera-only autonomous driving perception systems, as they can detect obstacles regardless of shape and occlusion. Modern occupancy networks mainly focus on reconstructing visible voxels on object surfaces with voxel-wise semantic prediction, and they often suffer from inconsistent predictions within a single object and mixed predictions across adjacent objects. These confusions may harm the safety of downstream planning modules. To this end, we investigate panoptic segmentation in 3D voxel scenarios and propose an instance-aware occupancy network, PanoSSC. We predict foreground objects and background separately and merge both in post-processing. For foreground instance grouping, we propose a novel 3D instance mask decoder that efficiently extracts individual objects. We unify geometric reconstruction, 3D semantic segmentation, and 3D instance segmentation into the PanoSSC framework and propose new metrics for evaluating panoptic voxels. Extensive experiments show that our method achieves competitive results on the SemanticKITTI semantic scene completion benchmark.

Comment: 3DV 2024
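The record contains no code, but the abstract describes a post-processing step that merges separately predicted foreground instances and background semantics into one panoptic voxel grid. The sketch below illustrates what such a merge could look like; the array shapes, label conventions, score-based ordering, and the function name merge_panoptic are assumptions for illustration, not the authors' implementation.

import numpy as np

def merge_panoptic(bg_semantics, instance_masks, instance_scores,
                   instance_classes, mask_threshold=0.5):
    # bg_semantics:     (X, Y, Z) int array of background ("stuff") labels
    # instance_masks:   (N, X, Y, Z) per-instance mask probabilities
    # instance_scores:  (N,) confidence per predicted instance
    # instance_classes: (N,) semantic class id per instance ("thing" classes)
    semantic = bg_semantics.copy()
    instance_ids = np.zeros_like(bg_semantics, dtype=np.int32)  # 0 = no instance

    # Paste lower-confidence instances first so higher-confidence ones
    # overwrite them on overlapping voxels.
    order = np.argsort(instance_scores)
    for new_id, idx in enumerate(order, start=1):
        mask = instance_masks[idx] > mask_threshold
        semantic[mask] = instance_classes[idx]
        instance_ids[mask] = new_id
    return semantic, instance_ids

The (semantic, instance_ids) pair then serves as a panoptic voxel labeling on which per-voxel panoptic metrics can be evaluated.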

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.07037
Document Type:
Working Paper