
S$^3$M-Net: Joint Learning of Semantic Segmentation and Stereo Matching for Autonomous Driving

Authors :
Wu, Zhiyuan
Feng, Yi
Liu, Chuang-Wei
Yu, Fisher
Chen, Qijun
Fan, Rui
Publication Year :
2024

Abstract

Semantic segmentation and stereo matching are two essential components of 3D environmental perception systems for autonomous driving. Nevertheless, conventional approaches often address these two problems independently, employing separate models for each task. This approach poses practical limitations in real-world scenarios, particularly when computational resources are scarce or real-time performance is imperative. Hence, in this article, we introduce S$^3$M-Net, a novel joint learning framework developed to perform semantic segmentation and stereo matching simultaneously. Specifically, S$^3$M-Net shares the features extracted from RGB images between both tasks, resulting in an improved overall scene understanding capability. This feature sharing process is realized using a feature fusion adaption (FFA) module, which effectively transforms the shared features into semantic space and subsequently fuses them with the encoded disparity features. The entire joint learning framework is trained by minimizing a novel semantic consistency-guided (SCG) loss, which places emphasis on the structural consistency in both tasks. Extensive experimental results conducted on the vKITTI2 and KITTI datasets demonstrate the effectiveness of our proposed joint learning framework and its superior performance compared to other state-of-the-art single-task networks. Our project webpage is accessible at mias.group/S3M-Net.

Comment: accepted to IEEE Trans. on Intelligent Vehicles (T-IV)
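The abstract describes the FFA module as projecting shared RGB features into semantic space and then fusing them with the encoded disparity features. The sketch below illustrates that general idea only: the projection weights, the 1x1-convolution-style channel mapping, and the element-wise additive fusion are all illustrative assumptions, not the paper's actual FFA design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder features extracted from the RGB image (C channels, H x W grid).
C, H, W = 8, 4, 4
shared = rng.standard_normal((C, H, W))

# Disparity-branch features, assumed here to have the same shape.
disp_feat = rng.standard_normal((C, H, W))

# Hypothetical projection into "semantic space": a 1x1-convolution is
# equivalent to a per-pixel linear map over the channel dimension.
W_proj = rng.standard_normal((C, C)) / np.sqrt(C)

def ffa_fuse(shared, disp_feat, W_proj):
    # Project shared features channel-wise, then fuse with the disparity
    # features by element-wise addition (an assumed fusion rule).
    projected = np.einsum('oc,chw->ohw', W_proj, shared)
    return projected + disp_feat

fused = ffa_fuse(shared, disp_feat, W_proj)
print(fused.shape)  # (8, 4, 4)
```

In a real network the projection would be a learned convolution and the fused features would feed the semantic decoder; this sketch only shows the data flow implied by the abstract.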

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2401.11414
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/TIV.2024.3357056