
GMNet: Graded-Feature Multilabel-Learning Network for RGB-Thermal Urban Scene Semantic Segmentation.

Authors :
Zhou, Wujie
Liu, Jinfu
Lei, Jingsheng
Yu, Lu
Hwang, Jenq-Neng
Source :
IEEE Transactions on Image Processing; 2021, Vol. 30, p7790-7802, 13p
Publication Year :
2021

Abstract

Semantic segmentation is a fundamental task in computer vision, with applications in fields such as robotic sensing, video surveillance, and autonomous driving. A major research topic in urban road semantic segmentation is the proper integration and use of cross-modal information for fusion. Here, we leverage inherent multimodal information and acquire graded features to develop a novel multilabel-learning network for RGB-thermal urban scene semantic segmentation. Specifically, we propose a graded-feature extraction strategy that splits multilevel features into junior, intermediate, and senior levels. Then, we integrate the RGB and thermal modalities with two distinct fusion modules, namely a shallow feature fusion module for junior features and a deep feature fusion module for senior features. Finally, we use multilabel supervision to optimize the network in terms of semantic, binary, and boundary characteristics. Experimental results confirm that the proposed architecture, the graded-feature multilabel-learning network, outperforms state-of-the-art methods for urban scene semantic segmentation, and it can be generalized to depth data. [ABSTRACT FROM AUTHOR]
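The graded-feature split described above can be sketched as follows. This is a minimal illustrative sketch only: the abstract does not specify how many backbone levels exist or how they are partitioned, so the five-level split (two junior, one intermediate, two senior) and the element-wise fusion stand-in are assumptions, not the paper's actual GMNet configuration.

```python
# Hypothetical sketch of grading multilevel features into junior,
# intermediate, and senior groups before RGB-thermal fusion.
# The five-level, 2/1/2 partition is an assumption for illustration.

def grade_features(levels):
    """Partition an ordered list of feature maps into three grades."""
    if len(levels) != 5:
        raise ValueError("expected five backbone feature levels")
    return {
        "junior": levels[:2],         # shallow, detail-rich features
        "intermediate": levels[2:3],  # mid-level features
        "senior": levels[3:],         # deep, semantic features
    }

def fuse(rgb, thermal):
    """Placeholder element-wise fusion of paired RGB/thermal features."""
    return [r + t for r, t in zip(rgb, thermal)]

# Scalar stand-ins for per-level feature maps:
rgb_levels = [1, 2, 3, 4, 5]
thermal_levels = [10, 20, 30, 40, 50]

rgb_graded = grade_features(rgb_levels)
thermal_graded = grade_features(thermal_levels)

# Junior features go through the shallow fusion module and senior
# features through the deep fusion module, as the abstract describes;
# here both are the same placeholder for simplicity.
junior_fused = fuse(rgb_graded["junior"], thermal_graded["junior"])
senior_fused = fuse(rgb_graded["senior"], thermal_graded["senior"])
print(junior_fused)  # [11, 22]
print(senior_fused)  # [44, 55]
```

In the actual network, each fused group would feed a decoder head supervised by one of the three labels (semantic, binary, boundary); the sketch only shows the grouping and fusion plumbing.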

Details

Language :
English
ISSN :
1057-7149
Volume :
30
Database :
Complementary Index
Journal :
IEEE Transactions on Image Processing
Publication Type :
Academic Journal
Accession number :
170077940
Full Text :
https://doi.org/10.1109/TIP.2021.3109518