
Quality-aware Selective Fusion Network for V-D-T Salient Object Detection

Authors:
Bao, Liuxin
Zhou, Xiaofei
Lu, Xiankai
Sun, Yaoqi
Yin, Haibing
Hu, Zhenghui
Zhang, Jiyong
Yan, Chenggang
Publication Year: 2024

Abstract

Depth images and thermal images provide spatial geometry and surface temperature information, respectively, which complement the RGB modality. However, the quality of depth and thermal images is often unreliable in challenging scenarios, which degrades the performance of two-modal salient object detection (SOD). Meanwhile, some researchers have turned to the triple-modal SOD task, attempting to exploit the complementarity of the RGB image, the depth image, and the thermal image. However, existing triple-modal SOD methods fail to perceive the quality of depth maps and thermal images, which leads to performance degradation in scenes with low-quality depth and thermal images. Therefore, we propose a quality-aware selective fusion network (QSF-Net) for V-D-T salient object detection, which consists of three subnets: the initial feature extraction subnet, the quality-aware region selection subnet, and the region-guided selective fusion subnet. First, besides extracting features, the initial feature extraction subnet generates a preliminary prediction map for each modality via a shrinkage pyramid architecture. Then, we design the weakly-supervised quality-aware region selection subnet to generate quality-aware maps. Concretely, we first locate the high-quality and low-quality regions using the preliminary predictions; these regions constitute the pseudo labels used to train this subnet. Finally, the region-guided selective fusion subnet purifies the initial features under the guidance of the quality-aware maps, then fuses the triple-modal features and refines the edge details of the prediction maps through the intra-modality and inter-modality attention (IIA) module and the edge refinement (ER) module, respectively. Extensive experiments are performed on the VDT-2048 dataset.

Comment: Accepted by IEEE Transactions on Image Processing (TIP)
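The core idea of quality-guided fusion can be sketched as a quality-weighted average of per-modality features: regions where a modality's quality map is low contribute less to the fused representation. This is a minimal illustrative simplification in NumPy, not the paper's actual IIA/ER modules; the function name, shapes, and weighting scheme are assumptions for illustration only.

```python
import numpy as np

def selective_fusion(feats, quality_maps):
    """Illustrative sketch (not the paper's method): fuse per-modality
    feature maps by weighting each with its spatial quality map.

    feats:        dict modality -> array of shape (C, H, W)
    quality_maps: dict modality -> array of shape (H, W), values in [0, 1]
    returns:      fused features of shape (C, H, W)
    """
    # Quality-weighted sum of features; q[None] broadcasts (H, W) over channels.
    num = sum(feats[m] * quality_maps[m][None] for m in feats)
    # Normalize by total quality per pixel; epsilon avoids division by zero.
    den = sum(quality_maps[m] for m in feats)[None] + 1e-8
    return num / den

# Example: where the depth quality map is zero, the fused result
# falls back entirely to the RGB features.
feats = {"rgb": np.ones((2, 4, 4)), "depth": np.full((2, 4, 4), 3.0)}
qmaps = {"rgb": np.ones((4, 4)), "depth": np.zeros((4, 4))}
fused = selective_fusion(feats, qmaps)
```

In the paper itself, the quality-aware maps come from a learned, weakly-supervised subnet and the fusion is performed by attention modules; the weighted average above only conveys why low-quality regions should be suppressed before fusing modalities.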

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2405.07655
Document Type: Working Paper
Full Text: https://doi.org/10.1109/TIP.2024.3393365