Cross-Modal Adaptive Interaction Network for RGB-D Saliency Detection.

Authors :
Du, Qinsheng
Bian, Yingxu
Wu, Jianyu
Zhang, Shiyan
Zhao, Jian
Source :
Applied Sciences (2076-3417); Sep2024, Vol. 14 Issue 17, p7440, 17p
Publication Year :
2024

Abstract

The salient object detection (SOD) task aims to automatically detect the most prominent regions of an image as observed by the human eye. Since RGB images and depth images carry different information, effectively integrating cross-modal features remains a major challenge in RGB-D SOD. This paper therefore proposes a cross-modal adaptive interaction network (CMANet) for RGB-D salient object detection, consisting of a cross-modal feature integration module (CMF) and an adaptive feature fusion module (AFFM). These modules integrate and enhance multi-scale features from both modalities, improve the fusion of complementary information from RGB and depth images, and generate richer, more representative feature maps. Extensive experiments on four RGB-D datasets verify the effectiveness of CMANet. Compared with 17 RGB-D SOD methods, the model accurately detects salient regions and achieves state-of-the-art performance across four evaluation metrics. [ABSTRACT FROM AUTHOR]
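The abstract describes adaptively fusing RGB and depth feature maps. The paper's CMF/AFFM internals are not given here, so the following is only a minimal illustrative sketch of one common adaptive-fusion idea (per-channel modality weighting via a softmax over pooled responses); the function name and weighting scheme are assumptions, not the authors' method.

```python
import numpy as np

def adaptive_fuse(rgb_feat, depth_feat):
    """Illustrative adaptive fusion (NOT the paper's CMF/AFFM):
    weight each modality per channel by a softmax over its
    global-average-pooled response, then blend the feature maps."""
    # Global average pooling per channel -> modality "confidence" scores
    rgb_score = rgb_feat.mean(axis=(1, 2))      # shape (C,)
    depth_score = depth_feat.mean(axis=(1, 2))  # shape (C,)
    scores = np.stack([rgb_score, depth_score])             # (2, C)
    weights = np.exp(scores) / np.exp(scores).sum(axis=0)   # softmax over the two modalities
    # Convex per-channel combination of the two feature maps
    return (weights[0][:, None, None] * rgb_feat
            + weights[1][:, None, None] * depth_feat)

# Toy features: (channels, height, width)
rgb = np.random.rand(8, 16, 16)
depth = np.random.rand(8, 16, 16)
fused = adaptive_fuse(rgb, depth)
print(fused.shape)  # (8, 16, 16)
```

Because the weights form a convex combination per channel, each fused value lies between the corresponding RGB and depth responses; a learned network would replace the pooled scores with trainable attention.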

Subjects

Subjects :
HUMAN beings

Details

Language :
English
ISSN :
2076-3417
Volume :
14
Issue :
17
Database :
Complementary Index
Journal :
Applied Sciences (2076-3417)
Publication Type :
Academic Journal
Accession number :
179649963
Full Text :
https://doi.org/10.3390/app14177440