
Cross Complementary Fusion Network for Video Salient Object Detection

Authors :
Junxia Li
Ziyang Wang
Zefeng Pan
Source :
IEEE Access, Vol. 8, pp. 201259-201270 (2020)
Publication Year :
2020
Publisher :
IEEE, 2020.

Abstract

Recently, optical flow guided video saliency detection methods have achieved high performance. However, computing optical flow is usually expensive, which limits the use of these methods in time-critical scenarios. In this article, we propose an end-to-end cross complementary network (CCNet), based on a fully convolutional network, for video saliency detection. CCNet consists of two effective components: a single-image representation enhancement (SRE) module and a spatiotemporal information learning (STIL) module. The SRE module provides robust saliency feature learning for a single image through a pyramid pooling module followed by a lightweight channel attention module. As an effective alternative to optical flow for extracting spatiotemporal information, the STIL module introduces a spatiotemporal information fusion module and a video correlation filter to learn spatiotemporal information, i.e., the collaborative and interactive information between consecutive input groups. Beyond enhancing the feature representation of a single image, the combination of SRE and STIL learns both the spatiotemporal information and the correlation between consecutive images well. Extensive experimental results demonstrate the effectiveness of our method in comparison with 14 state-of-the-art approaches.
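The abstract describes the SRE module as a pyramid pooling module followed by a lightweight channel attention module. The following is a minimal, illustrative PyTorch sketch of such a block, assuming typical pyramid pooling scales (1, 2, 3, 6) and a squeeze-and-excitation style attention; the class names, channel sizes, and reduction ratio are assumptions for illustration, not the authors' exact design.

```python
# Illustrative sketch of an SRE-style block: pyramid pooling + channel attention.
# All hyperparameters (scales, reduction ratio, channel widths) are assumptions,
# not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidPooling(nn.Module):
    """Pool the feature map at several scales, project, upsample, and fuse."""

    def __init__(self, in_channels, scales=(1, 2, 3, 6)):
        super().__init__()
        mid = in_channels // len(scales)
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(s),
                nn.Conv2d(in_channels, mid, kernel_size=1, bias=False),
                nn.ReLU(inplace=True),
            )
            for s in scales
        )
        self.fuse = nn.Conv2d(in_channels + mid * len(scales), in_channels,
                              kernel_size=3, padding=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x] + [
            F.interpolate(stage(x), size=(h, w), mode="bilinear", align_corners=False)
            for stage in self.stages
        ]
        return self.fuse(torch.cat(feats, dim=1))


class ChannelAttention(nn.Module):
    """Lightweight squeeze-and-excitation style channel reweighting."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        return x * w


class SREBlock(nn.Module):
    """Pyramid pooling followed by channel attention, per the abstract's description."""

    def __init__(self, channels):
        super().__init__()
        self.ppm = PyramidPooling(channels)
        self.ca = ChannelAttention(channels)

    def forward(self, x):
        return self.ca(self.ppm(x))


if __name__ == "__main__":
    feats = torch.randn(2, 256, 32, 32)  # hypothetical backbone features for one frame
    out = SREBlock(256)(feats)
    print(out.shape)  # torch.Size([2, 256, 32, 32])
```

The STIL module (spatiotemporal fusion plus a video correlation filter) is described only at a high level in the abstract, so no comparable sketch is attempted for it here.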

Details

Language :
English
ISSN :
2169-3536
Volume :
8
Database :
OpenAIRE
Journal :
IEEE Access
Accession number :
edsair.doi.dedup.....5ec6efbcbc1f99d0a99b3c34ed3b1d78