1. Visual saliency detection via a recurrent residual convolutional neural network based on densely aggregated features.
- Author
- Hua, Chunjian, Zou, Xintong, Ling, Yan, and Chen, Ying
- Subjects
- *CONVOLUTIONAL neural networks, *FEATURE extraction, *MULTILEVEL marketing, *DEEP learning, *CELL aggregation
- Abstract
- Current visual saliency detection algorithms based on deep learning suffer from degraded detection performance in complex scenes owing to ineffective feature expression and poor generalization. The present study addresses this issue by proposing a recurrent residual network based on densely aggregated features. First, dense convolutional features are extracted from different levels of the ResNeXt101 network. Then, the features of all layers are aggregated under an atrous spatial pyramid pooling (ASPP) operation, which makes comprehensive use of all available saliency cues. Finally, the residuals are learned recurrently under a deep supervision mechanism to achieve continuous optimization of the saliency map. Application of the proposed algorithm to publicly available datasets demonstrates that the dense aggregation of features not only strengthens the aggregation of effective information within a single layer, but also enhances the interaction between information at different feature levels. As a result, the proposed algorithm provides better detection ability than current state-of-the-art algorithms.
  • An aggregation module is designed to aggregate densely connected features of different resolutions, enabling full communication and fusion between different network layers.
  • An improved recurrent residual refinement mechanism is proposed, in which the residuals are learned recurrently under deep supervision to achieve continuous optimization of the saliency map.
  • Extensive experiments demonstrate the advantages of DAF-RRN over state-of-the-art methods. (See the illustrative sketch below.)
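The abstract describes a three-stage pipeline: multi-level backbone features, ASPP-based dense aggregation, and recurrent residual refinement of the saliency map under deep supervision. The following PyTorch sketch illustrates that general pipeline only; it is not the authors' DAF-RRN implementation, and the channel sizes, dilation rates, number of refinement steps, and module names (ASPP, RecurrentResidualRefiner, SaliencySketch) are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling over one feature level (assumed dilation rates)."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # parallel dilated convolutions, concatenated and fused to out_ch channels
        return self.fuse(torch.cat([F.relu(b(x)) for b in self.branches], dim=1))

class RecurrentResidualRefiner(nn.Module):
    """Recurrently learns a residual correction of the saliency map; all
    intermediate maps are returned so each can receive a supervision loss."""
    def __init__(self, feat_ch, steps=3):
        super().__init__()
        self.steps = steps
        self.res_head = nn.Sequential(
            nn.Conv2d(feat_ch + 1, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 1, 3, padding=1),
        )

    def forward(self, feats, init_map):
        maps = [init_map]                      # side outputs for deep supervision
        cur = init_map
        for _ in range(self.steps):
            cur = cur + self.res_head(torch.cat([feats, cur], dim=1))
            maps.append(cur)
        return maps

class SaliencySketch(nn.Module):
    def __init__(self, level_channels=(256, 512, 1024, 2048), feat_ch=64):
        super().__init__()
        # one ASPP per backbone level; a real model would feed in ResNeXt101 features
        self.aspps = nn.ModuleList([ASPP(c, feat_ch) for c in level_channels])
        self.aggregate = nn.Conv2d(feat_ch * len(level_channels), feat_ch, 1)
        self.init_head = nn.Conv2d(feat_ch, 1, 3, padding=1)
        self.refiner = RecurrentResidualRefiner(feat_ch)

    def forward(self, level_feats):
        size = level_feats[0].shape[-2:]       # upsample all levels to the finest resolution
        fused = [F.interpolate(a(f), size=size, mode="bilinear", align_corners=False)
                 for a, f in zip(self.aspps, level_feats)]
        feats = F.relu(self.aggregate(torch.cat(fused, dim=1)))
        init_map = self.init_head(feats)
        return self.refiner(feats, init_map)   # list of progressively refined saliency logits

# toy usage with random stand-ins for backbone features at strides 4/8/16/32 of a 256x256 input
feats = [torch.randn(1, c, 256 // s, 256 // s)
         for c, s in zip((256, 512, 1024, 2048), (4, 8, 16, 32))]
maps = SaliencySketch()(feats)

During training, each map in the returned list would typically be compared against the ground-truth saliency mask (the deep supervision mechanism mentioned in the abstract), so that every refinement step is directly optimized.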
- Published
- 2022