1. Image Retrieval Using Multilayer Feature Aggregation Histogram.
- Author
- Lu, Fen, Liu, Guang-Hai, and Gao, Xiao-Zhi
- Abstract
Aggregating diverse features into a compact representation is a hot issue in image retrieval. However, aggregating the distinct features of multiple layers into a discriminative representation remains challenging. Inspired by value-guided neural mechanisms, a novel representation method, namely the multilayer feature aggregation histogram, is proposed for image retrieval. It aggregates multilayer features, such as low-, mid-, and high-layer features, into a discriminative yet compact representation by simulating the neural mechanisms that mediate value-guided decision making. The highlights of the proposed method are as follows: (1) A detail-attentive map is proposed to represent the aggregation of low- and mid-layer features; it can be used to evaluate distinguishable detail features. (2) A simple yet straightforward aggregation method is proposed to re-evaluate distinguishable high-layer features; using a semantic-attentive map, it provides aggregated features covering detail, object, and semantics. (3) A novel whitening method, namely difference whitening, is introduced to reduce dimensionality; it does not require a training dataset of semantically similar images and provides a compact yet discriminative representation. Experiments on popular benchmark datasets demonstrate that the proposed method clearly improves retrieval performance in terms of the mAP metric. With a 128-dimensional representation, the proposed method yields mAPs higher than those of the DSFH, DWDF, and OSAH methods by 0.083, 0.043, and 0.022 on the Oxford5k dataset and by 0.195, 0.036, and 0.071 on the Paris6k dataset. The difference whitening method also makes it convenient to transfer a deep learning model to a new task. Our method provides competitive performance compared with existing aggregation methods and can retrieve scene images with similar colors, objects, and semantics. (A hedged sketch of this general pipeline is given below, after the record.)
- Published
- 2024
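The abstract gives only a high-level view of the pipeline: attention-weighted pooling of multilayer CNN features into a single descriptor, followed by a whitening step that compresses it to (for example) 128 dimensions. The following minimal numpy sketch illustrates that general shape under explicit assumptions: the energy-based attention heuristic, the function names (attentive_pool, aggregate_multilayer, fit_whitening, whiten), and the standard PCA-whitening formulation are all stand-ins for illustration, not the paper's detail-/semantic-attentive maps or its difference whitening.

```python
# Hypothetical sketch of attention-weighted multilayer feature aggregation
# followed by PCA-style whitening. NOT the paper's exact method; the
# attention heuristic and the whitening here are standard stand-ins.
import numpy as np

def attentive_pool(fmap):
    """Pool a CNN feature map (C, H, W) into a C-dim vector, weighting
    each spatial location by its channel-wise energy (a common attention
    heuristic; an assumption, not the paper's attentive map)."""
    c, h, w = fmap.shape
    x = fmap.reshape(c, h * w)                  # (C, HW)
    attn = np.linalg.norm(x, axis=0)            # per-location energy
    attn = attn / (attn.sum() + 1e-8)           # normalize to weights
    return x @ attn                             # weighted sum -> (C,)

def aggregate_multilayer(feature_maps):
    """Concatenate attention-pooled descriptors from several layers
    (e.g., low-, mid-, and high-layer activations) and L2-normalize."""
    desc = np.concatenate([attentive_pool(f) for f in feature_maps])
    return desc / (np.linalg.norm(desc) + 1e-8)

def fit_whitening(descriptors, out_dim=128):
    """Learn a PCA-whitening projection from descriptors of shape (N, D).
    Plain PCA whitening is used here; the paper's 'difference whitening'
    notably avoids needing semantically matched training data."""
    mean = descriptors.mean(axis=0)
    cov = np.cov(descriptors - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:out_dim]  # keep top components
    proj = eigvec[:, order] / np.sqrt(eigval[order] + 1e-8)
    return mean, proj

def whiten(desc, mean, proj):
    """Project a descriptor into the compact whitened space and renormalize."""
    z = (desc - mean) @ proj
    return z / (np.linalg.norm(z) + 1e-8)
```

Retrieval would then rank database images by the dot product (cosine similarity) of the whitened, L2-normalized query and database descriptors.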