Feature aggregation and connectivity for object re-identification.
- Author
- Han, Dongchen; Liu, Baodi; Shao, Shuai; Liu, Weifeng; Zhou, Yicong
- Subjects
- Artificial neural networks; Graph connectivity; Feature extraction; Subgraphs; Open-ended questions
- Abstract
- In recent years, object re-identification (ReID) based on deep convolutional networks has made outstanding progress. However, existing methods focus only on feature robustness and classification accuracy, ignoring the relationships among features (i.e., between gallery–gallery pairs or probe–gallery pairs). In particular, probes located near the decision boundary are a key factor limiting ReID performance; we regard such probes as hard samples. Recent studies have shown that Graph Convolutional Networks (GCN) can effectively model relationships among features, but how to apply GCNs to object ReID remains an open question. This paper proposes two learnable GCN modules: the Feature Aggregation Graph Convolutional Network (FA-GCN) and the Evaluation Connectivity Graph Convolutional Network (EC-GCN). Specifically, an arbitrary feature extraction network first extracts features from the object ReID dataset. Given a probe, FA-GCN aggregates neighboring nodes through the affinity graph of the gallery set. EC-GCN then uses a random probability gallery sampler to construct subgraphs for evaluating the connectivity of probe–gallery pairs. Finally, the aggregated node features and connectivity ratios are jointly combined into a new distance matrix. Experimental results on two person ReID datasets (Market-1501 and DukeMTMC-ReID) and one vehicle ReID dataset (VeRi-776) show that the proposed method achieves state-of-the-art performance.
• We propose a novel framework containing two learnable GCN components.
• Our framework can be inserted into arbitrary deep neural networks.
• We formulate a new sampling strategy, the Random Probability Gallery Sampler (RPGS).
• We improve on state-of-the-art methods on both person and vehicle ReID datasets.
[ABSTRACT FROM AUTHOR]
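The abstract describes aggregating a gallery feature with its neighbors over an affinity graph before computing distances. The paper's FA-GCN is a learnable module; the sketch below is only a minimal, parameter-free analogue of that idea (one propagation step over a top-k cosine-affinity graph). The function name `aggregate_features` and the parameters `k` and `alpha` are illustrative assumptions, not the paper's API.

```python
import numpy as np

def aggregate_features(gallery, k=2, alpha=0.5):
    """One propagation step over a k-NN affinity graph (sketch, not FA-GCN).

    gallery: (n, d) array of L2-normalized gallery features.
    Returns features smoothed toward their graph neighbors, re-normalized.
    """
    n = gallery.shape[0]
    sim = gallery @ gallery.T            # cosine similarity (features are unit-norm)
    np.fill_diagonal(sim, -np.inf)       # exclude self-loops from neighbor search
    adj = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sim[i])[-k:]   # top-k most similar gallery nodes
        adj[i, nbrs] = np.maximum(sim[i, nbrs], 0.0)  # keep nonnegative affinities
    adj = np.maximum(adj, adj.T)         # symmetrize the affinity graph
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                  # guard isolated nodes
    prop = adj / deg                     # row-normalized propagation matrix
    # blend each feature with the weighted mean of its neighbors
    out = alpha * gallery + (1 - alpha) * (prop @ gallery)
    return out / np.linalg.norm(out, axis=1, keepdims=True)
```

A learnable version would interleave such propagation steps with trainable weight matrices and nonlinearities, which is what distinguishes a GCN module from this fixed smoothing step.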
- Published
- 2025