
Knowledge-Preserving continual person re-identification using Graph Attention Network.

Authors :
Liu, Zhaoshuo
Feng, Chaolu
Chen, Shuaizheng
Hu, Jun
Source :
Neural Networks. Apr2023, Vol. 161, p105-115. 11p.
Publication Year :
2023

Abstract

Person re-identification (ReID), considered a sub-problem of image retrieval, is critical for intelligent security. The general practice is to train a deep model on images from a particular scenario (also known as a domain) and perform retrieval tests on images from the same domain. The model therefore has to be retrained to ensure good performance on unseen domains. Unfortunately, retraining introduces the so-called catastrophic forgetting problem inherent in deep learning models. To address this problem, we propose a Continual person re-identification model with a Knowledge-Preserving (CKP) mechanism. The proposed model is able to accumulate knowledge from continuously changing scenarios. The knowledge is updated via a graph attention network, from a perspective inspired by human cognition, as the scenario changes. The accumulated knowledge is then used to guide the learning process of the proposed model on image samples from newly arriving domains. We finally evaluate and compare CKP with fine-tuning, continual learning methods for image classification and person re-identification, and joint training. Experiments on representative benchmark datasets (Market1501, DukeMTMC, CUHK03, CUHK-SYSU, and MSMT17, which arrive in different orders) demonstrate the advantages of the proposed model in preventing forgetting, and experiments on other benchmark datasets (GRID, SenseReID, CUHK01, CUHK02, VIPER, iLIDS, and PRID, which are not available during training) demonstrate its generalization ability. CKP outperforms the best comparative model by 0.58% and 0.65% on seen domains (datasets available during training), and by 0.95% and 1.02% on never-seen domains (datasets not available during training), in terms of mAP and Rank1, respectively. The arrival order of the training datasets, the guidance of accumulated knowledge when learning new knowledge, and parameter settings are also discussed. [ABSTRACT FROM AUTHOR]
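The record carries no implementation detail, but the core building block the abstract names — a graph attention network — can be sketched in miniature. The following is a single-head graph attention layer in the style of Veličković et al.'s GAT, written in plain NumPy; the function name, shapes, and LeakyReLU slope are illustrative assumptions, not the paper's actual knowledge-update code.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention_layer(H, A, W, a):
    """One single-head graph attention layer (GAT-style sketch).

    H: (N, F) node features; A: (N, N) adjacency mask with self-loops
    (nonzero = edge); W: (F, Fp) projection; a: (2*Fp,) attention vector.
    Returns (N, Fp) aggregated node features.
    """
    Z = H @ W                              # project node features
    Fp = Z.shape[1]
    # attention logits e_ij = LeakyReLU(a^T [z_i || z_j]), split into two dots
    src = Z @ a[:Fp]                       # contribution of node i
    dst = Z @ a[Fp:]                       # contribution of node j
    e = src[:, None] + dst[None, :]        # (N, N) pairwise logits
    e = np.where(e > 0, e, 0.2 * e)        # LeakyReLU, slope 0.2
    e = np.where(A > 0, e, -1e9)           # mask out non-edges
    alpha = softmax(e, axis=1)             # normalize over each node's neighbors
    return alpha @ Z                       # attention-weighted aggregation
```

In the paper's setting, such a layer would presumably propagate accumulated domain knowledge between graph nodes as new domains arrive; how the graph is constructed is specific to the paper and not reproduced here.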

Details

Language :
English
ISSN :
0893-6080
Volume :
161
Database :
Academic Search Index
Journal :
Neural Networks
Publication Type :
Academic Journal
Accession number :
162504121
Full Text :
https://doi.org/10.1016/j.neunet.2023.01.033