
Spatiotemporal Transformer for Video-based Person Re-identification

Authors:
Zhang, Tianyu
Wei, Longhui
Xie, Lingxi
Zhuang, Zijie
Zhang, Yongfei
Li, Bo
Tian, Qi
Publication Year:
2021

Abstract

Recently, the Transformer module has been transplanted from natural language processing to computer vision. This paper applies the Transformer to video-based person re-identification, where the key issue is to extract discriminative information from a tracklet. We show that, despite its strong learning ability, the vanilla Transformer suffers from an increased risk of over-fitting, arguably due to its large number of attention parameters and insufficient training data. To solve this problem, we propose a novel pipeline in which the model is pre-trained on a set of synthesized video data and then transferred to the downstream domains with the perception-constrained Spatiotemporal Transformer (STT) module and Global Transformer (GT) module. The derived algorithm achieves significant accuracy gains on three popular video-based person re-identification benchmarks, MARS, DukeMTMC-VideoReID, and LS-VID, especially when the training and testing data are from different domains. More importantly, our research sheds light on the application of the Transformer to highly-structured visual data.

Comment: 10 pages, 7 figures
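The abstract describes factorizing attention over a tracklet into spatial and temporal components before pooling to a single descriptor. The paper's actual STT/GT internals are not given in the abstract; the following is a minimal NumPy sketch of that general spatial-then-temporal attention idea, with identity Q/K/V projections and shapes (frames, patches, dims) chosen purely for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (n_tokens, d); identity Q/K/V projections for brevity
    d = x.shape[-1]
    scores = softmax(x @ x.T / np.sqrt(d))  # (n_tokens, n_tokens)
    return scores @ x

def spatiotemporal_sketch(tracklet):
    # tracklet: (T, N, D) -- T frames, N spatial patches, D-dim features
    # 1) spatial attention within each frame
    spatial = np.stack([self_attention(frame) for frame in tracklet])
    # 2) temporal attention across frames at each spatial location
    temporal = np.stack(
        [self_attention(spatial[:, p]) for p in range(spatial.shape[1])],
        axis=1,
    )
    # 3) global pooling to one descriptor per tracklet
    return temporal.mean(axis=(0, 1))  # (D,)

feats = np.random.default_rng(0).normal(size=(8, 16, 32))  # hypothetical tracklet
desc = spatiotemporal_sketch(feats)
print(desc.shape)  # (32,)
```

In the paper, the motivation for constraining such attention (and for synthetic pre-training) is that the full attention parameterization over-fits on limited re-identification data; this sketch omits learned projections entirely for the same reason of simplicity.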

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2103.16469
Document Type:
Working Paper