
M‐CoTransT: Adaptive spatial continuity in visual tracking

Authors :
Fan, Chunxiao
Zhang, Runqing
Ming, Yue
Source :
IET Computer Vision; June 2022, Vol. 16, Issue 4, pp. 350-363 (14 pages)
Publication Year :
2022

Abstract

Visual tracking is an important area of computer vision. Current tracking methods based on the Siamese network employ self-attention blocks in convolutional networks to extract semantic features that capture the structural information of an object. However, spatial continuity sits at the heart of two seemingly unrelated tracking challenges: occlusion and similar distractors. Accurately locating a target that reappears after occlusion is a spatially discontinuous task, while bounding-box prediction should be constrained by spatial continuity to prevent boxes from jumping to similar distractors. This study proposes M-CoTransT, a novel tracking method that introduces spatial continuity into visual tracking through a confidence-based adaptive Markov motion model (M-model) and a novel correlation-based feature fusion network (CoTransT). The M-model assigns confidence to the nodes of the Markov motion model to estimate motion-state continuity and predicts a more accurate search region for CoTransT, which in turn adds a cross-correlation branch to the self-attention tracking network to enhance the continuity of target appearance in feature space. Extensive experiments on five challenging datasets (LaSOT, GOT-10k, TrackingNet, OTB-2015 and UAV123) demonstrate the effectiveness of the proposed M-CoTransT in visual tracking.
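To make the M-model idea concrete, the following is a minimal Python/NumPy sketch of a confidence-weighted motion model that predicts the next search region. The class name `MarkovMotionModel`, the constant-velocity state, and the specific blending rule are illustrative assumptions; the paper's actual M-model formulation is in the full text.

```python
import numpy as np

# Hedged sketch: a confidence-weighted Markov-style motion model for
# search-region prediction. Names and update rules are illustrative
# assumptions, not the paper's exact M-model.
class MarkovMotionModel:
    def __init__(self, init_center, base_radius=64.0, conf_thresh=0.5):
        self.center = np.asarray(init_center, dtype=float)  # (x, y) position
        self.velocity = np.zeros(2)                         # per-frame displacement
        self.base_radius = base_radius                      # nominal search radius
        self.conf_thresh = conf_thresh                      # reliability cutoff

    def predict_search_region(self, conf):
        """Predict the next search-region center and radius.

        High confidence: trust the constant-velocity prediction and keep
        the region tight. Low confidence (e.g. occlusion): damp the
        velocity and enlarge the region so a reappearing target can be
        re-acquired without jumping to a nearby distractor.
        """
        trust = min(max(conf, 0.0), 1.0)
        pred_center = self.center + trust * self.velocity
        radius = self.base_radius * (2.0 - trust)  # grows as confidence drops
        return pred_center, radius

    def update(self, measured_center, conf):
        """Blend the new measurement into the state, weighted by confidence."""
        measured_center = np.asarray(measured_center, dtype=float)
        if conf >= self.conf_thresh:
            new_velocity = measured_center - self.center
            self.velocity = conf * new_velocity + (1 - conf) * self.velocity
            self.center = conf * measured_center + (1 - conf) * self.center
        # Below threshold: keep the last reliable state, preserving
        # spatial continuity through the occlusion.
```

In use, the tracker would call `predict_search_region` before running the network on each frame and `update` afterwards with the detected box center and its confidence score.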
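The cross-correlation branch that CoTransT adds alongside self-attention can likewise be sketched. Below is a plain depthwise cross-correlation over feature maps, the standard operation in Siamese tracking; the function name `depthwise_xcorr` and the shapes are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def depthwise_xcorr(search_feat, template_feat):
    """Per-channel cross-correlation of a template over search features.

    search_feat:   (C, Hs, Ws) feature map of the search region.
    template_feat: (C, Ht, Wt) feature map of the target template,
                   with Ht <= Hs and Wt <= Ws.
    Returns a (C, Hs-Ht+1, Ws-Wt+1) response map; strong responses mark
    locations whose appearance matches the template, which helps keep
    target appearance continuous in feature space.
    """
    C, Hs, Ws = search_feat.shape
    _, Ht, Wt = template_feat.shape
    out = np.zeros((C, Hs - Ht + 1, Ws - Wt + 1))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            patch = search_feat[:, i:i + Ht, j:j + Wt]
            out[:, i, j] = (patch * template_feat).sum(axis=(1, 2))
    return out
```

A fused tracker would combine such a correlation response with the self-attention branch's output before predicting the bounding box.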

Details

Language :
English
ISSN :
1751-9632 and 1751-9640
Volume :
16
Issue :
4
Database :
Supplemental Index
Journal :
IET Computer Vision
Publication Type :
Periodical
Accession number :
ejs59611489
Full Text :
https://doi.org/10.1049/cvi2.12092