
Deep Motion-Appearance Convolutions for Robust Visual Tracking

Authors :
Haojie Li
Sihang Wu
Shuangping Huang
Kin-Man Lam
Xiaofen Xing
Source :
IEEE Access, Vol. 7, pp. 180451-180466 (2019)
Publication Year :
2019
Publisher :
IEEE, 2019.

Abstract

Visual tracking is a challenging task due to unconstrained appearance variations and dynamic surrounding backgrounds, which largely arise from the complex motion of the target object. Therefore, the target's motion, its resulting appearance, and the correlation between them should be considered comprehensively to achieve robust tracking performance. In this paper, we propose a deep neural network for visual tracking, namely the Motion-Appearance Dual (MADual) network, which employs a dual-branch architecture, using deep two-dimensional (2D) and deep three-dimensional (3D) convolutions to integrate the local and global information of the target object's motion and appearance synchronously. For each frame of a tracking video, the 2D convolutional kernels of the deep 2D branch slide over the frame to extract its global spatial-appearance features. Meanwhile, the 3D convolutional kernels of the deep 3D branch collaboratively extract the appearance and the associated motion features of the visual target from successive frames. By sliding the 3D convolutional kernels along a video sequence, the model learns temporal features from previous frames and thereby generates local patch-based motion patterns of the target. Sliding the 2D kernels over a frame and the 3D kernels over a frame cube synchronously enables better hierarchical motion-appearance integration and boosts performance on the visual tracking task. To further improve tracking precision, an extra ridge-regression model is trained for the tracking process, based not only on the bounding box given in the first frame, but also on its synchro-frame-cube, using our proposed Inverse Temporal Training (ITT) method. Extensive experiments on popular benchmark datasets, OTB2013, OTB50, OTB2015, UAV123, TC128, VOT2015 and VOT2016, demonstrate that the proposed MADual tracker performs favorably against many state-of-the-art methods.
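
A minimal sketch of the dual-branch design described in the abstract, assuming a PyTorch implementation. The layer counts, channel widths, fusion rule, and temporal pooling below are illustrative assumptions, not the paper's published MADual configuration:

```python
# Sketch of a 2D/3D dual-branch network: a 2D branch extracts appearance
# features from the current frame while a 3D branch extracts motion-appearance
# features from a cube of recent frames. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class MADualSketch(nn.Module):
    def __init__(self, in_channels=3, feat_channels=64):
        super().__init__()
        # 2D branch: spatial kernels slide over a single frame to capture
        # global appearance features.
        self.branch2d = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 3D branch: spatio-temporal kernels slide over a frame cube to
        # capture local patch-based motion patterns along the time axis.
        self.branch3d = nn.Sequential(
            nn.Conv3d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 1x1 fusion of the two feature maps (an assumed fusion rule).
        self.fuse = nn.Conv2d(2 * feat_channels, feat_channels, kernel_size=1)

    def forward(self, frame, cube):
        # frame: (B, C, H, W) current frame; cube: (B, C, T, H, W) recent frames.
        app = self.branch2d(frame)
        mot = self.branch3d(cube).mean(dim=2)  # collapse time -> (B, C, H, W)
        return self.fuse(torch.cat([app, mot], dim=1))

frame = torch.randn(1, 3, 128, 128)
cube = torch.randn(1, 3, 5, 128, 128)   # 5-frame synchro cube
feats = MADualSketch()(frame, cube)     # -> (1, 64, 128, 128)
```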

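The abstract's ridge-regression component admits a standard closed-form solution. The sketch below shows that generic closed form on synthetic stand-in features; it does not reproduce the paper's Inverse Temporal Training (ITT) scheme or feature extractor:

```python
# Generic closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y.
# X and y here are synthetic placeholders for target features and responses.
import numpy as np

def fit_ridge(X, y, lam=1e-3):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

X = np.random.randn(200, 64)   # 200 candidate feature vectors of dim 64
y = np.random.randn(200)       # target regression responses
w = fit_ridge(X, y)
scores = X @ w                 # regression scores for the candidates
```
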
Details

Language :
English
ISSN :
2169-3536
Volume :
7
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.2bc87bf8c62472c992fe95d158a4495
Document Type :
Article
Full Text :
https://doi.org/10.1109/ACCESS.2019.2958405