
End-to-End Background Subtraction via a Multi-Scale Spatio-Temporal Model

Authors :
Yizhong Yang
Tao Zhang
Jinzhao Hu
Dong Xu
Guangjun Xie
Source :
IEEE Access, Vol. 7, pp. 97949-97958 (2019)
Publication Year :
2019
Publisher :
IEEE, 2019.

Abstract

Background subtraction is an important task in computer vision. Traditional approaches usually build background models from low-level visual features such as color, texture, or edges. Because they lack deep features, they often perform poorly in complex video scenes involving illumination changes, background or camera motion, camouflage effects, and shadows. Recently, deep learning has been shown to extract deep features effectively. To improve the robustness of background subtraction, in this paper we propose an end-to-end multi-scale spatio-temporal (MS-ST) method that extracts deep features from video sequences. First, a video clip is fed into a convolutional neural network to extract multi-scale spatial features. Subsequently, to exploit temporal information, we combine temporal sampling operations with ConvLSTM modules to extract multi-scale temporal contextual information. Finally, the segmentation result is generated by fusing the multi-scale spatio-temporal features. Experimental results on the CDnet-2014 and LASIESTA datasets demonstrate the effectiveness and superiority of the proposed method.
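
The abstract outlines a three-stage pipeline: per-frame multi-scale spatial features from a CNN, per-scale temporal aggregation with ConvLSTM modules, and fusion of the resulting spatio-temporal features into a segmentation mask. The PyTorch sketch below is only a rough illustration of that kind of architecture, not the authors' released code: the layer sizes, module names, the number of scales, and the simple sum-based fusion head are all assumptions, and the paper's temporal sampling operations are omitted.

```python
# Minimal sketch of a multi-scale spatio-temporal background-subtraction
# network in the spirit of the abstract. All hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell operating on 2-D feature maps."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c


class MSSTSketch(nn.Module):
    """Illustrative multi-scale spatio-temporal model (hypothetical layout)."""
    def __init__(self, ch=(16, 32, 64)):
        super().__init__()
        # Spatial encoder: each block halves resolution, giving one scale per block.
        blocks, in_ch = [], 3
        for c in ch:
            blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, c, 3, stride=2, padding=1), nn.ReLU(inplace=True)))
            in_ch = c
        self.enc = nn.ModuleList(blocks)
        # One ConvLSTM per scale accumulates temporal context over the clip.
        self.lstm = nn.ModuleList(ConvLSTMCell(c, c) for c in ch)
        # 1x1 heads map each scale's final hidden state to mask logits.
        self.heads = nn.ModuleList(nn.Conv2d(c, 1, 1) for c in ch)

    def forward(self, clip):
        # clip: (B, T, 3, H, W) short video clip
        B, T, _, H, W = clip.shape
        states = None
        for t in range(T):
            x, feats = clip[:, t], []
            for block in self.enc:
                x = block(x)
                feats.append(x)
            if states is None:
                states = [(torch.zeros_like(f), torch.zeros_like(f)) for f in feats]
            states = [self.lstm[s](feats[s], states[s]) for s in range(len(feats))]
        # Fuse scales: upsample each scale's logits to input size and sum.
        logits = sum(
            F.interpolate(self.heads[s](states[s][0]), size=(H, W),
                          mode='bilinear', align_corners=False)
            for s in range(len(states)))
        return torch.sigmoid(logits)  # per-pixel foreground probability


if __name__ == "__main__":
    model = MSSTSketch()
    clip = torch.randn(1, 5, 3, 128, 128)  # one 5-frame clip
    print(model(clip).shape)               # torch.Size([1, 1, 128, 128])
```

In such a design, keeping a separate ConvLSTM per scale lets coarse scales capture slow background dynamics while fine scales track small moving objects; the fusion step (here a simple sum of upsampled logits) would be where the paper's learned multi-scale fusion takes place.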

Details

Language :
English
ISSN :
2169-3536
Volume :
7
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.4c47372ec66d4bd18b10169a13498924
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2019.2930319