
An end-to-end multi-scale network for action prediction in videos

Authors:
Liu, Xiaofa
Yin, Jianqin
Sun, Yuan
Zhang, Zhicheng
Tang, Jin
Publication Year:
2022

Abstract

In this paper, we develop an efficient multi-scale network to predict action classes in partial videos in an end-to-end manner. Unlike most existing methods, which rely on offline feature generation, our method takes frames directly as input and models motion evolution on two different temporal scales. This avoids both the complexity of two-stage modeling and the insufficient temporal and spatial information of a single scale. Our proposed End-to-End MultiScale Network (E2EMSNet) is composed of two scales, named the segment scale and the observed global scale. The segment scale applies 2D convolutions to temporal differences over consecutive frames to capture finer motion patterns. For the observed global scale, a Long Short-Term Memory (LSTM) network is incorporated to capture motion features of the observed frames. Our model provides a simple and efficient modeling framework with a small computational cost. E2EMSNet is evaluated on three challenging datasets: BIT, HMDB51, and UCF101. Extensive experiments demonstrate the effectiveness of our method for action prediction in videos.

Comment: 12 pages, 6 figures
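To illustrate the two-scale data flow the abstract describes, here is a minimal NumPy sketch (not the authors' implementation): temporal differences over consecutive frames are pooled per segment as a stand-in for the segment-scale 2D convolutions, and a hand-rolled LSTM cell summarizes the segment features as the observed global scale. The segment length, hidden size, and pooling are illustrative assumptions.

```python
import numpy as np

def segment_scale_features(frames, seg_len=4):
    """frames: (T, H, W) toy grayscale clip. Returns per-segment motion features."""
    # Temporal difference over consecutive frames emphasizes fine motion.
    diffs = frames[1:] - frames[:-1]              # (T-1, H, W)
    # Group the differences into segments; a real model would apply 2D
    # convolutions here -- we just spatially pool each difference map.
    n_seg = diffs.shape[0] // seg_len
    feats = []
    for s in range(n_seg):
        seg = diffs[s * seg_len:(s + 1) * seg_len]
        feats.append(seg.mean(axis=(1, 2)))       # (seg_len,) pooled feature
    return np.stack(feats)                        # (n_seg, seg_len)

def lstm_global_scale(x, hidden=8, seed=0):
    """Minimal LSTM over segment features: the 'observed global scale'."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    Wx = rng.standard_normal((d, 4 * hidden)) * 0.1
    Wh = rng.standard_normal((hidden, 4 * hidden)) * 0.1
    b = np.zeros(4 * hidden)
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        gates = x[t] @ Wx + h @ Wh + b
        i, f, o, u = np.split(gates, 4)
        c = sig(f) * c + sig(i) * np.tanh(u)      # update cell state
        h = sig(o) * np.tanh(c)                   # update hidden state
    return h  # summary of motion observed so far

# 17 toy frames -> 16 difference maps -> 4 segments of length 4.
frames = np.random.default_rng(1).standard_normal((17, 8, 8))
feats = segment_scale_features(frames)            # (4, 4)
h = lstm_global_scale(feats)                      # (8,)
```

In the full model, `h` would feed a classifier that predicts the action class from only the partially observed video.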

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2301.01216
Document Type:
Working Paper