
One-Shot SADI-EPE: A Visual Framework of Event Progress Estimation.

Authors :
Yin, Jianqin
Liu, Xiaoli
Sun, Fuchun
Liu, Huaping
Liu, Zhiqiang
Wang, Bin
Liu, Jun
Yin, Yilong
Source :
IEEE Transactions on Circuits & Systems for Video Technology. Jun2019, Vol. 29 Issue 6, p1659-1671. 13p.
Publication Year :
2019

Abstract

In many practical engineering applications, it is important to know how many actions have been completed, particularly for an untrimmed video sequence containing an event composed of a series of actions. In this paper, we term this process visual event progress estimation (EPE). However, little research in the community has addressed this problem. To solve it, a visual human action analysis-based framework, namely one-shot simultaneous action detection and identification (SADI)-EPE, is presented in this paper. Visual EPE is modeled as an online one-shot learning problem, and the framework is formulated with a sliding window and an attention-based bag of key poses. Unlike most action analysis methods, which rely on large amounts of training data for predefined classes, our method can realize SADI for any event given only one sample of that event, which makes it feasible for practical applications. Moreover, our algorithm realizes not only SADI but also progress estimation of the event. In terms of methodology, a key pose is defined by an invariant pose descriptor computed from skeletal and silhouette data. Furthermore, in order to extract representative and discriminative poses from a single training sample, we present a new bidirectional $k$-NN-based attention-weighted key pose selection method, which filters out unrelated actions and models the varying importance of different key poses. In addition, an attention-based multi-modal fusion scheme, which addresses the difficulty of high-dimensional features and few training samples, is proposed to improve the performance of our algorithm. Finally, we propose an evaluation criterion for the estimation problem. Extensive results demonstrate the efficacy of the proposed framework. [ABSTRACT FROM AUTHOR]
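The bidirectional $k$-NN-based, attention-weighted key pose selection summarized in the abstract could be sketched roughly as below. This is a minimal illustrative reading, not the authors' implementation: the function name, the flat-vector pose representation, and the use of mutual-neighbour counts as attention weights are all assumptions.

```python
import numpy as np

def bidirectional_knn_weights(poses, k=3):
    """Toy sketch: weight candidate key poses by mutual k-NN support.

    poses: (n, d) array of pose descriptors from one training sample
           (hypothetical flat-vector representation).
    Idea (assumed): pose i supports pose j if j is among i's k nearest
    neighbours; poses that appear in each other's neighbour lists
    (mutual neighbours) are treated as representative, while isolated,
    unrelated frames receive weight close to zero.
    """
    n = len(poses)
    # Pairwise squared distances; diagonal excluded from neighbour search.
    d2 = ((poses[:, None, :] - poses[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    knn = np.argsort(d2, axis=1)[:, :k]  # forward neighbour lists
    mutual = np.zeros(n)
    for i in range(n):
        for j in knn[i]:
            if i in knn[j]:              # backward (bidirectional) check
                mutual[i] += 1
    # Normalize mutual-neighbour counts into attention weights.
    total = mutual.sum()
    return mutual / total if total > 0 else np.full(n, 1.0 / n)
```

Under this reading, a frame from an unrelated action far from the event's pose clusters collects no mutual neighbours and is effectively filtered out, while frequently revisited poses receive the largest attention weights.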

Details

Language :
English
ISSN :
10518215
Volume :
29
Issue :
6
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
136847402
Full Text :
https://doi.org/10.1109/TCSVT.2018.2847305