
Unsupervised Universal Attribute Modeling for Action Recognition

Authors :
Krishna Mohan Chalavadi
Debaditya Roy
Sri Rama Murty Kodukula
Affiliation :
EE Department, IIT Hyderabad
Source :
IEEE Transactions on Multimedia. 21:1672-1680
Publication Year :
2019
Publisher :
Institute of Electrical and Electronics Engineers (IEEE), 2019.

Abstract

A fixed-dimensional representation for action clips of varying lengths has been proposed in the literature using aggregation models such as bag-of-words and Fisher vectors. These representations are high-dimensional and require classification techniques for action recognition. In this paper, we propose a framework for unsupervised extraction of a discriminative low-dimensional representation called the action-vector. To start with, local spatio-temporal features are used to capture action attributes implicitly in a large Gaussian mixture model called the universal attribute model (UAM). To enhance the contribution of the significant attributes in each action clip, a maximum a posteriori (MAP) adaptation of the UAM means is performed for each clip. This results in a concatenated mean vector called the super action vector (SAV) for each action clip. However, the SAV is still high-dimensional because of the presence of redundant attributes. Hence, we employ factor analysis to represent every SAV in terms of only the few important attributes contributing to the action clip. This leads to a low-dimensional representation called the action-vector. The entire procedure requires no class labels and produces action-vectors that are distinct representations of each action, irrespective of the inter-actor variability encountered in unconstrained videos. An evaluation on the trimmed action datasets UCF101 and HMDB51 demonstrates the efficacy of action-vectors for action classification over state-of-the-art techniques. Moreover, we show that action-vectors can adequately represent untrimmed videos from the THUMOS14 dataset and produce classification results comparable to those of existing techniques.
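The pipeline in the abstract follows a GMM-UBM-style recipe: fit a large GMM (the UAM) on pooled spatio-temporal features, MAP-adapt its means to each clip to form a super action vector, then compress with factor analysis. The sketch below is a minimal illustration of those three stages using scikit-learn; the component count, relevance factor, feature dimensionality, and the choice of GaussianMixture and FactorAnalysis are assumptions made here for illustration, not the authors' implementation.

# Illustrative sketch of the action-vector pipeline (assumed parameters, not the paper's code):
# (1) fit the UAM, a large diagonal-covariance GMM, on features pooled over all clips;
# (2) MAP-adapt the UAM means to each clip and concatenate them into a super action vector (SAV);
# (3) reduce the SAVs with factor analysis to obtain low-dimensional action-vectors.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import FactorAnalysis

def fit_uam(pooled_features, n_components=256, seed=0):
    """Fit the universal attribute model on features from all training clips."""
    uam = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed)
    uam.fit(pooled_features)
    return uam

def super_action_vector(uam, clip_features, relevance=16.0):
    """MAP-adapt the UAM means to one clip and concatenate them into an SAV."""
    post = uam.predict_proba(clip_features)        # (T, K) responsibilities
    n_k = post.sum(axis=0)                         # soft counts per component
    f_k = post.T @ clip_features                   # first-order statistics, (K, D)
    alpha = (n_k / (n_k + relevance))[:, None]     # data-dependent adaptation weights
    adapted = alpha * (f_k / np.maximum(n_k[:, None], 1e-8)) \
              + (1.0 - alpha) * uam.means_
    return adapted.ravel()                         # SAV of size K * D

def action_vectors(savs, dim=64, seed=0):
    """Project high-dimensional SAVs to low-dimensional action-vectors."""
    fa = FactorAnalysis(n_components=dim, random_state=seed)
    return fa.fit_transform(savs)

# Usage with random data standing in for local spatio-temporal descriptors:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clips = [rng.normal(size=(rng.integers(200, 400), 32)) for _ in range(20)]
    uam = fit_uam(np.vstack(clips), n_components=16)
    savs = np.array([super_action_vector(uam, c) for c in clips])
    av = action_vectors(savs, dim=8)
    print(av.shape)   # (20, 8) unsupervised action-vectors, one per clip

No class labels are used anywhere in this sketch, matching the unsupervised character of the method; a classifier (or nearest-neighbour matching) would be applied to the resulting action-vectors for recognition.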

Details

ISSN :
1941-0077 and 1520-9210
Volume :
21
Database :
OpenAIRE
Journal :
IEEE Transactions on Multimedia
Accession number :
edsair.doi...........e7c672690e7a6aebc2d2faad5d0b35e2
Full Text :
https://doi.org/10.1109/tmm.2018.2887021