
Modeling Multimodal Clues in a Hybrid Deep Learning Framework for Video Classification.

Authors :
Jiang, Yu-Gang
Wu, Zuxuan
Tang, Jinhui
Li, Zechao
Xue, Xiangyang
Chang, Shih-Fu
Source :
IEEE Transactions on Multimedia; Nov 2018, Vol. 20 Issue 11, p3137-3147, 11p
Publication Year :
2018

Abstract

Videos are inherently multimodal. This paper studies the problem of exploiting the abundant multimodal clues for improved video classification performance. We introduce a novel hybrid deep learning framework that integrates useful clues from multiple modalities, including static spatial appearance information, motion patterns within a short time window, audio information, as well as long-range temporal dynamics. More specifically, we utilize three Convolutional Neural Networks (CNNs) operating on appearance, motion, and audio signals to extract their corresponding features. We then employ a feature fusion network to derive a unified representation with an aim to capture the relationships among features. Furthermore, to exploit the long-range temporal dynamics in videos, we apply two long short-term memory (LSTM) networks with extracted appearance and motion features as inputs. Finally, we also propose refining the prediction scores by leveraging contextual relationships among video semantics. The hybrid deep learning framework is able to exploit a comprehensive set of multimodal features for video classification. Through an extensive set of experiments, we demonstrate that: 1) LSTM networks that model sequences in an explicitly recurrent manner are highly complementary to the CNN models; 2) the feature fusion network that produces a fused representation through modeling feature relationships outperforms a large set of alternative fusion strategies; and 3) the semantic context of video classes can help further refine the predictions for improved performance. Experimental results on two challenging benchmarks, the UCF-101 and the Columbia Consumer Videos (CCV), provide strong quantitative evidence that our framework can produce promising results: 93.1% on the UCF-101 and 84.5% on the CCV, outperforming several competing methods with clear margins.
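The abstract describes a pipeline with per-modality CNN features, a fusion network producing a unified representation, LSTMs over appearance and motion feature sequences, and a combination of the resulting prediction streams. The sketch below is a minimal illustration of that structure in PyTorch, not the authors' implementation: the feature dimensions, layer sizes, mean-pooling of clip features, and the simple score averaging are all assumptions, and the paper's context-based score refinement step is omitted.

```python
# Minimal sketch of the hybrid framework outlined in the abstract.
# All dimensions, backbones, and the late score average are illustrative
# assumptions; they are not taken from the paper.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Fuses per-modality clip features into a unified representation."""
    def __init__(self, in_dims, hidden_dim, num_classes):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(sum(in_dims), hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, feats):
        # feats: list of (batch, dim) tensors, one per modality
        return self.fuse(torch.cat(feats, dim=1))

class TemporalNet(nn.Module):
    """LSTM over a sequence of frame-level features from one modality."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.cls = nn.Linear(hidden_dim, num_classes)

    def forward(self, seq):
        # seq: (batch, time, dim); classify from the last hidden state
        _, (h_n, _) = self.lstm(seq)
        return self.cls(h_n[-1])

class HybridClassifier(nn.Module):
    def __init__(self, app_dim, mot_dim, aud_dim, num_classes, hidden_dim=512):
        super().__init__()
        self.fusion = FusionNet([app_dim, mot_dim, aud_dim], hidden_dim, num_classes)
        self.app_lstm = TemporalNet(app_dim, hidden_dim, num_classes)
        self.mot_lstm = TemporalNet(mot_dim, hidden_dim, num_classes)

    def forward(self, app_seq, mot_seq, aud_feat):
        # Clip-level features: mean-pool the frame sequences (assumption)
        fused = self.fusion([app_seq.mean(1), mot_seq.mean(1), aud_feat])
        # Long-range temporal dynamics via LSTMs on appearance and motion
        app_scores = self.app_lstm(app_seq)
        mot_scores = self.mot_lstm(mot_seq)
        # Simple average of the three prediction streams (assumed combination rule)
        return (fused + app_scores + mot_scores) / 3.0

# Usage with random stand-in CNN features: 2 videos, 10 time steps, 101 classes
model = HybridClassifier(app_dim=2048, mot_dim=2048, aud_dim=128, num_classes=101)
scores = model(torch.randn(2, 10, 2048), torch.randn(2, 10, 2048), torch.randn(2, 128))
print(scores.shape)  # torch.Size([2, 101])
```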

Details

Language :
English
ISSN :
1520-9210
Volume :
20
Issue :
11
Database :
Complementary Index
Journal :
IEEE Transactions on Multimedia
Publication Type :
Academic Journal
Accession number :
132477469
Full Text :
https://doi.org/10.1109/TMM.2018.2823900