Deep Fusion of Multiple Semantic Cues for Complex Event Recognition.

Authors :
Zhang, Xishan
Zhang, Hanwang
Zhang, Yongdong
Yang
Wang, Meng
Luan, Huanbo
Li, Jintao
Chua, Tat-Seng
Source :
IEEE Transactions on Image Processing; Mar 2016, Vol. 25 Issue 3, p1033-1046, 14p
Publication Year :
2016

Abstract

We present a deep learning strategy to fuse multiple semantic cues for complex event recognition. In particular, we tackle the recognition task by jointly analyzing human actions (who is doing what), objects (what), and scenes (where). First, each type of semantic feature (e.g., human action trajectories) is fed into a corresponding multi-layer feature abstraction pathway, followed by a fusion layer connecting all the different pathways. Second, the correlations among the semantic cues, i.e., how they interact with each other, are learned in an unsupervised fashion with a cross-modality autoencoder. Finally, by fine-tuning a large-margin objective deployed on this deep architecture, we are able to answer how the semantic cues of who, what, and where compose a complex event. Compared with traditional feature fusion methods (e.g., various early or late fusion strategies), our method jointly learns the higher-level features that are most effective for fusion and recognition. We perform extensive experiments on two real-world complex event video benchmarks, MED’11 and CCV, and demonstrate that our method outperforms the best published results by 21% and 11%, respectively, on an event recognition task.
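
As a concrete illustration of the architecture the abstract describes, the minimal PyTorch sketch below wires three feature-abstraction pathways (action, object, scene) into a shared fusion layer, pretrains the fused representation with a cross-modality autoencoder that reconstructs every cue from the joint code, and fine-tunes with a multi-class hinge loss standing in for the large-margin objective. All layer sizes, the two-layer pathway depth, and the specific losses (MSE reconstruction, nn.MultiMarginLoss) are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch only; dimensions, depths, and losses are assumptions,
# not the exact setup from the paper.
import torch
import torch.nn as nn

class Pathway(nn.Module):
    """Multi-layer feature abstraction for one semantic cue."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, hid_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class DeepFusion(nn.Module):
    def __init__(self, dims, hid_dim, n_events):
        super().__init__()
        # One abstraction pathway per cue: action, object, scene.
        self.pathways = nn.ModuleList(Pathway(d, hid_dim) for d in dims)
        # Fusion layer connecting all pathways into a joint representation.
        self.fusion = nn.Sequential(nn.Linear(len(dims) * hid_dim, hid_dim), nn.ReLU())
        # Decoders reconstruct every cue from the joint code
        # (cross-modality autoencoder, used for unsupervised pretraining).
        self.decoders = nn.ModuleList(nn.Linear(hid_dim, d) for d in dims)
        self.classifier = nn.Linear(hid_dim, n_events)

    def fuse(self, cues):
        parts = [p(x) for p, x in zip(self.pathways, cues)]
        return self.fusion(torch.cat(parts, dim=1))

    def reconstruct(self, cues):
        code = self.fuse(cues)
        return [dec(code) for dec in self.decoders]

    def forward(self, cues):
        return self.classifier(self.fuse(cues))

dims, hid_dim, n_events, batch = [512, 1024, 256], 512, 20, 8
model = DeepFusion(dims, hid_dim, n_events)
cues = [torch.randn(batch, d) for d in dims]
labels = torch.randint(0, n_events, (batch,))

# Stage 1: unsupervised cross-modality pretraining (reconstruct each cue).
recon_loss = sum(nn.functional.mse_loss(r, x)
                 for r, x in zip(model.reconstruct(cues), cues))

# Stage 2: supervised fine-tuning with a large-margin (multi-class hinge) loss.
margin_loss = nn.MultiMarginLoss(margin=1.0)(model(cues), labels)
(recon_loss + margin_loss).backward()
```

In the staged training the abstract describes, the autoencoder pretraining and the large-margin fine-tuning would run sequentially; the single combined backward pass here is only to keep the sketch compact and runnable.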

Details

Language :
English
ISSN :
1057-7149
Volume :
25
Issue :
3
Database :
Complementary Index
Journal :
IEEE Transactions on Image Processing
Publication Type :
Academic Journal
Accession Number :
112441792
Full Text :
https://doi.org/10.1109/TIP.2015.2511585