
A three-level framework for affective content analysis and its case studies

Authors :
Xu, M
Wang, J
He, X
Jin, JS
Luo, S
Lu, H
Publication Year :
2014

Abstract

Emotional factors directly reflect audiences' attention, evaluation and memory. Recently, video affective content analysis has attracted increasing research effort. Most existing methods map low-level affective features directly to emotions through machine learning. Compared with the human perception process, however, there is a gap between low-level features and high-level human perception of emotion. To bridge this gap, we propose a three-level affective content analysis framework that introduces a mid-level representation to indicate dialog, audio emotional events (e.g., horror sounds and laughter) and textual concepts (e.g., informative keywords). The mid-level representation is obtained by machine learning on low-level features and is used to infer high-level affective content. We further apply the proposed framework in a number of case studies. Audio emotional events, dialog and subtitles are studied to assist affective content detection in different video domains/genres. Multiple modalities are considered for affective analysis, since each modality has its own merit in evoking emotions. Experimental results show that the proposed framework is effective and efficient for affective content analysis, and that audio emotional events, dialog and subtitles are promising mid-level representations. © 2012 Springer Science+Business Media, LLC.
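
The abstract describes a three-level pipeline: low-level features feed detectors for mid-level events (e.g., laughter, horror sounds, dialog), and the resulting mid-level representation is then used to infer high-level affective content. The sketch below illustrates that structure only; the feature extraction, the choice of SVM classifiers, the event names, and the fusion step are all assumptions for illustration, not the authors' exact method.

```python
# Minimal structural sketch of a three-level affective content pipeline.
# Feature choices, classifiers and event names are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC


def extract_low_level_features(clip) -> np.ndarray:
    """Level 1: low-level features per clip (hypothetical placeholder,
    e.g., MFCCs, pitch, motion intensity)."""
    return np.asarray(clip["features"], dtype=float)


class MidLevelDetectors:
    """Level 2: one binary detector per mid-level event, trained on
    low-level features; its outputs form the mid-level representation."""

    def __init__(self, events=("laughter", "horror_sound", "dialog")):
        self.events = events
        self.models = {e: SVC(probability=True) for e in events}

    def fit(self, X, labels):
        # labels: dict mapping event name -> binary label array
        for e in self.events:
            self.models[e].fit(X, labels[e])
        return self

    def transform(self, X):
        # Mid-level representation: per-event probability for each clip
        return np.column_stack(
            [self.models[e].predict_proba(X)[:, 1] for e in self.events]
        )


class AffectiveClassifier:
    """Level 3: infer high-level affective content from mid-level cues."""

    def __init__(self):
        self.model = SVC()

    def fit(self, mid_level, emotion_labels):
        self.model.fit(mid_level, emotion_labels)
        return self

    def predict(self, mid_level):
        return self.model.predict(mid_level)
```

In use, one would fit the mid-level detectors on event-labeled clips, transform all clips into event-probability vectors, and train the affective classifier on those vectors with emotion labels; this mirrors the paper's idea of inserting an interpretable mid-level layer between low-level features and high-level emotion, rather than mapping features to emotions directly.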

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1197449951
Document Type :
Electronic Resource