Improved Video Emotion Recognition With Alignment of CNN and Human Brain Representations.
- Source :
- IEEE Transactions on Affective Computing; Jul-Sep 2024, Vol. 15, Issue 3, p1026-1040, 15p
- Publication Year :
- 2024
Abstract
- The ability to perceive emotions is an important criterion for judging whether a machine is intelligent. To this end, a large number of emotion recognition algorithms have been developed, especially for visual information such as video. Most previous studies rely on hand-crafted features or CNNs, where the former fail to extract expressive features and the latter still face the undesired affective gap. This motivates us to ask what would happen if we could incorporate the human capability for emotional perception into a CNN. In this paper, we address this question by exploring the alignment between the representations of neural networks and human brain activity. In particular, we employ a dataset of visually evoked emotional brain activity to conduct a joint training strategy for the CNN. During training, we introduce representation similarity analysis (RSA) to align the CNN with the human brain and obtain more brain-like features. Specifically, representation similarity matrices (RSMs) of multiple convolutional layers are averaged with learnable weights and related to the RSM of human brain activity. To obtain emotion-related brain activity, we perform voxel selection and denoising with a banded ridge model before computing the RSM. Extensive experiments on two challenging video emotion recognition datasets and multiple popular CNN architectures suggest that human brain activity can provide a useful inductive bias for CNNs toward better emotion recognition performance. [ABSTRACT FROM AUTHOR]
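- The sketch below illustrates the general RSA-alignment idea described in the abstract: per-layer CNN RSMs are combined with learnable weights and encouraged to correlate with a precomputed brain RSM. It is a minimal, hypothetical reconstruction; all names (`rsm`, `RSAAlignmentLoss`, `layer_feats`, `brain_rsm`) and the specific correlation-based loss are assumptions, not the authors' published implementation.

```python
# Minimal sketch of RSA-style alignment between CNN layer features and a
# fixed brain RSM (assumed to be computed offline from denoised,
# voxel-selected fMRI responses). Illustrative only; details may differ
# from the paper's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rsm(features: torch.Tensor) -> torch.Tensor:
    """Representation similarity matrix: pairwise cosine similarity between
    the (centered) feature vectors of each stimulus."""
    x = features.flatten(start_dim=1)              # (n_stimuli, n_features)
    x = x - x.mean(dim=1, keepdim=True)
    x = F.normalize(x, dim=1)
    return x @ x.t()                               # (n_stimuli, n_stimuli)

class RSAAlignmentLoss(nn.Module):
    """Relates a learnably weighted average of per-layer CNN RSMs to a brain RSM."""
    def __init__(self, n_layers: int):
        super().__init__()
        # Learnable mixing weights over convolutional layers (softmax-normalized).
        self.layer_logits = nn.Parameter(torch.zeros(n_layers))

    def forward(self, layer_feats: list, brain_rsm: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.layer_logits, dim=0)
        cnn_rsm = sum(w * rsm(f) for w, f in zip(weights, layer_feats))
        # Compare only off-diagonal entries (the diagonal is trivially 1).
        mask = ~torch.eye(brain_rsm.size(0), dtype=torch.bool, device=brain_rsm.device)
        a, b = cnn_rsm[mask], brain_rsm[mask]
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return 1.0 - (a * b).mean()                # 1 - Pearson correlation
```

- In a joint training setup of this kind, the loss above would typically be added, with a weighting coefficient, to the standard emotion classification loss so that gradients push the selected layers toward more brain-like representations.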
Details
- Language :
- English
- ISSN :
- 1949-3045
- Volume :
- 15
- Issue :
- 3
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Affective Computing
- Publication Type :
- Academic Journal
- Accession number :
- 179509512
- Full Text :
- https://doi.org/10.1109/TAFFC.2023.3316173