Modeling Hierarchical Uncertainty for Multimodal Emotion Recognition in Conversation.
- Source :
- IEEE transactions on cybernetics [IEEE Trans Cybern] 2024 Jan; Vol. 54 (1), pp. 187-198. Date of Electronic Publication: 2023 Dec 20.
- Publication Year :
- 2024
Abstract
- Approximating the uncertainty of an emotional AI agent is crucial for improving the reliability of such agents and for facilitating human-in-the-loop solutions, especially in critical scenarios. However, none of the existing systems for emotion recognition in conversation (ERC) attempts to estimate the uncertainty of its predictions. In this article, we present HU-Dialogue, which models hierarchical uncertainty for the ERC task. We perturb contextual attention weights with source-adaptive noise within each modality as a regularization scheme to model context-level uncertainty, and we adapt a Bayesian deep learning method to the capsule-based prediction layer to model modality-level uncertainty. Furthermore, a weight-sharing triplet structure with conditional layer normalization is introduced to detect both invariance and equivariance among modalities for ERC. We provide a detailed empirical analysis of extensive experiments, which shows that our model outperforms previous state-of-the-art methods on three popular multimodal ERC datasets.
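The record gives no implementation details; as an illustration only, the minimal PyTorch sketch below shows one way the abstract's noise-perturbed contextual attention could be realized. The module name NoisyContextAttention, the single learnable log_noise_scale, and the Gaussian noise form are assumptions for this sketch, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyContextAttention(nn.Module):
    """Hypothetical sketch of context-level uncertainty modeling: self-attention
    whose logits are perturbed during training by noise with a learnable,
    per-source scale, used purely as a regularizer."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        # Assumed: one learnable noise scale per modality ("source-adaptive").
        self.log_noise_scale = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_utterances, dim) context features for one modality.
        q, k, v = self.query(x), self.key(x), self.value(x)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        if self.training:
            # Perturb the contextual attention with Gaussian noise whose scale
            # is learned per source; the paper's exact noise form may differ.
            scores = scores + torch.randn_like(scores) * self.log_noise_scale.exp()
        attn = F.softmax(scores, dim=-1)
        return attn @ v

# Example: two conversations, 8 utterances each, 128-d features for one modality.
features = torch.randn(2, 8, 128)
layer = NoisyContextAttention(128)
out = layer(features)  # shape (2, 8, 128)
```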
- Subjects :
- Humans
- Uncertainty
- Bayes Theorem
- Reproducibility of Results
- Emotions
Details
- Language :
- English
- ISSN :
- 2168-2275
- Volume :
- 54
- Issue :
- 1
- Database :
- MEDLINE
- Journal :
- IEEE transactions on cybernetics
- Publication Type :
- Academic Journal
- Accession number :
- 35820006
- Full Text :
- https://doi.org/10.1109/TCYB.2022.3185119