
All rivers run into the sea: Unified Modality Brain-like Emotional Central Mechanism

Authors:
Mai, Xinji
Lin, Junxiong
Wang, Haoran
Tao, Zeng
Wang, Yan
Yan, Shaoqi
Tong, Xuan
Yu, Jiawen
Wang, Boyang
Zhou, Ziheng
Zhao, Qing
Gao, Shuyong
Zhang, Wenqiang
Publication Year:
2024

Abstract

In affective computing, fully leveraging information from a variety of sensory modalities is essential for comprehensively understanding and processing human emotions. Inspired by the way the human brain handles emotions and by the theory of cross-modal plasticity, we propose UMBEnet, a brain-like unified-modality affective processing network. The core design of UMBEnet comprises a Dual-Stream (DS) structure that fuses inherent prompts with a Prompt Pool, and a Sparse Feature Fusion (SFF) module. The Prompt Pool is designed to integrate information from different modalities, while the inherent prompts strengthen the system's predictive guidance and manage knowledge related to emotion classification. Moreover, given the sparsity of effective information across modalities, the SFF module makes full use of all available sensory data through the sparse integration of modality-fusion prompts and inherent prompts, maintaining high adaptability and sensitivity to complex emotional states. Extensive experiments on the largest benchmark datasets in the Dynamic Facial Expression Recognition (DFER) field, including DFEW, FERV39k, and MAFW, show that UMBEnet consistently outperforms current state-of-the-art methods. Notably, in both modality-missing and multimodal settings, UMBEnet significantly surpasses the leading methods, demonstrating strong performance and adaptability on tasks that involve complex emotional understanding with rich multimodal information.
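To make the abstract's mechanism concrete, the following is a minimal illustrative sketch, not the authors' implementation: each modality feature sparsely selects its top-k nearest prompts from a shared pool (the "Prompt Pool"), the selections are fused across modalities, and inherent (emotion-class) prompts are appended. All function names, dimensions, and the 50/50 mixing weight are hypothetical assumptions for illustration.

```python
import numpy as np

def sparse_prompt_fusion(modality_feats, prompt_pool, inherent_prompts, top_k=2):
    """Hypothetical sketch of sparse prompt-based fusion.

    For each modality feature, pick the top_k most similar prompts from a
    shared pool (sparse selection), blend them with the feature, then merge
    across modalities and append the mean inherent (class) prompt.
    """
    fused = []
    for feat in modality_feats:            # one feature vector per modality
        sims = prompt_pool @ feat          # unnormalized similarity scores
        idx = np.argsort(sims)[-top_k:]    # indices of the top_k prompts
        selected = prompt_pool[idx].mean(axis=0)
        fused.append(0.5 * feat + 0.5 * selected)  # assumed 50/50 blend
    fused = np.mean(fused, axis=0)         # merge modalities into one vector
    return np.concatenate([fused, inherent_prompts.mean(axis=0)])

rng = np.random.default_rng(0)
d = 8                                      # toy feature dimension
feats = [rng.normal(size=d) for _ in range(3)]  # e.g. visual, audio, text
pool = rng.normal(size=(10, d))            # shared prompt pool (10 prompts)
inherent = rng.normal(size=(7, d))         # one inherent prompt per emotion class
out = sparse_prompt_fusion(feats, pool, inherent)
print(out.shape)                           # concatenated fused + class vector
```

Because the fusion simply averages over however many modality features are supplied, the sketch also degrades gracefully when a modality is absent, loosely mirroring the modality-missing scenarios the abstract describes.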

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.15590
Document Type:
Working Paper