
Triple-ATFME: Triple-Branch Attention Fusion Network for Micro-Expression Recognition.

Authors :
Li, Fei
Nie, Ping
You, Meiming
Chen, Zhichao
Wang, Guoqiang
Source :
Arabian Journal for Science & Engineering (Springer Science & Business Media B.V.). Apr 2024, p1-17.
Publication Year :
2024

Abstract

Micro-expressions (MEs) are momentary, subtle facial movements of short duration that reveal people's genuine emotions and thought processes. Continuing advances in deep learning have brought both new opportunities and new challenges to research in micro-expression recognition (MER). Because ME movements are low in intensity and subtle, and sample data are limited, a single-stream model that extracts features from a single view cannot capture sufficient information. This paper therefore proposes the Triple-Branch Attention Fusion Network (Triple-ATFME) for MER. Triple-ATFME consists mainly of a preprocessing stage, a triple-branch ShuffleNet module, and an adaptive channel attention module, allowing the model to extract multi-view features through a multi-path network. First, the framework crops the facial region and applies the optical flow method between the initial (onset) frame and the peak (apex) frame, yielding multiple types of optical flow features. Second, the extracted features are fed into the Triple-ATFME network, whose three sub-modules extract deep, multi-view hidden features. To keep the model from focusing excessively on local information during feature fusion, a simple yet effective adaptive channel fusion attention module (CFAM) calibrates the channel features and facilitates the fusion of multi-view features. Finally, ablation experiments demonstrate that the multi-view features and the Triple-ATFME network learn MEs more effectively. Experimental results show that Triple-ATFME achieves a UF1 of 0.7609 and a UAR of 0.7565 on the combined MEGC2019 dataset, outperforming state-of-the-art MER methods under leave-one-out validation.
[ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
2193-567X
Database :
Academic Search Index
Journal :
Arabian Journal for Science & Engineering (Springer Science & Business Media B.V.)
Publication Type :
Academic Journal
Accession number :
176664777
Full Text :
https://doi.org/10.1007/s13369-024-08973-z