
MLCA-AVSR: Multi-Layer Cross Attention Fusion based Audio-Visual Speech Recognition

Authors :
Wang, He
Guo, Pengcheng
Zhou, Pan
Xie, Lei
Publication Year :
2024

Abstract

While automatic speech recognition (ASR) systems degrade significantly in noisy environments, audio-visual speech recognition (AVSR) systems aim to complement the audio stream with noise-invariant visual cues and improve the system's robustness. However, current studies mainly focus on fusing the well-learned modality features, like the output of modality-specific encoders, without considering the contextual relationship during the modality feature learning. In this study, we propose a multi-layer cross-attention fusion based AVSR (MLCA-AVSR) approach that promotes representation learning of each modality by fusing them at different levels of the audio/visual encoders. Experimental results on the MISP2022-AVSR Challenge dataset show the efficacy of our proposed system, achieving a concatenated minimum permutation character error rate (cpCER) of 30.57% on the Eval set and yielding up to 3.17% relative improvement over our previous system, which ranked second in the challenge. After fusing multiple systems, our proposed approach surpasses the first-place system, establishing a new state-of-the-art (SOTA) cpCER of 29.13% on this dataset.

Comment: 5 pages, 3 figures. Accepted at ICASSP 2024.
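To make the fusion idea concrete, here is a minimal PyTorch sketch of cross-attention fusion applied at intermediate encoder levels, as the abstract describes. It is not the authors' exact architecture; the class name, dimensions, and the choice of a residual connection with layer normalization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Refine one modality's features by attending to the other.

    Queries come from the target stream; keys/values from the other
    stream, so each modality picks up complementary context while its
    encoder is still learning features (not only after it finishes).
    """
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        # Residual cross-attention: x attends to the other modality,
        # then the fused update is added back and normalized.
        fused, _ = self.attn(query=x, key=other, value=other)
        return self.norm(x + fused)

# Hypothetical usage between encoder layers: each stream attends to
# the other at several depths, not just at the final encoder outputs.
audio = torch.randn(2, 100, 256)   # (batch, audio frames, dim)
video = torch.randn(2, 25, 256)    # (batch, video frames, dim)
a_fuse, v_fuse = CrossAttentionFusion(256), CrossAttentionFusion(256)
audio = a_fuse(audio, video)       # audio refined by visual cues
video = v_fuse(video, audio)       # visual refined by audio cues
```

In a multi-layer scheme like the one named in the title, a block of this kind would be inserted after several intermediate layers of both encoders, so the "contextual relationship" between modalities shapes feature learning throughout, rather than only at a single late-fusion point.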

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2401.03424
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/ICASSP48485.2024.10446769