
Robust Audiovisual Speech Recognition Models with Mixture-of-Experts

Authors :
Wu, Yihan
Peng, Yifan
Lu, Yichen
Chang, Xuankai
Song, Ruihua
Watanabe, Shinji
Publication Year :
2024

Abstract

Visual signals can enhance audiovisual speech recognition accuracy by providing additional contextual information. Given the complexity of visual signals, an audiovisual speech recognition model requires robust generalization capabilities across diverse video scenarios, which presents a significant challenge. In this paper, we introduce EVA, which leverages a mixture-of-Experts for audioVisual ASR to perform robust speech recognition on "in-the-wild" videos. Specifically, we first encode visual information into a visual token sequence and map it into the speech space via a lightweight projection. We then build EVA upon a robust pretrained speech recognition model, ensuring its generalization ability. Moreover, to incorporate visual information effectively, we inject it into the ASR model through a mixture-of-experts module. Experiments show that our model achieves state-of-the-art results on three benchmarks, demonstrating the generalization ability of EVA across diverse video domains.

Comment: 6 pages, 2 figures, accepted by IEEE Spoken Language Technology Workshop 2024
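The pipeline the abstract describes (visual tokens, a lightweight projection into the speech space, then a mixture-of-experts fusion) might be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: all dimensions, weight names, and the choice of simple linear experts with a softmax router are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax for the expert router
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: 8 visual tokens, visual dim 32, speech dim 64, 4 experts
T_v, d_v, d_s, n_experts = 8, 32, 64, 4

# Lightweight projection mapping visual tokens into the speech space
W_proj = rng.normal(scale=0.02, size=(d_v, d_s))
visual_tokens = rng.normal(size=(T_v, d_v))
visual_in_speech = visual_tokens @ W_proj              # (T_v, d_s)

# Mixture-of-experts: a router assigns per-token weights over experts,
# and each expert applies its own transform (linear here, for brevity)
W_router = rng.normal(scale=0.02, size=(d_s, n_experts))
W_experts = rng.normal(scale=0.02, size=(n_experts, d_s, d_s))

gates = softmax(visual_in_speech @ W_router)           # (T_v, n_experts)
expert_out = np.einsum('td,edo->teo', visual_in_speech, W_experts)
fused = np.einsum('te,teo->to', gates, expert_out)     # (T_v, d_s)
```

In EVA, the fused representation would then be injected into the pretrained ASR model; that integration point, and the actual expert architecture, are described in the paper itself.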

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2409.12370
Document Type :
Working Paper