
E-Branchformer: Branchformer with Enhanced merging for speech recognition

Authors:
Kim, Kwangyoun
Wu, Felix
Peng, Yifan
Pan, Jing
Sridhar, Prashant
Han, Kyu J.
Watanabe, Shinji
Publication Year: 2022

Abstract

Conformer, which combines convolution and self-attention sequentially to capture both local and global information, has shown remarkable performance and is currently regarded as the state of the art for automatic speech recognition (ASR). Several other studies have explored integrating convolution and self-attention, but they have not managed to match Conformer's performance. The recently introduced Branchformer achieves performance comparable to Conformer by using dedicated branches for convolution and self-attention and merging the local and global context from each branch. In this paper, we propose E-Branchformer, which enhances Branchformer by applying an effective merging method and stacking additional point-wise modules. E-Branchformer sets new state-of-the-art word error rates (WERs) of 1.81% and 3.65% on the LibriSpeech test-clean and test-other sets without using any external training data.

Comment: Accepted to SLT 2022
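The abstract's description of parallel branches, enhanced merging, and extra point-wise modules can be made concrete with a small sketch. The following PyTorch code is a hypothetical illustration, not the authors' implementation: the local branch is a plain depth-wise convolution standing in for Branchformer's cgMLP, and all module names, sizes, and the macaron-style 0.5 feed-forward scaling are assumptions for the sake of the example.

import torch
import torch.nn as nn


class MergeModule(nn.Module):
    """Enhanced merging (sketch): concatenate the two branches, mix
    neighboring frames with a depth-wise convolution, then project
    back to the model dimension."""

    def __init__(self, d_model: int, kernel_size: int = 31):
        super().__init__()
        self.depthwise = nn.Conv1d(
            2 * d_model, 2 * d_model, kernel_size,
            padding=kernel_size // 2, groups=2 * d_model)
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, global_out: torch.Tensor, local_out: torch.Tensor) -> torch.Tensor:
        x = torch.cat([global_out, local_out], dim=-1)  # (B, T, 2D)
        # Conv1d expects (B, C, T), hence the transposes.
        x = x + self.depthwise(x.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)


class EBranchformerBlock(nn.Module):
    """One E-Branchformer-style block: a self-attention branch for global
    context, a convolutional branch for local context (a simplified
    stand-in for cgMLP), enhanced merging, and macaron-style point-wise
    feed-forward modules around the branches."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.ffn1 = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.SiLU(),
            nn.Linear(4 * d_model, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.local_conv = nn.Conv1d(
            d_model, d_model, 31, padding=15, groups=d_model)
        self.merge = MergeModule(d_model)
        self.ffn2 = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.SiLU(),
            nn.Linear(4 * d_model, d_model))
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(5)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + 0.5 * self.ffn1(self.norms[0](x))   # first point-wise module
        g = self.norms[1](x)
        global_out, _ = self.attn(g, g, g)          # global branch (self-attention)
        l = self.norms[2](x)
        local_out = self.local_conv(l.transpose(1, 2)).transpose(1, 2)  # local branch
        x = x + self.merge(global_out, local_out)   # enhanced merging
        x = x + 0.5 * self.ffn2(self.norms[3](x))   # second point-wise module
        return self.norms[4](x)


block = EBranchformerBlock()
out = block(torch.randn(8, 100, 256))  # (batch, time, feature) -> same shape

The key difference from Branchformer that this sketch tries to capture is in MergeModule: rather than merging the branches with concatenation and a linear projection alone, a depth-wise convolution first exchanges information between adjacent frames of the concatenated representation.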

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2210.00077
Document Type: Working Paper