
Contextual RNN-T For Open Domain ASR

Authors :
Jain, Mahaveer
Keren, Gil
Mahadeokar, Jay
Zweig, Geoffrey
Metze, Florian
Saraf, Yatharth
Publication Year :
2020

Abstract

End-to-end (E2E) systems for automatic speech recognition (ASR), such as the RNN Transducer (RNN-T) and Listen-Attend-Spell (LAS), blend the individual components of a traditional hybrid ASR system (acoustic model, language model, pronunciation model) into a single neural network. While this has some nice advantages, it limits the system to training only on paired audio and text. Because of this, E2E models tend to have difficulty correctly recognizing rare words that are not frequently seen during training, such as entity names. In this paper, we propose modifications to the RNN-T model that allow it to utilize additional metadata text, with the objective of improving performance on these named entity words. We evaluate our approach on an in-house dataset sampled from de-identified public social media videos, which represents an open domain ASR task. By using an attention model and a biasing model to leverage the contextual metadata that accompanies a video, we observe a relative improvement of about 16% in Word Error Rate on Named Entities (WER-NE) for videos with related metadata.
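The abstract describes attending over embeddings of contextual metadata words to bias the recognizer toward them. As an illustration only (not the paper's exact architecture), the following is a minimal sketch of attention-based context biasing: a query derived from the decoder state attends over metadata-word embeddings, and the resulting context vector is concatenated onto the decoder state before prediction. All names (`bias_with_context`, `W_q`, `W_k`) and dimensions are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def bias_with_context(dec_state, context_embs, W_q, W_k):
    """Sketch of attention-based contextual biasing (illustrative, not the
    paper's exact method).

    dec_state:    (d,)   decoder / prediction-network state
    context_embs: (n, d) embeddings of the n metadata words
    W_q, W_k:     (d, d) hypothetical learned projection matrices
    """
    q = W_q @ dec_state                    # project state into a query
    keys = context_embs @ W_k.T            # project metadata embeddings into keys
    scores = keys @ q / np.sqrt(len(q))    # scaled dot-product scores, shape (n,)
    attn = softmax(scores)                 # attention weights over metadata words
    context_vec = attn @ context_embs      # weighted sum of metadata embeddings
    # Concatenate the biasing context onto the decoder state, so the joint
    # network can condition its output on the metadata.
    return np.concatenate([dec_state, context_vec])
```

In a real system the projections would be trained jointly with the RNN-T, and a learned "no-bias" option is typically needed so the model can ignore irrelevant metadata.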

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2006.03411
Document Type :
Working Paper