1. Live Streaming Speech Recognition Using Deep Bidirectional LSTM Acoustic Models and Interpolated Language Models
- Author
-
Universitat Politècnica de València. Departamento de Sistemas Informáticos y Computación - Departament de Sistemes Informàtics i Computació, Generalitat Valenciana, Agencia Estatal de Investigación, European Regional Development Fund, Universitat Politècnica de València, Jorge-Cano, Javier, Giménez Pastor, Adrián, Silvestre Cerdà, Joan Albert, Civera Saiz, Jorge, Sanchis Navarro, José Alberto, and Juan, Alfons
- Abstract
[EN] Although Long Short-Term Memory (LSTM) networks and deep Transformers are now extensively used in offline ASR, it is unclear how the best offline systems can be adapted to work under the streaming setup. After gaining considerable experience in this regard in recent years, in this paper we show how to build an optimized, low-latency streaming decoder in which bidirectional LSTM acoustic models, together with general interpolated language models, can be integrated with minimal performance degradation. In brief, our streaming decoder consists of a one-pass, real-time search engine relying on a limited-duration window sliding over time and a number of ad hoc acoustic and language model pruning techniques. Extensive empirical assessment is provided on truly streaming tasks derived from the well-known LibriSpeech and TED talks datasets, as well as from TV shows on a main Spanish broadcasting station.
- Published
- 2022
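The core idea described in the abstract, running a bidirectional acoustic model over a limited-duration window that slides over the incoming audio so that outputs can be emitted with bounded latency, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `fake_blstm` is a hypothetical stand-in for a bidirectional LSTM (each output mixes left-to-right and right-to-left context), and the window and lookahead sizes are arbitrary.

```python
import numpy as np

def fake_blstm(frames):
    # Hypothetical stand-in for a bidirectional LSTM acoustic model:
    # every output depends on both past (forward pass) and future
    # (backward pass) frames inside the window.
    fwd = np.cumsum(frames, axis=0)              # left-to-right context
    bwd = np.cumsum(frames[::-1], axis=0)[::-1]  # right-to-left context
    return fwd + bwd

def streaming_posteriors(frame_stream, window=8, lookahead=2):
    """One-pass streaming over a limited-duration sliding window.

    Incoming frames enter a bounded buffer; once `lookahead` future
    frames are available, the model is re-run over the current window
    and only the output for the frame `lookahead` positions back is
    emitted, so latency is bounded by `lookahead` frames rather than
    by the utterance length.
    """
    buf = []
    for frame in frame_stream:
        buf.append(frame)
        if len(buf) > window:          # drop the oldest frame
            buf.pop(0)
        if len(buf) > lookahead:       # enough right context available
            out = fake_blstm(np.stack(buf))
            yield out[len(buf) - 1 - lookahead]
```

With a stream of N frames and a lookahead of k, the generator emits N - k outputs, each computed with at most `window` frames of context, which is the essential trade-off (bounded latency and memory versus full-utterance bidirectional context) that the paper's streaming decoder addresses.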