
O-1: Self-training with Oracle and 1-best Hypothesis

Authors:
Baskar, Murali Karthick
Rosenberg, Andrew
Ramabhadran, Bhuvana
Audhkhasi, Kartik
Publication Year:
2023

Abstract

We introduce O-1, a new self-training objective to reduce training bias and unify training and evaluation metrics for speech recognition. O-1 is a faster variant of Expected Minimum Bayes Risk (EMBR) that boosts the oracle hypothesis and can accommodate both supervised and unsupervised data. We demonstrate the effectiveness of our approach in terms of recognition performance on the publicly available SpeechStew datasets and a large-scale, in-house dataset. On SpeechStew, the O-1 objective closes the gap between actual and oracle performance by 80% relative, compared to EMBR, which bridges the gap by 43% relative. O-1 achieves 13% to 25% relative improvement over EMBR across the datasets that make up SpeechStew, and a 12% relative gap reduction with respect to the oracle WER over EMBR training on the in-house dataset. Overall, O-1 results in a 9% relative improvement in WER over EMBR, demonstrating the scalability of the proposed objective to large-scale datasets.
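
The abstract characterizes O-1 as an EMBR-style objective that boosts the oracle hypothesis within an N-best list. The sketch below is illustrative only: it computes a standard expected-WER (EMBR-style) risk over an N-best list and a hypothetical oracle-boosting surrogate that contrasts the oracle (lowest-WER) and 1-best (highest-scoring) hypotheses. The function names, the weighting by the oracle/1-best WER margin, and the softmax renormalization are assumptions made for this example, not the paper's exact formulation.

```python
import numpy as np

def word_error_rate(hyp: str, ref: str) -> float:
    # Word-level Levenshtein distance normalized by reference length.
    h, r = hyp.split(), ref.split()
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=np.int32)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1, j - 1] + (r[i - 1] != h[j - 1])
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, sub)
    return float(d[-1, -1]) / max(len(r), 1)

def embr_loss(log_probs: np.ndarray, wers: np.ndarray) -> float:
    # EMBR-style risk: expected WER under the renormalized N-best posterior.
    post = np.exp(log_probs - log_probs.max())
    post /= post.sum()
    return float((post * wers).sum())

def oracle_boost_loss(log_probs: np.ndarray, wers: np.ndarray) -> float:
    # Hypothetical O-1-like surrogate (an assumption, not the paper's formula):
    # push posterior mass toward the oracle hypothesis, weighted by how much
    # it improves over the current 1-best hypothesis.
    post = np.exp(log_probs - log_probs.max())
    post /= post.sum()
    oracle = int(np.argmin(wers))          # lowest-WER hypothesis in the N-best list
    one_best = int(np.argmax(log_probs))   # highest-scoring hypothesis
    margin = wers[one_best] - wers[oracle]
    return float(-margin * np.log(post[oracle] + 1e-9))

# Toy usage on a 3-hypothesis N-best list.
hyps = ["the cat sat", "a cat sat", "the cat sad"]
ref = "the cat sat"
log_probs = np.array([-1.2, -0.8, -1.5])   # model scores; 1-best is hyps[1]
wers = np.array([word_error_rate(h, ref) for h in hyps])
print(embr_loss(log_probs, wers), oracle_boost_loss(log_probs, wers))
```

The abstract also states that O-1 accommodates unsupervised data; how pseudo-references or confidence scores stand in for the WER labels in that setting is not detailed here.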

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2308.07486
Document Type:
Working Paper