
How does end-to-end speech recognition training impact speech enhancement artifacts?

Authors :
Iwamoto, Kazuma
Ochiai, Tsubasa
Delcroix, Marc
Ikeshita, Rintaro
Sato, Hiroshi
Araki, Shoko
Katagiri, Shigeru
Publication Year :
2023

Abstract

Jointly training a speech enhancement (SE) front-end and an automatic speech recognition (ASR) back-end has been investigated as a way to mitigate the influence of processing distortion generated by single-channel SE on ASR. In this paper, we investigate the effect of such joint training on the signal-level characteristics of the enhanced signals from the viewpoint of the decomposed noise and artifact errors. The experimental analyses provide two novel findings: 1) ASR-level training of the SE front-end reduces the artifact errors while increasing the noise errors, and 2) simply interpolating the enhanced and observed signals, which achieves a similar effect of reducing artifacts and increasing noise, improves ASR performance without jointly modifying the SE and ASR modules, even for a strong ASR back-end using a WavLM feature extractor. Our findings provide a better understanding of the effect of joint training and a novel insight for designing an ASR-agnostic SE front-end.

Comment: 5 pages, 1 figure, 1 table
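The interpolation the abstract refers to can be illustrated with a minimal sketch: a weighted sum of the enhanced and the observed (noisy) signals. The function name `interpolate_signals` and the weight value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def interpolate_signals(enhanced, observed, weight=0.8):
    """Linearly interpolate enhanced and observed waveforms.

    Mixing a fraction of the observed signal back into the enhanced one
    reduces processing artifacts at the cost of reintroducing some noise,
    mirroring the trade-off described in the abstract. The default
    `weight` is an assumed value for illustration only.
    """
    enhanced = np.asarray(enhanced, dtype=float)
    observed = np.asarray(observed, dtype=float)
    # Per-sample convex combination of the two signals.
    return weight * enhanced + (1.0 - weight) * observed
```

For example, `interpolate_signals([1.0, 0.0], [0.0, 1.0], weight=0.8)` yields `[0.8, 0.2]`.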

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2311.11599
Document Type :
Working Paper