
Text-conditioned Transformer for automatic pronunciation error detection.

Authors :
Zhang, Zhan
Wang, Yuehai
Yang, Jianyi
Source :
Speech Communication. Jun 2021, Vol. 130, p. 55-63. 9p.
Publication Year :
2021

Abstract

Automatic pronunciation error detection (APED) plays an important role in the domain of language learning. In previous ASR-based APED methods, the decoded results must be aligned with the target text so that errors can be identified. However, since the decoding process and the alignment process are independent, the prior knowledge about the target text is not fully utilized. In this paper, we propose to use the target text as an extra condition for the Transformer backbone to handle the APED task. The proposed method outputs the error states with consideration of the relationship between the input speech and the target text in a fully end-to-end fashion. Meanwhile, because the prior target text is used as the condition for the decoder input, the Transformer runs in a feed-forward manner instead of autoregressively at inference time, which significantly boosts the speed in actual deployment. We set the ASR-based Transformer as the baseline APED model and conduct several experiments on the L2-Arctic dataset. The results demonstrate that our approach obtains an 8.4% relative improvement on the F1 score metric.

• Incorporates the target text into the automatic pronunciation error detection task.

• A fully end-to-end, feed-forward Transformer.

• More reasonable false rejection and false acceptance rates.

• The degree of strictness can be easily adjusted. [ABSTRACT FROM AUTHOR]
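The core idea in the abstract — feeding the known target text in as the decoder condition so that every phoneme's error state comes out in a single forward pass, with no autoregressive decode-then-align loop — can be illustrated with a minimal cross-attention sketch. This is not the paper's architecture (it omits self-attention, multiple heads, layers, and training; all weights here are random and every name and dimension is a hypothetical stand-in), only a shape-level illustration of text-conditioned, feed-forward error detection:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def detect_errors(speech_feats, target_ids, params):
    """One feed-forward pass: each target phoneme (the condition) attends
    to the encoded speech frames and is classified correct (0) / error (1)."""
    E, Wq, Wk, Wv, Wo = params
    q = E[target_ids] @ Wq              # queries come from the target text
    k = speech_feats @ Wk               # keys/values come from the speech side
    v = speech_feats @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    ctx = attn @ v                      # speech context per target phoneme
    logits = ctx @ Wo                   # 2-way head: correct vs. mispronounced
    return logits.argmax(-1)            # all error states at once, no AR loop

d, n_phones = 16, 40                    # hypothetical model dim / phoneme inventory
params = (rng.normal(size=(n_phones, d)),      # phoneme embedding table E
          rng.normal(size=(d, d)), rng.normal(size=(d, d)),
          rng.normal(size=(d, d)), rng.normal(size=(d, 2)))

speech = rng.normal(size=(50, d))       # 50 encoded speech frames (stand-in)
target = np.array([3, 7, 7, 12])        # target phoneme sequence being read
states = detect_errors(speech, target, params)
print(states.shape)                     # one error state per target phoneme
```

Because the decoder input is the full target sequence rather than previously emitted tokens, inference is a single parallel pass — the source of the deployment speedup the abstract claims over autoregressive ASR-based baselines.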

Details

Language :
English
ISSN :
0167-6393
Volume :
130
Database :
Academic Search Index
Journal :
Speech Communication
Publication Type :
Academic Journal
Accession number :
150146734
Full Text :
https://doi.org/10.1016/j.specom.2021.04.004