1. Fusion of Embeddings Networks for Robust Combination of Text Dependent and Independent Speaker Recognition
- Authors
Zeya Chen, Andreas Stolcke, Ruirui Li, Oguz H. Elibol, Chelsea J.-T. Ju, and Hongda Mao
- Subjects
Computer Science - Machine Learning (cs.LG); Computer Science - Sound (cs.SD); Computer Science - Computation and Language (cs.CL); Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS)
- Abstract
By implicitly recognizing a user based on his/her speech input, speaker identification enables many downstream applications, such as personalized system behavior and expedited shopping checkouts. Depending on whether the speech content is constrained, either text-dependent (TD) or text-independent (TI) speaker recognition models may be used. We wish to combine the advantages of both types of models through an ensemble system to make more reliable predictions. However, any such combined approach has to be robust to incomplete inputs, i.e., when either the TD or the TI input is missing. As a solution we propose a fusion of embeddings network (FoEnet) architecture, combining joint learning with neural attention. We compare FoEnet with four competitive baseline methods on a dataset of voice assistant inputs, and show that it achieves higher accuracy than the baseline and score fusion methods, especially in the presence of incomplete inputs.
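The abstract's core idea, attention-weighted fusion of TD and TI embeddings that degrades gracefully when one input is missing, can be sketched as follows. This is a minimal illustration, not the paper's trained architecture: FoEnet learns its attention weights jointly with the recognition objective, whereas the toy scoring function here (embedding norm) is purely a placeholder assumption.

```python
import numpy as np

def fuse_embeddings(td_emb, ti_emb):
    """Attention-weighted fusion of TD and TI speaker embeddings.

    Either embedding may be None (incomplete input); fusion then
    falls back to the available one. Hypothetical sketch only.
    """
    embs = [e for e in (td_emb, ti_emb) if e is not None]
    if not embs:
        raise ValueError("at least one embedding is required")
    E = np.stack(embs)                       # (k, d), k in {1, 2}
    # Toy attention: score each embedding by its norm, then
    # softmax-normalize the scores into fusion weights.
    scores = np.linalg.norm(E, axis=1)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ E                       # (d,) fused embedding

# With only the TD embedding present, the fused output is just that embedding.
fused = fuse_embeddings(np.ones(4), None)
```

When both embeddings are present, the output is a convex combination of the two; when one is missing, the softmax over a single score yields weight 1.0, so no special-case code path is needed.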
- Published
- 2021