Fusing MFCC and LPC Features Using 1D Triplet CNN for Speaker Recognition in Severely Degraded Audio Signals.
- Source: IEEE Transactions on Information Forensics & Security; 2020, Vol. 15, p1616-1629, 14p
- Publication Year: 2020
Abstract
- Speaker recognition algorithms are negatively impacted by the quality of the input speech signal. In this work, we approach the problem of speaker recognition from severely degraded audio data by judiciously combining two commonly used features: Mel Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coding (LPC). Our hypothesis rests on the observation that MFCC and LPC capture two distinct aspects of speech, viz., speech perception and speech production. A carefully crafted 1D Triplet Convolutional Neural Network (1D-Triplet-CNN) is used to combine these two features in a novel manner, thereby enhancing the performance of speaker recognition in challenging scenarios. Extensive evaluation on multiple datasets, different types of audio degradations, multi-lingual speech, and varying lengths of audio samples conveys the efficacy of the proposed approach over existing speaker recognition methods, including those based on iVector and xVector.
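- The abstract describes feature-level fusion of MFCC and LPC followed by a 1D CNN trained with a triplet objective. The sketch below is a minimal illustration of that general pipeline, not the authors' architecture: it assumes librosa for feature extraction and PyTorch for the network, and all frame parameters, layer sizes, and the embedding dimension are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of MFCC+LPC fusion with a 1D CNN and triplet loss.
# NOT the paper's exact architecture; all sizes below are assumptions.
import numpy as np
import librosa
import torch
import torch.nn as nn

def mfcc_lpc_features(wav_path, sr=8000, n_mfcc=20, lpc_order=20,
                      frame_len=400, hop=160):
    """Return a (n_mfcc + lpc_order, n_frames) fused feature matrix."""
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=frame_len, hop_length=hop)
    # Frame the signal and compute LPC coefficients per frame
    # (drop the leading coefficient, which is always 1).
    frames = librosa.util.frame(y, frame_length=frame_len, hop_length=hop)
    lpc = np.stack(
        [librosa.lpc(np.ascontiguousarray(frames[:, i]), order=lpc_order)[1:]
         for i in range(frames.shape[1])], axis=1)
    n = min(mfcc.shape[1], lpc.shape[1])   # frame counts can differ slightly
    return np.vstack([mfcc[:, :n], lpc[:, :n]]).astype(np.float32)

class Embedder1D(nn.Module):
    """1D CNN that maps a fused feature matrix to a fixed-size embedding."""
    def __init__(self, in_ch=40, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),        # pool over time -> fixed length
        )
        self.fc = nn.Linear(128, emb_dim)

    def forward(self, x):                   # x: (batch, in_ch, n_frames)
        return self.fc(self.net(x).squeeze(-1))

# One triplet training step: anchor/positive share a speaker, negative differs.
# Random tensors stand in for batches of fused feature matrices.
model = Embedder1D()
criterion = nn.TripletMarginLoss(margin=1.0)
anchor, positive, negative = (torch.randn(8, 40, 200) for _ in range(3))
loss = criterion(model(anchor), model(positive), model(negative))
loss.backward()
```

- At test time, speakers would be compared by a distance (e.g., cosine or Euclidean) between embeddings; the paper's actual fusion strategy and network configuration should be taken from the full text linked below.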
Details
- Language: English
- ISSN: 1556-6013
- Volume: 15
- Database: Complementary Index
- Journal: IEEE Transactions on Information Forensics & Security
- Publication Type: Academic Journal
- Accession Number: 170411387
- Full Text: https://doi.org/10.1109/TIFS.2019.2941773