
Glottal features for classification of phonation type from speech and neck surface accelerometer signals

Authors :
Sudarsana Reddy Kadiri
Paavo Alku
Speech Communication Technology
Department of Signal Processing and Acoustics
Aalto University
Source :
Computer Speech & Language. 70:101232
Publication Year :
2021
Publisher :
Elsevier BV, 2021.

Abstract

Glottal source characteristics vary between phonation types due to the tension of the laryngeal muscles together with the respiratory effort. Previous studies on the classification of phonation type have mainly used speech signals recorded with a microphone. Recently, two studies were published on the classification of phonation type using neck surface accelerometer (NSA) signals. However, no previous study has compared the acoustic speech signal and the NSA signal as inputs for classifying phonation type. Therefore, the current study investigates simultaneously recorded speech and NSA signals in the classification of three phonation types (breathy, modal, pressed). The general goal is to understand which of the two signals (speech vs. NSA) is more effective in the classification task. We hypothesize that, when the same feature set is used for both signals, classification accuracy is higher for the NSA signal, which is more closely related to the physical vibration of the vocal folds and less affected by the vocal tract than the acoustic speech signal. Glottal source waveforms were computed using two signal processing methods, quasi-closed phase (QCP) glottal inverse filtering and zero frequency filtering (ZFF), and a group of time-domain and frequency-domain scalar features were computed from the obtained waveforms. In addition, the study investigated the use of mel-frequency cepstral coefficients (MFCCs) derived from the glottal source waveforms computed by QCP and ZFF. Classification experiments with support vector machine classifiers revealed that the NSA signal showed better discrimination of the phonation types than the speech signal when the same feature set was used. Furthermore, the glottal features provided information complementary to the conventional MFCC features, resulting in the best classification accuracy for both the NSA signal (86.9%) and the speech signal (80.6%).
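For illustration, below is a minimal sketch of a pipeline in the spirit of the abstract: an approximate glottal source waveform is obtained with zero frequency filtering (ZFF), a few scalar features are computed from it, and a support vector machine is trained. The filter settings, the toy scalar features (energy, zero-crossing rate, spectral tilt), and the synthetic pulse-train data are illustrative assumptions only; they do not reproduce the authors' QCP-based features, recordings, or reported accuracies.

import numpy as np
from scipy.signal import lfilter
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def zero_frequency_filter(signal, fs, avg_pitch_period_s=0.005):
    # Approximate glottal source via ZFF: difference the signal, pass it
    # through a cascade of two zero-frequency resonators (double pole at
    # z = 1), then remove the slowly varying trend with a moving average
    # whose length is close to the average pitch period (assumed here).
    x = np.diff(signal, prepend=signal[0])
    y = lfilter([1.0], [1.0, -2.0, 1.0], x)
    y = lfilter([1.0], [1.0, -2.0, 1.0], y)
    win = max(3, int(1.5 * avg_pitch_period_s * fs) | 1)  # odd window length
    trend = np.convolve(y, np.ones(win) / win, mode="same")
    return y - trend


def glottal_scalar_features(zff_signal):
    # Toy stand-ins for time-domain and frequency-domain scalar features:
    # energy, zero-crossing rate and spectral tilt of the ZFF output.
    energy = float(np.mean(zff_signal ** 2))
    signs = np.signbit(zff_signal).astype(int)
    zcr = float(np.mean(np.abs(np.diff(signs))))
    spectrum = np.abs(np.fft.rfft(zff_signal)) + 1e-12
    freqs = np.fft.rfftfreq(zff_signal.size)
    tilt = float(np.polyfit(freqs, 20.0 * np.log10(spectrum), 1)[0])
    return np.array([energy, zcr, tilt])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 8000
    X, y = [], []
    # Synthetic pulse trains with different noise levels stand in for
    # breathy, modal and pressed utterances (purely illustrative data).
    for i, label in enumerate(["breathy", "modal", "pressed"]):
        for _ in range(20):
            t = np.arange(int(0.5 * fs))
            pulses = (np.sin(2 * np.pi * 120.0 * t / fs) > 0.95).astype(float)
            sig = pulses + (0.5 - 0.2 * i) * rng.standard_normal(t.size)
            X.append(glottal_scalar_features(zero_frequency_filter(sig, fs)))
            y.append(label)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(np.array(X), y)
    print("Training accuracy on synthetic data:", clf.score(np.array(X), y))

The sketch only mirrors the overall structure (glottal source approximation, scalar features, SVM); in the paper itself, QCP glottal inverse filtering and MFCCs computed from the glottal waveforms provide the feature sets that yield the reported accuracies.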

Details

ISSN :
0885-2308
Volume :
70
Database :
OpenAIRE
Journal :
Computer Speech & Language
Accession number :
edsair.doi.dedup.....0c55cf219139c0f3559d74350b2a2efe