
Arabic audio clips: Identification and discrimination of authentic Cantillations from imitations.

Authors :
Lataifeh, Mohammed
Elnagar, Ashraf
Shahin, Ismail
Nassif, Ali Bou
Source :
Neurocomputing. Dec 2020, Vol. 418, p. 162-177. 16p.
Publication Year :
2020

Abstract

Highlights:
• A large, structured, and well-annotated corpus of Arabic audio clips collected from a Qur'an portal.
• Systematic empirical comparison of shallow and deep learning classifiers on Arabic audio clips.
• Ability to discriminate authentic audio clips from closely imitated ones.
• Performance evaluation of the well-trained classifiers against human experts.
• Superior performance in discriminating authentic Cantillations from imitations.

This paper presents a thorough study of classical and deep learning algorithms for multi-class speaker identification and verification of Qur'anic audio clips. Thirty different reciters and twelve imitators of top reciters were evaluated. Beyond identifying the reciter, the objectives are to evaluate different classifiers on the stated recognition task, to compare classical against deep-learning-based classifiers, and to benchmark automatic accuracy against that of human expert listeners in distinguishing authentic reciters from imitators. Through various multimedia outlets on the internet, several reciters have become more popular than others for their distinctive cantillation styles. Toward the development of a practical classification system, a sizeable dataset of 15,810 audio clips was constructed for the thirty reciters, in addition to 397 clips for top imitators. A combination of perceptual and acoustic features was extracted to achieve better classification. The classification system was implemented using the six top-achieving classical classifiers and two deep learning classifiers. Finally, a survey comparing human expert listeners against the fine-tuned classifiers in detecting imitators from authentic reciters was conducted to enable a cross-comparative discussion of the results.

The results demonstrate high accuracy for the selected classifiers: an average of 98.6% on the testing dataset, 93% on a separate testing dataset, and 97% on the problem of discriminating authentic Cantillations from imitations, along with outstanding performance on the survey material, reaching an accuracy of 98% compared to 61% for the human experts. [ABSTRACT FROM AUTHOR]
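The abstract does not specify which perceptual and acoustic features were extracted; mel-frequency cepstral coefficients (MFCCs) are a standard acoustic feature for this kind of speaker identification task. The following is a minimal, illustrative NumPy-only sketch of MFCC extraction (framing, windowing, power spectrum, mel filterbank, log, DCT-II), not the authors' actual pipeline; all parameter values shown are conventional defaults, assumed for illustration.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale over the FFT bins.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):            # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):           # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_ceps=13):
    # Slice the signal into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len]
                       for i in range(n_frames)]).astype(float)
    frames *= np.hamming(frame_len)
    # Power spectrum of each frame, then mel filterbank energies.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    log_energies = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # DCT-II decorrelates the log energies; keep the first n_ceps coefficients.
    n = n_filters
    dct = np.cos(np.pi / n * (np.arange(n) + 0.5)[None, :] * np.arange(n)[:, None])
    return log_energies @ dct.T[:, :n_ceps]
```

Feature matrices of this shape (frames x coefficients) are what would typically be fed, possibly after pooling or stacking, to the classical and deep classifiers the study compares.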

Details

Language :
English
ISSN :
0925-2312
Volume :
418
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
146873110
Full Text :
https://doi.org/10.1016/j.neucom.2020.07.099