Pay attention to the speech: COVID-19 diagnosis using machine learning and crowdsourced respiratory and speech recordings
- Authors
Mahmoud Aly, Safwat M. Ramzy, and Kamel H. Rahouma
- Subjects
Coronavirus disease 2019 (COVID-19), Machine learning, Cough sounds, Respiratory sounds, Breathing, Speech, Human voice, Internet of Things, Computer science, General Engineering
- Abstract
Since the outbreak of COVID-19, many efforts have been made to utilize the respiratory sounds and coughs collected by smartphones to train machine learning models that classify and distinguish COVID-19 sounds from healthy ones. Embedding those models into mobile applications or Internet of Things devices can make effective COVID-19 pre-screening tools affordable to anyone, anywhere. Most previous researchers trained their classifiers on respiratory sounds such as breathing or coughs, and they achieved promising results. We claim that using special voice patterns alongside other respiratory sounds can achieve better performance. In this study, we used the Coswara dataset, in which each user recorded nine different types of sounds, such as cough, breathing, and speech, labeled with COVID-19 status. A combination of models trained on different sounds can diagnose COVID-19 more accurately than a single model trained on cough or breathing only. Our results show that simple binary classifiers can achieve an AUC of 96.4% and an accuracy of 96% by averaging the predictions of multiple models trained and evaluated separately on different sound types. Finally, this study aims to draw attention to the importance of the human voice alongside other respiratory sounds for sound-based COVID-19 diagnosis.
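The fusion scheme the abstract describes, averaging the predicted probabilities of separate models trained on each sound type, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the probability values are hypothetical.

```python
import numpy as np

def average_ensemble(prob_lists):
    """Average per-sample COVID-19 probabilities across several models.

    prob_lists: list of 1-D arrays, one array of predicted probabilities
    per model (e.g. a cough model, a breathing model, a speech model).
    Returns the element-wise mean, i.e. the fused prediction per sample.
    """
    return np.mean(np.vstack(prob_lists), axis=0)

# Hypothetical probabilities from three per-sound-type models
# for the same three test subjects.
p_cough  = np.array([0.9, 0.2, 0.7])
p_breath = np.array([0.8, 0.3, 0.6])
p_speech = np.array([0.7, 0.1, 0.8])

fused = average_ensemble([p_cough, p_breath, p_speech])
labels = (fused >= 0.5).astype(int)  # threshold the averaged score
# fused  -> [0.8, 0.2, 0.7]
# labels -> [1, 0, 1]
```

Averaging at the probability level (late fusion) lets each model be trained and evaluated independently on its own sound type, which matches the paper's report of per-sound-type training with fused predictions.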
- Published
- 2022