Comparison of remote experiments using crowdsourcing and laboratory experiments on speech intelligibility

Authors:
Yamamoto, Ayako
Irino, Toshio
Arai, Kenichi
Araki, Shoko
Ogawa, Atsunori
Kinoshita, Keisuke
Nakatani, Tomohiro
Source:
Proc. Interspeech 2021
Publication Year:
2021

Abstract

Many subjective experiments have been performed to develop objective speech intelligibility measures, but the novel coronavirus outbreak has made it very difficult to conduct experiments in a laboratory. One solution is to perform remote testing using crowdsourcing; however, because the listening conditions cannot be controlled, it is unclear whether the results are entirely reliable. In this study, we compared speech intelligibility scores obtained in remote and laboratory experiments. The results showed that the mean and standard deviation (SD) of the speech reception threshold (SRT) in the remote experiments were higher than those in the laboratory experiments. However, the variation of the SRTs across the speech-enhancement conditions was similar, implying that remote testing results may be as useful as laboratory experiments for developing an objective measure. We also show that the practice-session scores correlate with the SRT values. Because this information is available a priori, before the main tests are performed, it would be useful for data screening to reduce the variability of the SRT distribution.

Comment: This paper was submitted to Interspeech 2021
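The screening idea in the abstract can be made concrete. Below is a minimal sketch in Python (the record itself contains no code) of using practice-session scores, which are available before the main tests, to screen participants and reduce the spread of the SRT distribution. All data values, variable names, and the screening threshold are hypothetical illustrations, not figures from the paper.

    # Minimal sketch of the screening idea described in the abstract:
    # use practice-session scores (available before the main test) to
    # filter participants and reduce the spread of the SRT distribution.
    # All data and the threshold below are hypothetical, not values
    # taken from the paper.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-participant data.
    practice_scores = np.array([92, 85, 60, 78, 95, 55, 88, 70])   # % correct in practice
    srt_values = np.array([-6.0, -4.5, 1.0, -3.0, -6.5, 2.5, -5.0, -1.5])  # SRT in dB SNR

    # The abstract reports that practice scores correlate with SRT values.
    r, p = pearsonr(practice_scores, srt_values)
    print(f"correlation between practice score and SRT: r = {r:.2f} (p = {p:.3f})")

    # Screen out participants whose practice score falls below a chosen
    # (hypothetical) threshold, then compare the SRT spread before and after.
    threshold = 75.0
    kept = practice_scores >= threshold
    print(f"SD of SRT before screening: {srt_values.std(ddof=1):.2f} dB")
    print(f"SD of SRT after screening:  {srt_values[kept].std(ddof=1):.2f} dB")

In practice, the screening threshold would presumably be chosen from the observed relationship between practice scores and SRTs rather than fixed in advance.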

Details

Database:
arXiv
Journal:
Proc. Interspeech 2021
Publication Type:
Report
Accession number:
edsarx.2104.10001
Document Type:
Working Paper
Full Text:
https://doi.org/10.21437/Interspeech.2021-174