
Predicting pairwise preferences between TTS audio stimuli using parallel ratings data and anti-symmetric twin neural networks

Authors:
Cassia Valentini-Botinhao
Manuel Sam Ribeiro
Oliver Watts
Korin Richmond
Gustav Eje Henter
Editors:
Hanseok Ko
John H. L. Hansen
Source:
Valentini-Botinhao, C., Ribeiro, M. S., Watts, O., Richmond, K. & Eje Henter, G. 2022, 'Predicting pairwise preferences between TTS audio stimuli using parallel ratings data and anti-symmetric twin neural networks', in H. Ko & J. H. L. Hansen (eds), Proceedings of Interspeech 2022, pp. 471-475, Interspeech 2022, Incheon, Korea, Republic of, 18/09/22. https://doi.org/10.21437/Interspeech.2022-10132
Publication Year:
2022
Publisher:
ISCA, 2022.

Abstract

Automatically predicting the outcome of subjective listening tests is a challenging task. Ratings may vary from person to person even when preferences are consistent across listeners. While previous work has focused on predicting listeners' ratings (mean opinion scores) of individual stimuli, we focus on the simpler task of predicting subjective preference given two speech stimuli for the same text. We propose a model based on anti-symmetric twin neural networks, trained on pairs of waveforms and their corresponding preference scores. We explore both attention and recurrent neural networks to account for the fact that the stimuli in a pair are not time aligned. To obtain a large training set, we convert listeners' ratings from MUSHRA tests into values that reflect how often one stimulus in a pair was rated higher than the other. Specifically, we evaluate performance on data obtained from twelve MUSHRA evaluations conducted over five years, covering different TTS systems built from the data of different speakers. Our results compare favourably to those of a state-of-the-art model trained to predict MOS scores.
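The abstract describes two concrete components: converting parallel MUSHRA ratings into pairwise preference targets, and an anti-symmetric twin network that scores a pair of stimuli. The Python sketches below illustrate both under stated assumptions; they are not the authors' implementation, and every name, shape, and convention in them (log-mel inputs, a GRU encoder with mean pooling, ties counted as half a win) is an assumption made for illustration.

```python
import numpy as np

def mushra_to_preference(ratings_a, ratings_b):
    """Turn parallel MUSHRA ratings of two stimuli (same text, same
    listeners) into a preference value in [0, 1]: the fraction of
    listeners who rated A above B. Counting ties as half a win is one
    plausible convention, not necessarily the paper's."""
    a = np.asarray(ratings_a, dtype=float)
    b = np.asarray(ratings_b, dtype=float)
    return (a > b).mean() + 0.5 * (a == b).mean()
```

An anti-symmetric twin network can be built by sharing one encoder across both stimuli and scoring the pair as the difference of the two branch scores, so that f(a, b) = -f(b, a) holds by construction. Pooling each stimulus over its own frames sidesteps the fact that the two waveforms are not time aligned; a recurrent encoder is used in this sketch, while the paper also explores attention.

```python
import torch
import torch.nn as nn

class TwinPreferenceModel(nn.Module):
    """Minimal anti-symmetric twin network (a sketch, not the paper's model).
    Inputs are assumed to be feature sequences of shape (batch, frames,
    n_feats), e.g. log-mel spectrograms; the two stimuli in a pair may
    contain different numbers of frames."""

    def __init__(self, n_feats=80, hidden=128):
        super().__init__()
        # Both branches share these weights, so the twin scores are comparable.
        self.encoder = nn.GRU(n_feats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def score_one(self, x):
        h, _ = self.encoder(x)            # (batch, frames, hidden)
        return self.head(h.mean(dim=1))   # pool over time -> (batch, 1)

    def forward(self, a, b):
        # Difference of twin scores: swapping a and b flips the sign,
        # which is exactly the anti-symmetry property.
        return self.score_one(a) - self.score_one(b)
```

A preference target p in [0, 1] from mushra_to_preference could then be regressed against, for example, torch.sigmoid of the pair score, though the paper's exact loss is not specified in this record.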

Details

Database:
OpenAIRE
Journal:
Interspeech 2022
Accession number:
edsair.doi.dedup.....f87893a6a35973d540b30a8388ba01a0
Full Text:
https://doi.org/10.21437/interspeech.2022-10132