
MLAAD: The Multi-Language Audio Anti-Spoofing Dataset

Authors:
Müller, Nicolas M.
Kawa, Piotr
Choong, Wei Herng
Casanova, Edresson
Gölge, Eren
Müller, Thorsten
Syga, Piotr
Sperl, Philip
Böttinger, Konstantin
Publication Year:
2024

Abstract

Text-to-Speech (TTS) technology offers notable benefits, such as providing a voice for individuals with speech impairments, but it also facilitates the creation of audio deepfakes and spoofing attacks. AI-based detection methods can help mitigate these risks; however, the performance of such models is inherently dependent on the quality and diversity of their training data. Presently, the available datasets are heavily skewed towards English and Chinese audio, which limits the global applicability of these anti-spoofing systems. To address this limitation, this paper presents the Multi-Language Audio Anti-Spoofing Dataset (MLAAD), created using 59 TTS models, comprising 26 different architectures, to generate 175.0 hours of synthetic voice in 23 different languages. We train and evaluate three state-of-the-art deepfake detection models with MLAAD and observe that it demonstrates superior performance over comparable datasets like InTheWild and FakeOrReal when used as a training resource. Moreover, compared to the renowned ASVspoof 2019 dataset, MLAAD proves to be a complementary resource. In tests across eight datasets, MLAAD and ASVspoof 2019 alternately outperformed each other, each excelling on four datasets. By publishing MLAAD and making a trained model accessible via an interactive webserver, we aim to democratize anti-spoofing technology, making it accessible beyond the realm of specialists, and contributing to global efforts against audio spoofing and deepfakes.

Comment: IJCNN 2024

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2401.09512
Document Type:
Working Paper