Study of Pre-Processing Defenses Against Adversarial Attacks on State-of-the-Art Speaker Recognition Systems.

Authors :
Joshi, Sonal
Villalba, Jesus
Zelasko, Piotr
Moro-Velazquez, Laureano
Dehak, Najim
Source :
IEEE Transactions on Information Forensics & Security; 2021, Vol. 16, p4811-4826, 16p
Publication Year :
2021

Abstract

Adversarial examples are designed to fool a speaker recognition (SR) system by adding carefully crafted, human-imperceptible noise to the speech signal. Because they pose a severe security threat to state-of-the-art SR systems, it is vital to study these vulnerabilities in depth. More importantly, countermeasures are needed that can protect the systems against such attacks. Addressing these concerns, we first investigated how state-of-the-art x-vector based SR systems are affected by white-box adversarial attacks, i.e., attacks in which the adversary has full knowledge of the system. The x-vector based SR systems are evaluated against white-box attacks common in the literature, such as the fast gradient sign method (FGSM), the basic iterative method (BIM, a.k.a. iterative FGSM), projected gradient descent (PGD), and the Carlini-Wagner (CW) attack. To mitigate these attacks, we investigated four pre-processing defenses that do not require adversarial examples during training. The four pre-processing defenses (randomized smoothing, DefenseGAN, variational autoencoder (VAE), and Parallel WaveGAN vocoder (PWG)) are compared against the baseline defense of adversarial training. Performing a powerful adaptive white-box adversarial attack (i.e., one in which the adversary has full knowledge of the system, including the defense), we conclude that SR systems were extremely vulnerable under BIM, PGD, and CW attacks. Among the proposed pre-processing defenses, PWG combined with randomized smoothing offers the most protection against the attacks, with accuracy averaging 93% compared to 52% in the undefended system, and an absolute improvement >90% for BIM attacks with $L_\infty >0.001$ and the CW attack. [ABSTRACT FROM AUTHOR]
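For readers unfamiliar with the attacks named above, the simplest is FGSM: perturb the input one step of size $\epsilon$ along the sign of the loss gradient, which keeps the perturbation inside an $L_\infty$ ball of radius $\epsilon$ (BIM/PGD iterate this step with clipping). The following is a minimal sketch, not the paper's implementation: it uses a toy logistic "verifier" with a hand-derived gradient rather than a real x-vector network, and all names (`fgsm_perturb`, `w`, `x`) are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """One FGSM step against a toy logistic verifier score sigmoid(w @ x).

    For binary cross-entropy loss, the gradient of the loss w.r.t. the
    input x is (sigmoid(w @ x) - y) * w; FGSM moves every input
    dimension eps along the sign of that gradient, maximizing the loss
    while keeping ||x_adv - x||_inf <= eps.
    """
    grad = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad)

# Toy example: a 4-dim "utterance embedding" accepted as the target
# speaker (label y = 1); the attack lowers the acceptance score.
w = np.array([1.0, -2.0, 0.5, 3.0])   # hypothetical model weights
x = np.array([0.5, -0.5, 1.0, 0.8])   # hypothetical clean input
x_adv = fgsm_perturb(x, 1.0, w, eps=0.1)
```

With iteration and per-step clipping back into the $\epsilon$-ball, the same routine becomes BIM; starting from a random point in the ball gives PGD.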

Details

Language :
English
ISSN :
15566013
Volume :
16
Database :
Complementary Index
Journal :
IEEE Transactions on Information Forensics & Security
Publication Type :
Academic Journal
Accession number :
170411881
Full Text :
https://doi.org/10.1109/TIFS.2021.3116438