Detecting Audio Attacks on ASR Systems with Dropout Uncertainty
- Publication Year :
- 2020
- Publisher :
- arXiv, 2020.
Abstract
- Various adversarial audio attacks have recently been developed to fool automatic speech recognition (ASR) systems. We here propose a defense against such attacks based on the uncertainty introduced by dropout in neural networks. We show that our defense is able to detect attacks created through optimized perturbations and frequency masking on a state-of-the-art end-to-end ASR system. Furthermore, the defense can be made robust against attacks that are immune to noise reduction. We test our defense on Mozilla's CommonVoice dataset, the UrbanSound dataset, and an excerpt of the LibriSpeech dataset, showing that it achieves high detection accuracy in a wide range of scenarios.
- Comment: Accepted for publication at Interspeech 2020
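The dropout-uncertainty idea summarized in the abstract can be sketched with Monte Carlo dropout: keep dropout active at inference, run several stochastic forward passes, and flag inputs whose predictions disagree strongly. The toy two-layer network, weights, and threshold below are illustrative assumptions, not the paper's actual ASR-based implementation.

```python
import numpy as np

def mc_dropout_uncertainty(x, W1, W2, p=0.5, T=30, rng=None):
    """Run T stochastic forward passes with dropout kept ON at
    test time and return the entropy of the mean prediction.
    Higher scores indicate more model uncertainty, which can
    signal an adversarial input."""
    rng = rng if rng is not None else np.random.default_rng(0)
    probs = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)        # hidden layer (ReLU)
        mask = rng.random(h.shape) > p     # fresh dropout mask each pass
        h = h * mask / (1.0 - p)           # inverted-dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())  # stable softmax
        probs.append(e / e.sum())
    mean_p = np.stack(probs).mean(axis=0)
    return float(-np.sum(mean_p * np.log(mean_p + 1e-12)))

def is_attack(score, threshold):
    """Flag the input as adversarial if uncertainty exceeds a
    threshold calibrated on benign data (hypothetical here)."""
    return score > threshold
```

A real detector would wrap the ASR model itself and calibrate the threshold on clean audio; this sketch only shows the uncertainty-scoring mechanism.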
- Subjects :
- Machine Learning (cs.LG)
  Sound (cs.SD)
  Cryptography and Security (cs.CR)
  Machine Learning (stat.ML)
  Audio and Speech Processing (eess.AS)
  FOS: Computer and information sciences
  FOS: Electrical engineering, electronic engineering, information engineering
Details
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....a1c043fc7f2ecfe65ca6e7c7935ec12d
- Full Text :
- https://doi.org/10.48550/arxiv.2006.01906