An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers
- Publication Year :
- 2019
Abstract
- We present a simple hypothesis about a compression property of artificial intelligence (AI) classifiers and give theoretical arguments showing that this hypothesis accounts for the observed fragility of AI classifiers to small adversarial perturbations. We also propose a new method for detecting when small input perturbations cause classifier errors, and show theoretical guarantees for its performance. We present experimental results with a voice recognition system to demonstrate this method. The ideas in this paper are motivated by a simple analogy between AI classifiers and the standard Shannon model of a communication system.
- Comment : 5 pages
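- To make the abstract's idea of detecting perturbation-induced errors concrete, here is a minimal, hedged sketch of one generic approach: checking whether a classifier's label stays stable under small random perturbations of the input. This is not the detection method proposed in the paper (which the abstract does not specify); the `classify` function, noise level, and threshold below are hypothetical placeholders used only for illustration.

```python
# Illustrative sketch only: a generic label-stability check for flagging
# possibly adversarial inputs. NOT the paper's proposed method; the classifier
# and all parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in classifier: a fixed random linear map followed by argmax.
W = rng.normal(size=(10, 64))  # 10 classes, 64-dimensional inputs

def classify(x):
    """Return the predicted class index for input vector x."""
    return int(np.argmax(W @ x))

def flag_possible_adversarial(x, noise_std=0.05, num_trials=20, agreement_threshold=0.9):
    """Flag x if its predicted label is unstable under small random perturbations.

    Intuition, in the spirit of the paper's communication-system analogy: a
    natural input should "decode" to the same class after small added noise,
    whereas an adversarially perturbed input often sits near a decision
    boundary and flips label easily.
    """
    base_label = classify(x)
    agreements = sum(
        classify(x + noise_std * rng.normal(size=x.shape)) == base_label
        for _ in range(num_trials)
    )
    return (agreements / num_trials) < agreement_threshold

# Example usage on a random "clean" input.
x_clean = rng.normal(size=64)
print(flag_possible_adversarial(x_clean))
```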
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.1901.09413
- Document Type :
- Working Paper