MUTEN: Boosting Gradient-Based Adversarial Attacks via Mutant-Based Ensembles
- Publication Year: 2021
Abstract
- Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which pose serious threats to security-critical applications. This has motivated much research on mechanisms that make models more robust against adversarial attacks. Unfortunately, most of these defenses, such as gradient masking, are easily overcome by different attack techniques. In this paper, we propose MUTEN, a low-cost method to improve the success rate of well-known attacks against gradient-masking models. Our idea is to apply the attacks to an ensemble model built by mutating elements of the original model after training. Since we found that mutant diversity is a key factor in improving the success rate, we design a greedy algorithm that generates diverse mutants efficiently. Experimental results on MNIST, SVHN, and CIFAR10 show that MUTEN can increase the success rate of four attacks by up to 0.45.
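
The abstract describes the core mechanism at a high level: mutate a trained model to obtain an ensemble, keep only sufficiently diverse mutants via a greedy procedure, and run the gradient-based attack against the ensemble. Below is a minimal, hypothetical PyTorch sketch of that idea; the mutation operator (Gaussian weight noise), the diversity proxy (prediction disagreement on a reference batch), and the attack (FGSM on averaged logits) are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a mutant-based ensemble attack, assuming a trained
# PyTorch classifier, inputs in [0, 1], and FGSM as the attack.
import copy
import torch
import torch.nn.functional as F

def mutate(model, noise_scale=0.05):
    """Return a copy of `model` with Gaussian noise added to its weights."""
    mutant = copy.deepcopy(model)
    with torch.no_grad():
        for p in mutant.parameters():
            p.add_(noise_scale * p.abs().mean() * torch.randn_like(p))
    return mutant

def greedy_diverse_mutants(model, x_ref, n_mutants=10,
                           min_disagreement=0.05, max_tries=100):
    """Greedily keep mutants whose predictions on `x_ref` disagree enough
    with every mutant kept so far (a simple diversity proxy)."""
    kept, kept_preds = [], []
    for _ in range(max_tries):
        if len(kept) >= n_mutants:
            break
        m = mutate(model)
        with torch.no_grad():
            preds = m(x_ref).argmax(dim=1)
        if all((preds != q).float().mean() >= min_disagreement
               for q in kept_preds):
            kept.append(m)
            kept_preds.append(preds)
    return kept

def ensemble_logits(models, x):
    """Average the logits of all ensemble members."""
    return torch.stack([m(x) for m in models]).mean(dim=0)

def fgsm_on_ensemble(models, x, y, eps=0.03):
    """Craft FGSM adversarial examples against the averaged ensemble."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(ensemble_logits(models, x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```

In the same spirit, the single-step FGSM call could be replaced by any gradient-based attack computed on the ensemble's averaged logits; the sketch only illustrates the mutate-then-attack pipeline.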
Details
- Database: arXiv
- Publication Type: Report
- Accession number: edsarx.2109.12838
- Document Type: Working Paper