Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks
- Source: Algorithms, Vol 17, Iss 4, p 162 (2024)
- Publication Year: 2024
- Publisher: MDPI AG
Abstract
- Deep neural networks (DNNs) have gained prominence in various applications, but remain vulnerable to adversarial attacks that manipulate data to mislead a DNN. This paper challenges the efficacy and transferability of two contemporary defense mechanisms against adversarial attacks: (a) robust training and (b) adversarial training. The former suggests that training a DNN on a data set consisting solely of robust features should produce a model resistant to adversarial attacks. The latter creates an adversarially trained model that learns to minimise an expected training loss over a distribution of bounded adversarial perturbations. We reveal a significant lack of transferability in these defense mechanisms and provide insight into the potential dangers posed by L∞-norm attacks, which have previously been underestimated by the research community. These conclusions are based on extensive experiments involving (1) different model architectures, (2) the use of canonical correlation analysis, (3) visual and quantitative analysis of the neural network’s latent representations, (4) an analysis of networks’ decision boundaries and (5) the use of the equivalence between L2 and L∞ perturbation norms.
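- For context only (not reproduced from the paper), the adversarial-training objective the abstract describes is conventionally written as a min-max problem over bounded perturbations, and the L2/L∞ relationship it invokes follows from the standard norm inequality in d dimensions; the notation below (f_θ, δ, ε, d) is our own shorthand:

```latex
% Standard min-max adversarial training objective, assuming
% L-infinity-bounded perturbations of radius \varepsilon as the abstract describes:
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \left[ \max_{\lVert \delta \rVert_\infty \le \varepsilon}
         \mathcal{L}\bigl(f_\theta(x+\delta),\, y\bigr) \right]

% Norm equivalence in d dimensions, relating L2 and L-infinity perturbation budgets:
\lVert \delta \rVert_\infty \;\le\; \lVert \delta \rVert_2 \;\le\; \sqrt{d}\,\lVert \delta \rVert_\infty
```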
Details
- Language: English
- ISSN: 1999-4893
- Volume: 17
- Issue: 4
- Database: Directory of Open Access Journals
- Journal: Algorithms
- Publication Type: Academic Journal
- Accession number: edsdoj.88a4b88a23274898a3899e51a54967fe
- Document Type: Article
- Full Text: https://doi.org/10.3390/a17040162