1. Rethinking data augmentation for adversarial robustness.
- Authors
- Eghbal-zadeh, Hamid; Zellinger, Werner; Pintor, Maura; Grosse, Kathrin; Koutini, Khaled; Moser, Bernhard A.; Biggio, Battista; and Widmer, Gerhard
- Subjects
- Data augmentation; Artificial neural networks; Classical test theory; Mental rotation
- Abstract
Recent work has proposed novel data augmentation methods to improve the adversarial robustness of deep neural networks. In this paper, we re-evaluate such methods through the lens of different metrics that characterize the augmented manifold, finding contradictory evidence. Our extensive empirical analysis, involving 5 data augmentation methods, all tested with an increasing probability of augmentation, shows that: (i) novel data augmentation methods proposed to improve adversarial robustness only improve it when combined with classical augmentations (like image flipping and rotation), and even worsen adversarial robustness if used in isolation; and (ii) adversarial robustness is significantly affected by the augmentation probability, contrary to what is claimed in recent work. We conclude by discussing how to rethink the development and evaluation of novel data augmentation methods for adversarial robustness. Our open-source code is available at https://github.com/eghbalz/rethink_da_for_ar. • Augmentation methods for adversarial robustness are often not tested in isolation. • They are often tested on one single value of augmentation probability. • They improve robustness only when combined with classical augmentations. • The augmentation probability significantly affects adversarial robustness. [ABSTRACT FROM AUTHOR]
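The abstract's second finding hinges on the augmentation probability, i.e. the chance that a given training sample is transformed at all. A minimal sketch of this mechanism, with hypothetical names (`augment_with_probability`, `flip` are illustrations, not functions from the paper's repository):

```python
import random

def augment_with_probability(batch, augment_fn, p=0.5):
    """Apply augment_fn to each sample independently with probability p."""
    return [augment_fn(x) if random.random() < p else x for x in batch]

# Toy "augmentation": horizontal flip of a 1-D list standing in for an image row.
def flip(row):
    return row[::-1]

batch = [[1, 2, 3], [4, 5, 6]]
always = augment_with_probability(batch, flip, p=1.0)   # every sample flipped
never = augment_with_probability(batch, flip, p=0.0)    # batch unchanged
```

Sweeping `p` from 0 to 1, as the paper's evaluation does across its 5 augmentation methods, would expose how sensitive robustness is to this single hyperparameter.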
- Published
- 2024