A strategy creating high-resolution adversarial images against convolutional neural networks and a feasibility study on 10 CNNs
- Authors
Franck Leprévost, Ali Osman Topal, Elmir Avdusinovic, and Raluca Chitic
- Subjects
Black-box attack, convolutional neural network, evolutionary algorithm, high-resolution adversarial image
- Abstract
To perform image recognition, Convolutional Neural Networks (CNNs) assess any image by first resizing it to their input size. In particular, high-resolution images are scaled down, say to $224 \times 224$ for CNNs trained on ImageNet. So far, existing attacks, which aim at creating an adversarial image that a CNN misclassifies while a human notices no difference between the modified and unmodified images, proceed by creating adversarial noise in the $224 \times 224$ resized domain rather than in the high-resolution domain. The complexity of directly attacking high-resolution images leads to challenges in terms of speed, adversarial strength, and visual quality, making such attacks infeasible in practice. We design an indirect attack strategy that lifts to the high-resolution domain any existing attack that works efficiently in the CNN's input-size domain. Adversarial noise created via this method is of the same size as the original image. We apply this approach to 10 state-of-the-art CNNs trained on ImageNet, with an evolutionary algorithm-based attack. Our method succeeded in 900 out of 1000 trials at creating adversarial images that the CNNs classify in the adversarial category with probability above the prescribed threshold. Our indirect attack is the first effective method for creating adversarial images in the high-resolution domain.
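The lifting strategy outlined in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the attack interface `low_res_attack`, the use of bilinear interpolation for both the downscaling and the upscaling, and the [0, 255] pixel range are all assumptions made for illustration.

```python
# Sketch of an indirect high-resolution attack: run any existing attack in
# the CNN's input-size domain, then lift its noise back to full resolution.
# Assumptions (not from the paper): `low_res_attack` maps a 224x224x3 array
# to an adversarial 224x224x3 array; resizing is bilinear in both directions.
import numpy as np
from PIL import Image

INPUT_SIZE = (224, 224)  # typical input size of ImageNet-trained CNNs

def lift_attack(hi_res: np.ndarray, low_res_attack) -> np.ndarray:
    """Return a high-resolution adversarial version of `hi_res`.

    hi_res         -- original image, shape (H, W, 3), values in [0, 255]
    low_res_attack -- hypothetical callable implementing an existing attack
                      in the CNN input domain (224x224x3 in, 224x224x3 out)
    """
    h, w = hi_res.shape[:2]

    # 1. Downscale to the CNN input size, as the CNN itself would do.
    small = np.asarray(
        Image.fromarray(hi_res.astype(np.uint8)).resize(INPUT_SIZE, Image.BILINEAR),
        dtype=np.float32,
    )

    # 2. Run the existing attack in the 224x224 domain; isolate its noise.
    adv_small = low_res_attack(small)
    noise_small = adv_small - small

    # 3. Lift the noise to the original resolution, channel by channel, so
    #    the perturbation has the same size as the original image.
    noise_big = np.stack(
        [
            np.asarray(
                Image.fromarray(noise_small[..., c]).resize((w, h), Image.BILINEAR)
            )
            for c in range(3)
        ],
        axis=-1,
    )

    # 4. Add the lifted noise and clip back to a valid pixel range.
    return np.clip(hi_res.astype(np.float32) + noise_big, 0, 255).astype(np.uint8)
```

Under these assumptions, the lifted image keeps the original resolution while its 224x224 downscaling inherits the adversarial noise, which is what the CNN actually sees at classification time.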
- Published
2023