
A strategy creating high-resolution adversarial images against convolutional neural networks and a feasibility study on 10 CNNs

Authors: Franck Leprévost, Ali Osman Topal, Elmir Avdusinovic, Raluca Chitic
Source: Journal of Information and Telecommunication, Vol. 7, Iss. 1, pp. 89-119 (2023)
Publication Year: 2023
Publisher: Taylor & Francis Group, 2023

Abstract

To perform image recognition, Convolutional Neural Networks (CNNs) assess any image by first resizing it to their input size. In particular, high-resolution images are scaled down, say to 224 × 224 for CNNs trained on ImageNet. So far, existing attacks, which aim to create an adversarial image that a CNN misclassifies while a human notices no difference between the modified and unmodified images, proceed by creating adversarial noise in the 224 × 224 resized domain rather than in the high-resolution domain. The complexity of directly attacking high-resolution images leads to challenges in terms of speed, adversity and visual quality, making such attacks infeasible in practice. We design an indirect attack strategy that lifts to the high-resolution domain any existing attack that works efficiently in the CNN's input-size domain. Adversarial noise created via this method has the same size as the original image. We apply this approach to 10 state-of-the-art CNNs trained on ImageNet, with an evolutionary algorithm-based attack. Our method succeeded in 900 out of 1000 trials in creating adversarial images that the CNNs classify in the adversarial category with probability [Formula: see text]. Our indirect attack is the first effective method for creating adversarial images in the high-resolution domain.
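The lifting idea described in the abstract can be sketched informally: run an existing attack in the CNN's 224 × 224 input domain, then rescale the resulting noise back to the original resolution and add it to the unmodified high-resolution image, so the stored adversarial image keeps the original size. The snippet below is a minimal illustration of that flow in PyTorch, under stated assumptions: the callable low_res_attack, the bilinear rescaling, and the ResNet-50 example are placeholders for illustration only, not the authors' actual evolutionary-algorithm attack or rescaling choices.

import torch.nn.functional as F
from torchvision import transforms
from PIL import Image

def lift_attack_to_high_resolution(hr_image_path, low_res_attack, cnn, input_size=224):
    """Lift a low-resolution attack to the high-resolution domain (illustrative sketch)."""
    # Load the unmodified high-resolution image as a 1 x 3 x H x W tensor in [0, 1].
    hr_image = Image.open(hr_image_path).convert("RGB")
    hr_tensor = transforms.ToTensor()(hr_image).unsqueeze(0)
    _, _, height, width = hr_tensor.shape

    # Step 1: downscale to the CNN input domain (e.g. 224 x 224 for ImageNet models),
    # exactly as the CNN itself would before classifying the image.
    lr_tensor = F.interpolate(hr_tensor, size=(input_size, input_size),
                              mode="bilinear", align_corners=False)

    # Step 2: run any existing attack that works in the input-size domain;
    # `low_res_attack` is a hypothetical callable returning input-size noise.
    lr_noise = low_res_attack(cnn, lr_tensor)

    # Step 3: lift the noise back to the original resolution, so the adversarial
    # image has the same size as the original high-resolution image.
    hr_noise = F.interpolate(lr_noise, size=(height, width),
                             mode="bilinear", align_corners=False)
    return (hr_tensor + hr_noise).clamp(0.0, 1.0)

# Illustrative usage (hypothetical attack; ResNet-50 stands in for any of the 10 CNNs):
#   from torchvision import models
#   cnn = models.resnet50(weights="IMAGENET1K_V1").eval()
#   adversarial = lift_attack_to_high_resolution("photo.jpg", my_attack, cnn)

Whether the lifted image remains adversarial is then judged the same way a CNN would see it, i.e. by downscaling the high-resolution result to the input size and classifying it.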

Details

Language: English
ISSN: 2475-1839 and 2475-1847
Volume: 7
Issue: 1
Database: Directory of Open Access Journals
Journal: Journal of Information and Telecommunication
Publication Type: Academic Journal
Accession number: edsdoj.2866a340f2ac4ecc8739898e1fa9e094
Document Type: Article
Full Text: https://doi.org/10.1080/24751839.2022.2132586