Toward Visual Distortion in Black-Box Attacks.

Authors :
Li, Nannan
Chen, Zhenzhong
Source :
IEEE Transactions on Image Processing. 2021, Vol. 30, p6156-6167. 12p.
Publication Year :
2021

Abstract

Constructing adversarial examples in a black-box threat model degrades the original images by introducing visual distortion. In this paper, we propose a novel black-box attack that directly minimizes the induced distortion by learning the noise distribution of the adversarial example, assuming only loss-oracle access to the black-box network. To quantify visual distortion, the perceptual distance between the adversarial example and the original image is introduced into our loss. We first approximate the gradient of the corresponding non-differentiable loss function by sampling noise from the learned noise distribution. The distribution is then updated using the estimated gradient to reduce visual distortion, and learning continues until an adversarial example is found. We validate the effectiveness of our attack on ImageNet. Our attack yields much lower distortion than state-of-the-art black-box attacks and achieves a 100% success rate on InceptionV3, ResNet50 and VGG16bn. Furthermore, we theoretically prove the convergence of our model. The code is publicly available at https://github.com/Alina-1997/visual-distortion-in-attack. [ABSTRACT FROM AUTHOR]
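The abstract describes a sampling-based scheme: because the loss is non-differentiable and the network is a black box, the gradient with respect to the noise-distribution parameters is estimated from loss-oracle queries on sampled noise, and the distribution is then updated by gradient descent. The following is a minimal sketch of that loop, not the authors' released code (see the GitHub link above for that). Assumptions made here, not stated in the abstract: the noise distribution is an isotropic Gaussian whose mean is learned, the perceptual distance is approximated by a plain L2 norm, and `query_loss` / `is_adversarial` are hypothetical callbacks standing in for loss-oracle access.

```python
# Sketch of a score-function (REINFORCE-style) black-box attack that
# penalizes distortion. Assumed, not from the paper: Gaussian noise
# distribution N(mu, SIGMA^2 I), L2 proxy for perceptual distance,
# and the hypothetical callbacks `query_loss` / `is_adversarial`.
import numpy as np

LAMBDA = 0.1   # weight on the distortion term in the total loss (assumed)
SIGMA = 0.05   # fixed std-dev of the noise distribution; only the mean is learned

def total_loss(x, x_adv, query_loss):
    """Black-box attack loss from the oracle plus an L2 distortion penalty."""
    return query_loss(x_adv) + LAMBDA * np.linalg.norm(x_adv - x)

def estimate_gradient(mu, x, query_loss, n_samples=20):
    """Score-function estimate of the gradient of the expected,
    non-differentiable loss w.r.t. the distribution mean mu."""
    grad = np.zeros_like(mu)
    for _ in range(n_samples):
        eps = np.random.randn(*mu.shape)              # standard normal sample
        x_adv = np.clip(x + mu + SIGMA * eps, 0.0, 1.0)
        l = total_loss(x, x_adv, query_loss)          # one oracle query
        grad += l * eps / SIGMA                       # score-function term
    return grad / n_samples

def attack(x, query_loss, is_adversarial, steps=500, lr=0.01):
    """Update the noise distribution until a low-distortion adversarial
    example is found, mirroring the loop sketched in the abstract."""
    mu = np.zeros_like(x)
    for _ in range(steps):
        mu -= lr * estimate_gradient(mu, x, query_loss)
        x_adv = np.clip(x + mu, 0.0, 1.0)
        if is_adversarial(x_adv):                     # stop at first success
            return x_adv
    return None   # query budget exhausted without success
```

In this reading, minimizing the expected total loss pushes the learned distribution toward noise that both fools the network and stays perceptually close to the original image; the paper's actual distribution family, perceptual metric, and update rule may differ.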

Subjects

Subjects :
*NOISE
*DATA visualization
*SUCCESS

Details

Language :
English
ISSN :
1057-7149
Volume :
30
Database :
Academic Search Index
Journal :
IEEE Transactions on Image Processing
Publication Type :
Academic Journal
Accession number :
170077898
Full Text :
https://doi.org/10.1109/TIP.2021.3092822