
Black-box adversarial attacks by manipulating image attributes.

Authors :
Wei, Xingxing
Guo, Ying
Li, Bo
Source :
Information Sciences. Mar2021, Vol. 550, p285-296. 12p.
Publication Year :
2021

Abstract

Although there exist various adversarial attacking methods, most of them are performed by generating adversarial noises. Inspired by the fact that people usually set different camera parameters to obtain diverse visual styles when taking a picture, we propose adversarial attributes, which generate adversarial examples by manipulating image attributes such as brightness, contrast, sharpness, and chroma to simulate the imaging process. This task is accomplished under the black-box setting, where only the predicted probabilities are known. We formulate this process as an optimization problem. After efficiently solving this problem, the optimal adversarial attributes are obtained with limited queries. To guarantee the realistic effect of adversarial examples, we bound the attribute changes using the Lp norm with different p values. Besides, we also give a formal explanation for the adversarial attributes based on the linear nature of Deep Neural Networks (DNNs). Extensive experiments are conducted on two public datasets, CIFAR-10 and ImageNet, with respect to four representative DNNs: VGG16, AlexNet, Inception v3, and ResNet50. The results show that at most 97.79% of images in the CIFAR-10 test dataset and 98.01% of the ImageNet images can be successfully perturbed to at least one wrong class with only ⩽ 300 queries per image on average. [ABSTRACT FROM AUTHOR]
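To make the idea concrete, the loop described in the abstract (apply attribute transforms, query the model for probabilities only, search within a norm bound on the attribute changes) can be sketched as follows. This is an illustrative toy, not the paper's actual algorithm: the grayscale image model, the `toy_model` classifier, the restriction to brightness and contrast, and the use of plain random search with an L-infinity bound of `eps` on each attribute are all assumptions made here for a self-contained example.

```python
import random

def apply_attributes(pixels, brightness, contrast):
    """Re-render a toy grayscale image (list of floats in [0, 1])
    under new brightness/contrast attribute settings."""
    mean = sum(pixels) / len(pixels)
    out = []
    for p in pixels:
        v = (mean + contrast * (p - mean)) * brightness  # contrast about mean, then brightness
        out.append(min(1.0, max(0.0, v)))                # clamp to valid intensity range
    return out

def toy_model(pixels):
    """Stand-in black-box classifier: exposes only predicted probabilities,
    here a two-class 'dark vs. bright' rule (hypothetical, for illustration)."""
    p_bright = min(1.0, max(0.0, sum(pixels) / len(pixels)))
    return [1.0 - p_bright, p_bright]

def attribute_attack(pixels, true_label, eps=0.3, max_queries=300, seed=0):
    """Random search over (brightness, contrast), each bounded to 1 +/- eps
    (an L-inf bound on the attribute change), minimizing the true-class
    probability until the predicted class flips or the query budget runs out."""
    rng = random.Random(seed)
    best = (1.0, 1.0)
    best_prob = toy_model(apply_attributes(pixels, *best))[true_label]
    for q in range(max_queries):
        cand = (1.0 + rng.uniform(-eps, eps), 1.0 + rng.uniform(-eps, eps))
        probs = toy_model(apply_attributes(pixels, *cand))
        if max(range(len(probs)), key=probs.__getitem__) != true_label:
            return cand, q + 1          # success: image now misclassified
        if probs[true_label] < best_prob:
            best, best_prob = cand, probs[true_label]
    return best, max_queries            # budget exhausted; return best attributes found
```

The paper replaces this naive random search with an efficient black-box optimizer and bounds the changes across several attributes, but the interface is the same: the attacker never sees gradients, only per-query probability vectors.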

Details

Language :
English
ISSN :
0020-0255
Volume :
550
Database :
Academic Search Index
Journal :
Information Sciences
Publication Type :
Periodical
Accession number :
147948177
Full Text :
https://doi.org/10.1016/j.ins.2020.10.028