A non-global disturbance targeted adversarial example algorithm combined with C&W and Grad-Cam.

Authors :
Zhu, Yinghui
Jiang, Yuzhen
Source :
Neural Computing & Applications. Oct 2023, Vol. 35, Issue 29, p21633-21644. 12p.
Publication Year :
2023

Abstract

Adversarial examples are artificially crafted inputs that mislead deep learning systems into making wrong decisions. In research on attack algorithms against multi-class image classifiers, an improved strategy is proposed that applies category explanation to control the generation of targeted adversarial examples, reducing perturbation noise and improving adversarial robustness. Building on the C&W adversarial attack algorithm, the method uses Grad-Cam, a category visualization explanation algorithm for CNNs, to dynamically obtain salient regions according to the signal features of the source and target categories during the iterative generation process. A non-globally perturbed adversarial example is finally achieved by gradually shielding the non-salient regions and fine-tuning the perturbation signals. Compared with similar algorithms under the same conditions, the method strengthens the influence of the original image's category signal on the perturbation position, and it addresses the shortcomings of adversarial example algorithms in terms of interpretability and teaching intuitiveness. Experimental results show that the improved adversarial examples achieve higher PSNR. In addition, across a variety of defense processing tests, the examples maintain high adversarial performance and show strong attack robustness. [ABSTRACT FROM AUTHOR]
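The core idea in the abstract, restricting the perturbation to Grad-Cam-salient regions while shielding the rest of the image, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the quantile-based threshold, the `keep_ratio` parameter, and the random heatmap and perturbation are all hypothetical stand-ins, and the actual method recomputes the Grad-Cam map and optimizes the perturbation with the C&W objective during each iteration.

```python
import numpy as np

def salient_mask(heatmap, keep_ratio=0.3):
    """Binary mask keeping only the top `keep_ratio` fraction of
    Grad-Cam activations (an assumed thresholding scheme)."""
    thresh = np.quantile(heatmap, 1.0 - keep_ratio)
    return (heatmap >= thresh).astype(heatmap.dtype)

def apply_masked_perturbation(image, perturbation, heatmap, keep_ratio=0.3):
    """Shield non-salient regions: the perturbation is zeroed outside the
    salient mask, yielding a non-global adversarial perturbation."""
    mask = salient_mask(heatmap, keep_ratio)
    # Broadcast the (H, W) mask over the channel axis and keep pixels valid.
    return np.clip(image + perturbation * mask[..., None], 0.0, 1.0)

# Toy demonstration with random stand-ins for the image, the C&W
# perturbation, and the Grad-Cam heatmap.
rng = np.random.default_rng(0)
image = rng.random((8, 8, 3))
perturbation = 0.05 * rng.standard_normal((8, 8, 3))
heatmap = rng.random((8, 8))

adv = apply_masked_perturbation(image, perturbation, heatmap, keep_ratio=0.25)
changed = np.any(adv != image, axis=-1)
print(changed.mean())  # fraction of pixels actually perturbed, roughly keep_ratio
```

Because the perturbation is confined to a small salient region, most pixels are untouched, which is consistent with the higher PSNR the abstract reports for the non-global examples.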

Details

Language :
English
ISSN :
0941-0643
Volume :
35
Issue :
29
Database :
Academic Search Index
Journal :
Neural Computing & Applications
Publication Type :
Academic Journal
Accession number :
171993344
Full Text :
https://doi.org/10.1007/s00521-023-08921-2