Gradient Aggregation Boosting Adversarial Examples Transferability Method.
- Author
- DENG Shiyun and LING Jie
- Subjects
- ARTIFICIAL neural networks, IMAGE recognition (Computer vision), NEIGHBORHOODS, OSCILLATIONS, SUCCESS
- Abstract
Image classification models based on deep neural networks are vulnerable to adversarial examples. Existing studies have shown that white-box attacks can achieve a high attack success rate, but the resulting adversarial examples transfer poorly when used to attack other models. To improve the transferability of adversarial attacks, this paper proposes a gradient aggregation method for enhancing the transferability of adversarial examples. First, the original image is mixed with images from other classes in a specific ratio to obtain a mixed image; by jointly considering information from different image categories and balancing the gradient contributions between categories, the influence of local oscillations is avoided. Second, during the iterative process, the gradient information of other data points in the neighborhood of the current point is aggregated to optimize the gradient direction, avoiding excessive dependence on a single data point and thus generating adversarial examples with stronger transferability. Experimental results on the ImageNet dataset show that the proposed method significantly improves the black-box attack success rate and the transferability of adversarial examples. In single-model attacks, the method achieves an average attack success rate of 88.5% across four conventionally trained models, 2.7 percentage points higher than the Admix method; in ensemble-model attacks, the average attack success rate reaches 92.7%. In addition, the proposed method can be combined with transformation-based adversarial attack methods, and its average attack success rate on three adversarially trained models is 10.1 percentage points higher than that of the Admix method, further enhancing the transferability of adversarial attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
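The abstract describes two ingredients: mixing the input with images from other classes before computing gradients (in the spirit of Admix), and averaging gradients sampled from the neighborhood of the current iterate. The sketch below shows how these two steps could sit on top of a momentum-based iterative attack. It is an illustrative reading of the abstract, not the authors' published algorithm; parameter names and values such as `eta`, `num_neighbors`, and `radius` are assumptions for demonstration only.

```python
# Hedged sketch: gradient aggregation over (1) other-class mixtures and
# (2) neighborhood samples, on top of a momentum iterative attack.
# All hyperparameter choices here are illustrative, not the paper's settings.
import torch
import torch.nn.functional as F

def aggregated_gradient_attack(model, x, y, other_class_images,
                               eps=16 / 255, steps=10, decay=1.0,
                               eta=0.2, num_neighbors=5, radius=1.5):
    alpha = eps / steps                      # per-step size
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)

    for _ in range(steps):
        grad_sum = torch.zeros_like(x)

        # (1) Mix the current adversarial image with other-class images in a
        #     fixed ratio, so gradients reflect several categories at once.
        for x_other in other_class_images:
            x_mixed = x_adv + eta * x_other

            # (2) Aggregate gradients from random points in the neighborhood
            #     of the mixed image instead of relying on a single point.
            for _ in range(num_neighbors):
                noise = torch.empty_like(x).uniform_(-radius * eps, radius * eps)
                x_nb = (x_mixed + noise).detach().requires_grad_(True)
                loss = F.cross_entropy(model(x_nb), y)
                grad_sum += torch.autograd.grad(loss, x_nb)[0]

        grad = grad_sum / (len(other_class_images) * num_neighbors)

        # Momentum accumulation (MI-FGSM style), signed step, then projection
        # back into the eps-ball around the original image and the valid range.
        momentum = decay * momentum + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + alpha * momentum.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)

    return x_adv.detach()
```

In practice the `other_class_images` batch would presumably be resampled from randomly chosen other classes at each iteration, as Admix does; the sketch keeps it fixed only to stay short.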