
Rethinking the Backward Propagation for Adversarial Transferability

Authors:
Wang, Xiaosen
Tong, Kangheng
He, Kun
Publication Year:
2023
Publisher:
arXiv, 2023.

Abstract

Transfer-based attacks generate adversarial examples on a surrogate model that can also mislead other black-box models without any access to them, making such attacks a promising way to target real-world applications. Recently, several works have been proposed to boost adversarial transferability, but the surrogate model itself is usually overlooked. In this work, we identify that non-linear layers (e.g., ReLU and max-pooling) truncate the gradient during backward propagation, so that the gradient w.r.t. the input image only imprecisely reflects the loss function. We hypothesize and empirically validate that such truncation undermines the transferability of adversarial examples. Based on these findings, we propose a novel method called Backward Propagation Attack (BPA) to increase the relevance between the gradient w.r.t. the input image and the loss function, so as to generate adversarial examples with higher transferability. Specifically, BPA adopts a non-monotonic function as the derivative of ReLU and incorporates softmax with temperature to smooth the derivative of max-pooling, thereby mitigating the information loss during the backward propagation of gradients. Empirical results on the ImageNet dataset demonstrate that our method not only substantially boosts adversarial transferability but is also general enough to combine with existing transfer-based attacks.

Comment: 14 pages
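
As a concrete illustration of the mechanism the abstract describes, the following is a minimal PyTorch sketch of keeping the forward pass of ReLU and max-pooling unchanged while replacing their backward passes with smoother surrogates. The class names, the use of the SiLU derivative as the non-monotonic replacement for ReLU's 0/1 gradient mask, the 2x2 pooling geometry, and the temperature parameterization are all illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn.functional as F

class ReLUSoftBackward(torch.autograd.Function):
    """Standard ReLU forward; a non-monotonic surrogate derivative in backward."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return F.relu(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        s = torch.sigmoid(x)
        # Assumed surrogate: the derivative of SiLU, s * (1 + x * (1 - s)),
        # which is non-monotonic, in place of ReLU's hard 0/1 gradient mask.
        return grad_out * s * (1 + x * (1 - s))

class MaxPoolSoftBackward(torch.autograd.Function):
    """Standard 2x2 max-pooling forward; softmax-smoothed backward."""

    @staticmethod
    def forward(ctx, x, temperature=1.0):
        ctx.save_for_backward(x)
        ctx.temperature = temperature
        return F.max_pool2d(x, kernel_size=2, stride=2)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        n, c, h, w = x.shape  # h and w are assumed even in this sketch
        # Gather each non-overlapping 2x2 window, softmax over its 4 entries,
        # and spread the incoming gradient proportionally instead of routing
        # it only to the window maximum; a smaller temperature approaches the
        # hard max-pooling gradient.
        windows = F.unfold(x, kernel_size=2, stride=2)  # (n, c*4, L)
        weights = torch.softmax(windows.reshape(n, c, 4, -1) / ctx.temperature, dim=2)
        grad_windows = weights * grad_out.reshape(n, c, 1, -1)
        grad_in = F.fold(grad_windows.reshape(n, c * 4, -1),
                         output_size=(h, w), kernel_size=2, stride=2)
        return grad_in, None

In an attack loop, one would swap these functions into the surrogate model in place of its ReLU and max-pooling operations (e.g., via ReLUSoftBackward.apply) before differentiating the loss w.r.t. the input image; the forward predictions stay identical, and only the gradient path changes.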

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....4f5cd7ca8a5f4a98dc2de391ca63fcee
Full Text:
https://doi.org/10.48550/arxiv.2306.12685