
Query-Efficient Black-Box Adversarial Attacks Guided by a Transfer-Based Prior.

Authors :
Dong, Yinpeng
Cheng, Shuyu
Pang, Tianyu
Su, Hang
Zhu, Jun
Source :
IEEE Transactions on Pattern Analysis & Machine Intelligence; Dec 2022, Vol. 44, Issue Part 2, p9536-9548, 13p
Publication Year :
2022

Abstract

Adversarial attacks have been extensively studied in recent years because they can identify the vulnerabilities of deep learning models before they are deployed. In this paper, we consider the black-box adversarial setting, in which the adversary must craft adversarial examples without access to the gradients of the target model. Previous methods approximate the true gradient either by using the transfer gradient of a surrogate white-box model or by relying on the feedback of model queries. However, existing methods inevitably suffer from low attack success rates or poor query efficiency, since it is difficult to estimate the gradient in a high-dimensional input space with limited information. To address these problems and improve black-box attacks, we propose two prior-guided random gradient-free (PRGF) algorithms, based on biased sampling and gradient averaging, respectively. Our methods can simultaneously take advantage of a transfer-based prior, given by the gradient of a surrogate model, and the query information. Through theoretical analyses, the transfer-based prior is appropriately integrated with model queries via an optimal coefficient in each method. Extensive experiments demonstrate that, compared with state-of-the-art alternatives, both of our methods require far fewer queries to attack black-box models with higher success rates. [ABSTRACT FROM AUTHOR]
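
To make the idea concrete, the following is a minimal, hedged sketch (not the authors' reference implementation) of a random gradient-free estimator whose sampling is biased toward a transfer-based prior. The names loss_fn, prior, and lam are hypothetical placeholders: loss_fn stands for a query-only loss oracle, prior for the gradient of a surrogate white-box model, and lam for the coefficient balancing the prior against random exploration; the paper derives an optimal value for this coefficient, while here it is just a fixed hyperparameter.

# Minimal sketch of prior-guided random gradient-free estimation via biased sampling.
# Assumptions (hypothetical, for illustration only): loss_fn is a one-query-per-call
# black-box loss oracle, prior is a surrogate model's gradient, and lam is a fixed
# stand-in for the optimal mixing coefficient derived in the paper.
import numpy as np

def prior_guided_rgf(loss_fn, x, prior, num_queries=10, sigma=1e-4, lam=0.5):
    """Estimate the gradient of loss_fn at x using finite differences
    along random directions biased toward a transfer-based prior."""
    d = x.size
    v = prior / (np.linalg.norm(prior) + 1e-12)       # unit prior direction
    f0 = loss_fn(x)                                    # baseline query
    grad_est = np.zeros_like(x)
    for _ in range(num_queries):
        xi = np.random.randn(d)
        xi -= xi.dot(v) * v                            # component orthogonal to the prior
        xi /= (np.linalg.norm(xi) + 1e-12)
        # Biased sampling: mix the prior direction with a random orthogonal direction.
        u = np.sqrt(lam) * v + np.sqrt(1.0 - lam) * xi
        # Finite-difference estimate of the directional derivative along u.
        grad_est += (loss_fn(x + sigma * u) - f0) / sigma * u
    return grad_est / num_queries

# Toy usage on a quadratic "black box" with a noisy surrogate gradient.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.standard_normal(100)
    loss_fn = lambda x: 0.5 * np.sum((x - target) ** 2)
    x = np.zeros(100)
    true_grad = x - target
    surrogate = true_grad + 0.5 * rng.standard_normal(100)  # imperfect transfer prior
    g = prior_guided_rgf(loss_fn, x, surrogate, num_queries=20)
    cos = g.dot(true_grad) / (np.linalg.norm(g) * np.linalg.norm(true_grad))
    print(f"cosine similarity with the true gradient: {cos:.3f}")

The toy usage illustrates the intended effect: when the surrogate gradient is informative, biasing the sampled directions toward it raises the cosine similarity between the estimate and the true gradient for the same query budget; how the mixing coefficient is chosen optimally is the subject of the paper's theoretical analysis.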

Details

Language :
English
ISSN :
0162-8828
Volume :
44
Issue :
Part 2
Database :
Complementary Index
Journal :
IEEE Transactions on Pattern Analysis & Machine Intelligence
Publication Type :
Academic Journal
Accession number :
160711785
Full Text :
https://doi.org/10.1109/TPAMI.2021.3126733