Frequency-Tuned Universal Adversarial Attacks on Texture Recognition.

Authors :
Deng, Yingpeng
Karam, Lina J.
Source :
IEEE Transactions on Image Processing; 2022, Vol. 31, p5856-5868, 13p
Publication Year :
2022

Abstract

Although deep neural networks (DNNs) have been shown to be susceptible to image-agnostic adversarial attacks on natural image classification problems, the effects of such attacks on DNN-based texture recognition have yet to be explored. As part of our work, we find that limiting the perturbation’s $l_{p}$ norm in the spatial domain may not be a suitable way to restrict the perceptibility of universal adversarial perturbations for texture images. Based on the fact that human perception is affected by local visual frequency characteristics, we propose a frequency-tuned universal attack method that computes universal perturbations in the frequency domain. Our experiments indicate that the proposed method can produce less perceptible perturbations while achieving similar or higher white-box fooling rates on various DNN texture classifiers and texture datasets as compared to existing universal attack techniques. We also demonstrate that our approach can improve attack robustness against defended models as well as cross-dataset transferability for texture recognition problems. [ABSTRACT FROM AUTHOR]
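To make the core idea concrete, the following is a minimal Python sketch of a frequency-domain universal perturbation: a single block of DCT-coefficient offsets is tiled over every 8x8 block of an image, with each coefficient scaled by a per-frequency budget so that low-frequency (more visible) components are perturbed less. This is an illustrative assumption-laden sketch, not the authors' method; the function names, the weighting scheme, and the magnitudes are hypothetical, and the universal direction would in practice be optimized to fool a target classifier rather than drawn at random.

    # Hypothetical sketch: tile one universal DCT-domain perturbation over an image,
    # scaling each frequency by an assumed perceptual budget (larger at high frequencies).
    import numpy as np
    from scipy.fft import dctn, idctn

    BLOCK = 8  # block size for the local DCT

    def frequency_weights(block=BLOCK, strength=4.0):
        """Illustrative per-coefficient budget: larger allowance at higher
        spatial frequencies, where changes are assumed to be less visible."""
        u, v = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
        radial = np.sqrt(u**2 + v**2) / (np.sqrt(2) * (block - 1))  # 0 (DC) .. 1
        return strength * radial

    def apply_universal_perturbation(image, delta_freq, weights):
        """Add the same frequency-domain perturbation to every 8x8 block.

        image      : (H, W) grayscale array in [0, 255], H and W multiples of 8
        delta_freq : (8, 8) universal perturbation direction (e.g. in [-1, 1])
        weights    : (8, 8) per-frequency budget from frequency_weights()
        """
        out = image.astype(np.float64).copy()
        h, w = out.shape
        for y in range(0, h, BLOCK):
            for x in range(0, w, BLOCK):
                blk = out[y:y + BLOCK, x:x + BLOCK]
                coeffs = dctn(blk, norm="ortho")
                coeffs += weights * delta_freq          # frequency-tuned update
                out[y:y + BLOCK, x:x + BLOCK] = idctn(coeffs, norm="ortho")
        return np.clip(out, 0, 255)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)   # stand-in texture
        delta = rng.uniform(-1.0, 1.0, size=(BLOCK, BLOCK))            # stand-in universal direction
        adv = apply_universal_perturbation(img, delta, frequency_weights())
        print("max pixel change:", np.abs(adv - img).max())

Note that the resulting pixel-domain change is not bounded by a fixed spatial l_p ball; its visibility is instead controlled per frequency, which is the kind of constraint the abstract argues is better matched to texture images.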

Details

Language :
English
ISSN :
1057-7149
Volume :
31
Database :
Complementary Index
Journal :
IEEE Transactions on Image Processing
Publication Type :
Academic Journal
Accession number :
170077353
Full Text :
https://doi.org/10.1109/TIP.2022.3202366