
Attack Selectivity of Adversarial Examples in Remote Sensing Image Scene Classification

Authors :
Li Chen
Haifeng Li
Guowei Zhu
Qi Li
Jiawei Zhu
Haozhe Huang
Jian Peng
Lin Zhao
Source :
IEEE Access, Vol 8, Pp 137477-137489 (2020)
Publication Year :
2020
Publisher :
IEEE, 2020.

Abstract

Remote sensing image (RSI) scene classification is a foundational technology for ground object detection, land use management, and geographic analysis. In recent years, convolutional neural networks (CNNs) have achieved significant success and are widely applied to RSI scene classification. However, carefully crafted images known as adversarial examples can fool CNNs with high confidence while remaining almost imperceptible to human eyes. Given the increasing security and robustness requirements of RSI scene classification, adversarial examples pose a serious threat to classification results derived from CNN-based systems, a threat that previous research has not fully recognized. In this study, to explore the properties of adversarial examples in RSI scene classification, we create different scenarios by applying two major attack algorithms, the fast gradient sign method (FGSM) and the basic iterative method (BIM), to CNNs (i.e., InceptionV1, ResNet, and a simple CNN) trained on different RSI benchmark datasets. Our results show that CNNs for RSI scene classification are also vulnerable to adversarial examples, with some attacks achieving fooling rates of over 80%. The effectiveness of these adversarial examples depends on the CNN architecture and the RSI dataset: InceptionV1 has a fooling rate of less than 5%, lower than the other models, and adversarial examples are easier to generate on the UCM dataset than on the other datasets. Importantly, we also find that the classes assigned to adversarial examples exhibit an attack selectivity property: misclassifications of adversarial RSIs are related to the similarity of the original classes in the CNN feature space. Attack selectivity reveals the likely target classes of adversarial examples and provides insights for the design of defense algorithms in future research.
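For context, FGSM perturbs an image in a single step along the sign of the loss gradient, and BIM iterates that step while clipping the result to a small neighborhood of the original image. Below is a minimal PyTorch-style sketch of both attacks; the classifier model, epsilon, alpha, and step count are illustrative assumptions, not values taken from the paper.

    # Minimal sketch of FGSM and BIM, assuming a trained PyTorch classifier
    # `model` and inputs scaled to [0, 1]. epsilon, alpha, and the step count
    # are illustrative choices, not the paper's experimental settings.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.01):
        """FGSM: one step of size epsilon along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()   # move in the direction that raises the loss
        return x_adv.clamp(0.0, 1.0).detach() # keep a valid image

    def bim_attack(model, x, y, epsilon=0.01, alpha=0.002, steps=10):
        """BIM: iterate small FGSM steps, clipping to an epsilon-ball around x."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            with torch.no_grad():
                x_adv = x_adv + alpha * x_adv.grad.sign()
                x_adv = torch.clamp(x_adv, x - epsilon, x + epsilon)  # stay near x
                x_adv = x_adv.clamp(0.0, 1.0)
            x_adv = x_adv.detach()
        return x_adv

Because BIM takes many small steps rather than one large one, it typically reaches higher fooling rates than FGSM at the same perturbation budget, which matches the role the two attacks play in the experiments described above.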

Details

Language :
English
ISSN :
2169-3536
Volume :
8
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.025d8efc72b2426489f8c37501fde31e
Document Type :
Article
Full Text :
https://doi.org/10.1109/ACCESS.2020.3011639