301. Adversarial attacks and active defense on deep learning based identification of GaN power amplifiers under physical perturbation
- Author
Yuqing Xu, Guangxia Xu, Zeliang An, Martin Hedegaard Nielsen, and Ming Shen
- Subjects
Feature-level interpretability, Adversarial attacks, Convolutional neural network, Deep learning, Radiofrequency fingerprinting identification, Electrical and Electronic Engineering
- Abstract
Deep learning (DL)-based radiofrequency (RF) fingerprinting identification is of rapidly growing importance in the wireless industry, including 5G, IoT, and wireless sensor networks (WSN). However, the robustness of the corresponding DL models against adversarial attacks, and the effectiveness of adversarial defenses, have received little investigation. This paper examines the effects of four state-of-the-art adversarial attacks on DL-based RF fingerprinting identification and provides a graphical analysis of the RF feature perturbations they induce. For the first time, we also demonstrate the results of combining a physical attack on the device hardware (i.e., tuning the drain currents of the power amplifiers) with adversarial attacks on the DL-based classifiers. Experimental results from 16 gallium nitride (GaN) power amplifiers (PAs) reveal that adversarial attacks can severely degrade RF identification accuracy from about 100.00% to below 10.00% with only 0.50% adversarial perturbation. To counter this loss of accuracy, we propose an adversarial defense method based on feature-level interpretability (FIAT), which improves RF identification accuracy through interpretability analysis. Comparisons with several common defense methods show that the proposed FIAT maintains 96.00% accuracy even under a 5.00% adversarial perturbation combined with the physical attack. Moreover, the changes in data features during adversarial attacks and defense are presented, providing a new perspective on the interpretability of adversarial attacks and defense in RF identification tasks. (A minimal illustrative sketch of such a gradient-based perturbation is given after this entry.)
- Published
2023
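
The abstract does not specify how the adversarial perturbations are generated. As a rough illustration of the kind of gradient-based attack discussed (a perturbation on the order of 0.50% of the signal scale applied to a CNN classifier of RF device fingerprints), the following is a minimal FGSM-style sketch in PyTorch. The toy network, the I/Q input shape, and the interpretation of the perturbation percentage as an epsilon relative to the signal amplitude are assumptions for illustration, not the paper's actual attack or classifier.

```python
# Minimal FGSM-style sketch of an adversarial perturbation on an RF signal
# classifier. Illustration only: the network, input shape, and epsilon
# scaling are assumptions, not the setup used in the paper.
import torch
import torch.nn as nn


class TinyRFClassifier(nn.Module):
    """Toy 1-D CNN over I/Q traces (2 channels); 16 device classes assumed."""

    def __init__(self, num_classes: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))


def fgsm_attack(model, x, y, epsilon):
    """Fast Gradient Sign Method: x_adv = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


if __name__ == "__main__":
    model = TinyRFClassifier()
    x = torch.randn(8, 2, 1024)        # batch of I/Q traces (assumed shape)
    y = torch.randint(0, 16, (8,))     # ground-truth device labels
    # "0.50% perturbation" interpreted here as epsilon = 0.5% of signal scale.
    eps = 0.005 * x.abs().max()
    x_adv = fgsm_attack(model, x, y, eps)
    print("max |perturbation|:", (x_adv - x).abs().max().item())
```

In this sketch the perturbation budget is tied to the peak signal amplitude; other reasonable choices (e.g., a per-sample L2 budget) would change the numbers but not the mechanism.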