A Novel Adversarial Example Detection Method Based on Frequency Domain Reconstruction for Image Sensors.
- Source :
- Sensors (14248220). Sep 2024, Vol. 24, Issue 17, p5507. 20p.
- Publication Year :
- 2024
Abstract
- Convolutional neural networks (CNNs) have been extensively used in numerous remote sensing image detection tasks owing to their exceptional performance. Nevertheless, CNNs are often vulnerable to adversarial examples, limiting their use in safety-critical scenarios. Recently, how to efficiently detect adversarial examples and improve the robustness of CNNs has drawn considerable attention. Existing adversarial example detection methods require modifying the CNN, which not only affects model performance but also greatly increases training cost. To address these problems, this study proposes a detection algorithm for adversarial examples that does not require modifying the CNN model and simultaneously retains the classification accuracy on normal examples. Specifically, we design a method to detect adversarial examples using frequency domain reconstruction. After converting the input adversarial examples into the frequency domain via the Fourier transform, the perturbation introduced by adversarial attacks can be suppressed by modifying the example's frequency components. The inverse Fourier transform is then used to recover the original example as faithfully as possible. First, we train a CNN to reconstruct input examples. Then, we apply the Fourier transform, a convolution operation, and the inverse Fourier transform to the features of the input examples to automatically filter out adversarial frequencies. We refer to the proposed method as FDR (frequency domain reconstruction): it removes adversarial interference by converting input samples into the frequency domain and reconstructing them back into the spatial domain to restore the image. In addition, we introduce gradient masking into the proposed FDR method to enhance its detection accuracy on complex adversarial examples. We conduct extensive experiments with five mainstream adversarial attacks on three benchmark datasets, and the results show that FDR outperforms state-of-the-art solutions in detecting adversarial examples. Additionally, FDR does not require any modification to the detector and can be combined with other adversarial example detection methods and deployed in sensing devices to ensure detection safety.
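- The abstract outlines a three-stage pipeline: transform the input to the frequency domain, filter out adversarially perturbed frequency components, and inverse-transform back to the spatial domain, then compare the classifier's behavior on the original and reconstructed inputs. The sketch below illustrates that idea only; it is not the authors' implementation. The fixed low-pass mask and the `keep_ratio` parameter are hypothetical stand-ins for the learned convolutional filtering the paper describes, and `frequency_domain_reconstruct` / `is_adversarial` are illustrative names.

```python
import torch

def frequency_domain_reconstruct(x: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Illustrative frequency-domain reconstruction for an image batch
    of shape (N, C, H, W): FFT the spatial dimensions, suppress high
    frequencies (where adversarial noise tends to concentrate), and
    inverse-FFT back to the spatial domain.

    keep_ratio is a hypothetical low-pass cutoff, not a parameter
    taken from the paper.
    """
    # 2-D FFT over the spatial dims; fftshift centers the zero frequency.
    freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))

    # Fixed centered low-pass mask; the paper instead learns this
    # filtering with a convolution.
    _, _, h, w = x.shape
    mask = torch.zeros(h, w, device=x.device)
    dh = max(1, int(h * keep_ratio / 2))
    dw = max(1, int(w * keep_ratio / 2))
    mask[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw] = 1.0

    # Zero out high frequencies, then transform back to the spatial domain.
    freq = freq * mask  # float mask broadcasts over the complex spectrum
    recon = torch.fft.ifft2(torch.fft.ifftshift(freq, dim=(-2, -1))).real
    return recon.clamp(0.0, 1.0)

def is_adversarial(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Detection heuristic: flag inputs whose predicted class changes
    after frequency-domain reconstruction."""
    with torch.no_grad():
        pred_orig = model(x).argmax(dim=1)
        pred_recon = model(frequency_domain_reconstruct(x)).argmax(dim=1)
    return pred_orig != pred_recon
```

- Because the reconstruction runs as a preprocessing step, this style of detector leaves the protected classifier untouched, which matches the abstract's claim that FDR requires no modification of the CNN model.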
Details
- Language :
- English
- ISSN :
- 14248220
- Volume :
- 24
- Issue :
- 17
- Database :
- Academic Search Index
- Journal :
- Sensors (14248220)
- Publication Type :
- Academic Journal
- Accession number :
- 179646449
- Full Text :
- https://doi.org/10.3390/s24175507