
AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack

Authors :
Hyun Kwon
Jun Lee
Source :
IEEE Access, Vol 12, Pp 5345-5356 (2024)
Publication Year :
2024
Publisher :
IEEE, 2024.

Abstract

Deep neural networks (DNNs) provide excellent performance in image recognition, speech recognition, video recognition, and pattern analysis. However, they are vulnerable to adversarial example attacks. An adversarial example is an input to which a small amount of noise has been strategically added; it appears normal to the human eye but is misrecognized by the DNN. In this paper, we propose AdvGuard, a method for resisting adversarial example attacks. This defense prevents the generation of adversarial examples by constructing a robust DNN that returns random confidence values. The method requires no adversarial training, no additional processing modules, and no input-data filtering. In addition, a DNN constructed using the proposed scheme can defend against adversarial examples while maintaining its accuracy on the original samples. In the experimental evaluation, MNIST and CIFAR10 were used as datasets, and TensorFlow was used as the machine learning library. The results show that a DNN constructed using the proposed method correctly classifies adversarial examples with 100% and 99.5% accuracy on MNIST and CIFAR10, respectively.
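The abstract does not specify how the random confidence values are produced; the sketch below is only a minimal illustration of the general idea, not the authors' implementation. It assumes a hypothetical wrapper (here named randomized_confidences) that keeps the model's predicted class fixed while replacing the reported probabilities with random values, so that the confidence scores leak no useful gradient or score signal to an attacker crafting adversarial examples.

    import numpy as np

    def randomized_confidences(logits, rng=np.random.default_rng()):
        # Hypothetical sketch (not from the paper): return a random
        # probability vector whose argmax matches the model's original
        # prediction, so the output class is preserved while the
        # confidence values are uninformative to an attacker.
        pred = int(np.argmax(logits))
        probs = rng.dirichlet(np.ones(len(logits)))  # uniform random point on the probability simplex
        top = int(np.argmax(probs))
        # Move the largest random mass onto the predicted class.
        probs[pred], probs[top] = probs[top], probs[pred]
        return probs

    # Example: the reported class (index 0) is stable across calls,
    # but the confidence values vary randomly each time.
    logits = np.array([2.1, 0.3, -1.0])
    print(randomized_confidences(logits))

Under these assumptions, the predicted label, and hence clean accuracy, is unchanged, while repeated queries return different confidence vectors, which is consistent with the abstract's claim of defending without adversarial training or input filtering.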

Details

Language :
English
ISSN :
2169-3536
Volume :
12
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.1c5ad8d8a490421dbba836958edb9d23
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2020.3042839