Defense Against Adversarial Attacks in Deep Learning.
- Source:
- Applied Sciences (2076-3417); Jan 2019, Vol. 9, Issue 1, p76, 14p
- Publication Year:
- 2019
Abstract
- Neural networks are highly vulnerable to adversarial examples, which threatens their application in security systems such as face recognition and autopilot. In response to this problem, we propose a new defensive strategy. In our strategy, we propose a new deep denoising neural network, called UDDN, to remove the noise from adversarial samples. A standard denoiser suffers from an amplification effect, in which small residual adversarial noise gradually grows and leads to misclassification. The proposed denoiser overcomes this problem with a special loss function, defined as the difference between the model outputs activated by the original image and by the denoised image. At the same time, we propose a new model training algorithm based on knowledge transfer, which can resist slight image perturbations and makes the model generalize better around the training samples. Our proposed defensive strategy is robust against both white-box and black-box attacks, and it is applicable to any model based on a deep neural network. In the experiment, we apply the defensive strategy to a face recognition model. The experimental results show that our algorithm can effectively resist adversarial attacks and improve the accuracy of the model. [ABSTRACT FROM AUTHOR]
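- The two mechanisms summarized above (a denoising loss defined on model outputs rather than on pixels, and knowledge-transfer training) can be sketched briefly. The following is a minimal PyTorch illustration under stated assumptions, not the paper's implementation: `denoiser` and `classifier` are hypothetical stand-ins for UDDN and the protected model, an L1 distance over logits is assumed for the output gap, and a standard distillation term is used as one plausible form of the knowledge-transfer objective.

```python
# Hedged sketch of the two ideas in the abstract; names are hypothetical.
import torch
import torch.nn.functional as F

def guided_denoising_loss(denoiser, classifier, x_clean, x_adv):
    """Train the denoiser on the classifier-output gap, not pixel error,
    so small residual adversarial noise cannot amplify through the network.
    (An L1 distance over logits is assumed here.)"""
    with torch.no_grad():
        target = classifier(x_clean)        # activations for the original image
    pred = classifier(denoiser(x_adv))      # activations for the denoised image
    return F.l1_loss(pred, target)

def knowledge_transfer_loss(student_logits, teacher_logits, T=4.0):
    """Distillation-style transfer term: match the student's softened
    predictions to a teacher's, smoothing the decision surface around
    the training samples. (Temperature T=4 is an illustrative choice.)"""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```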
- Subjects:
- DEEP learning
- FACE perception
- SECURITY systems
Details
- Language:
- English
- ISSN:
- 2076-3417
- Volume:
- 9
- Issue:
- 1
- Database:
- Complementary Index
- Journal:
- Applied Sciences (2076-3417)
- Publication Type:
- Academic Journal
- Accession Number:
- 134075138
- Full Text:
- https://doi.org/10.3390/app9010076