
Assessing Optimizer Impact on DNN Model Sensitivity to Adversarial Examples

Authors:
Yixiang Wang
Jiqiang Liu
Jelena Misic
Vojislav B. Misic
Shaohua Lv
Xiaolin Chang
Source:
IEEE Access, Vol. 7, pp. 152766-152776 (2019)
Publication Year:
2019
Publisher:
IEEE, 2019.

Abstract

Deep Neural Networks (DNNs) have achieved state-of-the-art results compared with many traditional Machine Learning (ML) models in diverse fields. However, adversarial examples challenge the further deployment and application of DNNs. Prior analyses of the reasons for DNNs' vulnerability to adversarial perturbations have focused on model architecture; no research has investigated how the optimization algorithms (namely, optimizers) used to train DNN models affect the models' sensitivity to adversarial examples. This paper studies that impact from an experimental perspective. We analyze model sensitivity not only under white-box and black-box attack setups, but also across different types of datasets. Four common optimizers, SGD, RMSprop, Adadelta, and Adam, are investigated on structured and unstructured datasets. Extensive experimental results indicate that the optimization algorithm does affect a DNN model's sensitivity to adversarial examples: when training models and generating adversarial examples, the Adam optimizer yields better-quality adversarial examples on structured datasets, while the Adadelta optimizer yields better-quality adversarial examples on unstructured datasets. In addition, the choice of optimizer does not affect the transferability of adversarial examples.
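To make the abstract's experimental design concrete, the sketch below trains the same small network under each of the four optimizers and then measures how often an FGSM-style perturbation flips the model's prediction. This is a minimal illustration, not the paper's actual setup: the MLP architecture, synthetic data, learning rates, epsilon, and the use of FGSM as the attack are all assumptions made for the example.

```python
# Minimal sketch (illustrative, not the paper's exact method): train one
# model per optimizer, craft FGSM adversarial examples, and compare how
# often originally-correct predictions get flipped.
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

# Synthetic stand-in for a "structured" dataset: 1000 samples, 20 features.
X = torch.randn(1000, 20)
y = (X[:, 0] + X[:, 1] > 0).long()

def make_model():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

def train(model, opt, epochs=50):
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

def fgsm_flip_rate(model, eps=0.1):
    # FGSM: step the input along the sign of the loss gradient.
    x_adv = X.clone().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()
    clean = model(X).argmax(dim=1)
    adv = model(x_adv).argmax(dim=1)
    # Fraction of originally-correct predictions flipped by the attack.
    correct = clean == y
    return ((adv != y) & correct).float().sum() / correct.float().sum()

# The four optimizers compared in the paper; learning rates are assumptions.
optimizers = {
    "SGD":      lambda p: optim.SGD(p, lr=0.1),
    "RMSprop":  lambda p: optim.RMSprop(p, lr=0.001),
    "Adadelta": lambda p: optim.Adadelta(p, lr=1.0),
    "Adam":     lambda p: optim.Adam(p, lr=0.001),
}

for name, make_opt in optimizers.items():
    model = make_model()
    train(model, make_opt(model.parameters()))
    print(f"{name}: FGSM flip rate = {fgsm_flip_rate(model):.3f}")
```

A higher flip rate under one optimizer would suggest that models trained with it are more sensitive to adversarial examples, which is the kind of comparison the paper carries out across structured and unstructured datasets.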

Details

Language:
English
ISSN:
2169-3536
Volume:
7
Database:
Directory of Open Access Journals
Journal:
IEEE Access
Publication Type:
Academic Journal
Accession Number:
edsdoj.6ed17233434243aea6ee86b5ef916b88
Document Type:
Article
Full Text:
https://doi.org/10.1109/ACCESS.2019.2948658