Adversarial Training Time Attack Against Discriminative and Generative Convolutional Models

Authors :
Subhajit Chaudhury
Hiya Roy
Sourav Mishra
Toshihiko Yamasaki
Source :
IEEE Access, Vol 9, Pp 109241-109259 (2021)
Publication Year :
2021
Publisher :
Institute of Electrical and Electronics Engineers (IEEE), 2021.

Abstract

In this paper, we show that adversarial training time attacks that modify only a few pixels can cause undesirable overfitting in neural networks, for both discriminative and generative models. We propose an evolutionary algorithm to search for an optimal pixel attack, using a novel cost function inspired by the domain adaptation literature to design our training time attack. The proposed cost function explicitly maximizes the generalization gap and the domain divergence between clean and corrupted images. Empirical evaluations demonstrate that our adversarial training attack can achieve significantly low testing accuracy (with high training accuracy) on multiple datasets by perturbing just a single pixel in the training images. Even when popular regularization techniques are used, we observe a significant performance drop compared to training on clean data. Our attack is more successful than previous pixel-based training time attacks on state-of-the-art Convolutional Neural Network (CNN) architectures, as evidenced by significantly lower testing accuracy. Interestingly, we find that the choice of optimizer plays an essential role in robustness against our attack. We empirically observe that Stochastic Gradient Descent (SGD) is resilient to the proposed adversarial training attack, unlike adaptive optimization techniques such as the popular Adam optimizer. We identify that such vulnerabilities are caused by the cross-entropy (CE) loss's over-reliance on highly predictive features. Therefore, we propose a robust loss function that maximizes the mutual information between latent features and input images, in addition to optimizing the CE loss. Finally, we show that the discriminator in Generative Adversarial Networks (GANs) can also be attacked by our proposed training time attack, resulting in poor generative performance. Our paper is one of the first works to design attacks for generative models.
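
The abstract describes an evolutionary search for a single-pixel training time (poisoning) perturbation. The code below is a minimal, hypothetical sketch of that idea, not the authors' implementation: it uses SciPy's differential evolution, a small scikit-learn classifier as a surrogate model, and plain post-training test accuracy as the fitness to minimize, whereas the paper's cost function combines the generalization gap with a domain-divergence term. All function names and parameter choices are illustrative assumptions.

# Hypothetical sketch: single-pixel training time (poisoning) attack found by
# evolutionary search. The fitness here is a simplified proxy (test accuracy of
# a surrogate classifier trained on the poisoned data), not the paper's cost.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 8x8 grayscale digits, pixel values in [0, 16]
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison(X_imgs, pixel_idx, pixel_val):
    # Overwrite the same single pixel (location and value) in every training image.
    Xp = X_imgs.copy()
    Xp[:, int(pixel_idx)] = pixel_val
    return Xp

def fitness(params):
    # Lower is better for the attacker: clean-test accuracy after training
    # the surrogate model on poisoned training data.
    pixel_idx, pixel_val = params
    Xp = poison(X_train, pixel_idx, pixel_val)
    clf = LogisticRegression(max_iter=1000).fit(Xp, y_train)
    return clf.score(X_test, y_test)

# Evolutionary search over (pixel index, pixel value); small budget for illustration.
bounds = [(0, X.shape[1] - 1), (0.0, 16.0)]
result = differential_evolution(fitness, bounds, maxiter=5, popsize=10, seed=0)
print("best single-pixel attack:", result.x, "poisoned test accuracy:", result.fun)

In the paper, the surrogate of choice would instead be a CNN and the fitness would reward a large gap between training and testing performance together with divergence between clean and corrupted feature distributions; the sketch only illustrates the outer evolutionary search loop over a one-pixel perturbation.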

Details

ISSN :
2169-3536
Volume :
9
Database :
OpenAIRE
Journal :
IEEE Access
Accession number :
edsair.doi.dedup.....c902fd50e70c6c5b80645cc5373c3fa3