1. Game Theoretical Adversarial Deep Learning With Variational Adversaries
- Author
- Aneesh Sreevallabh Chivukula, Xinghao Yang, Wei Liu, Wanlei Zhou, and Tianqing Zhu
- Subjects
- Artificial neural network, Computer science, Deep learning, Stochastic game, Adversary, Convolutional neural network, Autoencoder, Nash equilibrium, Stackelberg competition, Artificial intelligence, Game theory
- Abstract
A critical challenge in machine learning is the vulnerability of learning models to attacks from malicious adversaries. In this research, we propose game-theoretical learning between a variational adversary and a Convolutional Neural Network (CNN), which participate in a variable-sum, two-player, sequential Stackelberg game. Our adversary manipulates the input data distribution so that the CNN misclassifies the manipulated data. The ideal adversarial manipulation is the smallest change to the data that is still large enough to mislead the CNN. We propose an optimization procedure that finds optimal adversarial manipulations by solving for the Nash equilibrium of the Stackelberg game. Specifically, the adversary's payoff function depends on the data manipulation, which is produced by a Variational Autoencoder, while the CNN classifier's payoff function is evaluated by its misclassification error. The optimization of our adversarial manipulations is carried out by Alternating Least Squares and Simulated Annealing. Experimental results demonstrate that our game-theoretic manipulations mislead CNNs that are well trained on the original data as well as on data generated by other models. We then let the CNNs incorporate our manipulated data, which leads to secure classifiers that are empirically the most robust in defending against various types of adversarial attacks.
- Published
- 2021
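
The abstract above describes an alternating, leader-follower optimization between a VAE-based adversary and a CNN classifier. The following is a minimal sketch of that structure, not the authors' code: the architectures, hyperparameters, and synthetic data are assumptions for illustration, the VAE is left untrained, the adversary's simulated-annealing search is reduced to a single manipulation-strength parameter, and a gradient-based retraining step stands in for the paper's Alternating Least Squares update.

```python
# Hypothetical sketch of the alternating Stackelberg-style optimization:
# the adversary (leader) perturbs the data to raise CNN error while paying
# a cost for large manipulations; the CNN (follower) retrains on the
# manipulated data. Not the authors' implementation.
import math, random
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3, padding=1)
        self.fc = nn.Linear(8 * 28 * 28, n_classes)
    def forward(self, x):
        return self.fc(F.relu(self.conv(x)).flatten(1))

class TinyVAE(nn.Module):
    def __init__(self, d=784, z=16):
        super().__init__()
        self.enc_mu, self.enc_logvar = nn.Linear(d, z), nn.Linear(d, z)
        self.dec = nn.Linear(z, d)
    def forward(self, x):
        flat = x.flatten(1)
        mu, logvar = self.enc_mu(flat), self.enc_logvar(flat)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z).view_as(x)

def adversary_payoff(cnn, vae, x, y, alpha):
    """Adversary gains from CNN misclassification, pays for manipulation size."""
    x_adv = x + alpha * (vae(x) - x)            # VAE-derived manipulation, scaled by alpha
    return F.cross_entropy(cnn(x_adv), y) - 0.1 * alpha ** 2, x_adv

# Synthetic stand-in data (assumption: 28x28 grayscale images, 10 classes).
x = torch.randn(64, 1, 28, 28)
y = torch.randint(0, 10, (64,))

cnn, vae = TinyCNN(), TinyVAE()
opt_cnn = torch.optim.Adam(cnn.parameters(), lr=1e-3)
alpha, temp = 0.5, 1.0                          # manipulation strength, annealing temperature

for _ in range(20):
    # Adversary (leader): simulated-annealing move on the manipulation strength.
    payoff, _ = adversary_payoff(cnn, vae, x, y, alpha)
    cand = min(max(alpha + random.uniform(-0.1, 0.1), 0.0), 1.0)
    cand_payoff, _ = adversary_payoff(cnn, vae, x, y, cand)
    if cand_payoff > payoff or random.random() < math.exp((cand_payoff - payoff).item() / temp):
        alpha = cand                            # accept improving (or occasionally worse) moves
    temp *= 0.9                                 # cool the annealing schedule

    # Classifier (follower): retrain on the adversarially manipulated data.
    _, x_adv = adversary_payoff(cnn, vae, x, y, alpha)
    loss = F.cross_entropy(cnn(x_adv.detach()), y)
    opt_cnn.zero_grad()
    loss.backward()
    opt_cnn.step()
```

The alternation mirrors the game described in the abstract: the adversary moves first and the classifier responds, with the annealing temperature controlling how readily the adversary accepts worse payoffs early in the search.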