Cognitive data augmentation for adversarial defense via pixel masking
- Authors
- Mayank Vatsa, Akshay Agarwal, Richa Singh, and Nalini K. Ratha
- Subjects
Artificial neural network, Computer science, Deep learning, Machine learning, Convolutional neural network, Artificial intelligence, Robustness (computer science), Signal processing, Computer vision and pattern recognition, Sensitivity (control systems), Gradient descent, Software, Dropout (neural networks), Vulnerability (computing)
- Abstract
The vulnerability of deep networks to adversarial perturbations has motivated researchers to design detection and mitigation algorithms. Inspired by the dropout and dropconnect algorithms as well as augmentation techniques, this paper presents "PixelMask" based data augmentation as an efficient method of reducing the sensitivity of convolutional neural networks (CNNs) to adversarial attacks. In the proposed approach, samples generated using PixelMask are used as augmented data, which helps in learning robust CNN models. Experiments performed with multiple databases and architectures show that the proposed PixelMask based data augmentation improves classification performance on adversarially perturbed images. The proposed defense mechanism is effective against different adversarial attacks and can easily be combined with any deep neural network (DNN) architecture to increase robustness. Its effectiveness is demonstrated in gray-box, white-box, and unseen train-test attack scenarios. For example, on the CIFAR-10 database under an adaptive attack (i.e., projected gradient descent), the proposed PixelMask improves the recognition performance of the CNN by at least 22.69%. Another advantage of the proposed algorithm over several existing defense algorithms is that it retains or even increases the classification accuracy on clean examples.
- Published
- 2021
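The PixelMask-style augmentation described in the abstract can be sketched roughly as follows. This is a minimal illustration only: the per-pixel masking probability, the masking value (zero), and the batch-doubling strategy are assumptions for the sake of the example, not the paper's exact parameterization.

```python
import numpy as np

def pixel_mask(image, mask_prob=0.1, rng=None):
    """Return a copy of `image` with a random subset of pixels zeroed out.

    image: H x W x C float array.
    mask_prob: probability that each spatial pixel is masked (assumed value).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    # One Bernoulli draw per spatial location; the mask covers all channels.
    mask = rng.random((h, w)) < mask_prob
    out = image.copy()
    out[mask] = 0.0
    return out

def augment_batch(batch, mask_prob=0.1, rng=None):
    """Append a PixelMask-perturbed copy of each image to the training batch."""
    masked = np.stack([pixel_mask(img, mask_prob, rng) for img in batch])
    return np.concatenate([batch, masked], axis=0)
```

A CNN trained on batches produced by `augment_batch` sees both clean and pixel-masked versions of each image, which is the mechanism the abstract credits with reducing sensitivity to adversarial perturbations.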