
Efficient and Robust Classification for Sparse Attacks

Authors: Beliaev, Mark; Delgosha, Payam; Hassani, Hamed; Pedarsani, Ramtin
Publication Year: 2022

Abstract

Over the past two decades, the popularity of neural networks has grown alongside their classification accuracy. In parallel, we have witnessed how fragile these very prediction models are: tiny perturbations to the inputs can cause misclassification errors across entire datasets. In this paper, we consider perturbations bounded in the $\ell_0$-norm, which have been shown to be effective attacks in the domains of image recognition, natural language processing, and malware detection. To defend against such attacks, we propose a novel defense method that consists of "truncation" and "adversarial training". We then theoretically study the Gaussian mixture setting and prove the asymptotic optimality of our proposed classifier. Motivated by the insights we obtain, we extend these components to neural network classifiers. We conduct numerical experiments in the domain of computer vision using the MNIST and CIFAR datasets, demonstrating a significant improvement in the robust classification error of neural networks.
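As a rough illustration of the truncation idea described in the abstract (a hypothetical sketch, not the authors' exact construction; the function name, the parameter k, and the use of a linear score are all assumptions), a truncated inner product drops the k coordinate contributions of largest magnitude, so an adversary who may perturb at most k coordinates cannot dominate the classification score:

    import numpy as np

    def truncated_score(w, x, k):
        # Coordinate-wise contributions to the linear score <w, x>.
        contrib = w * x
        if k == 0:
            return contrib.sum()
        # Drop the k contributions with the largest absolute value,
        # capping the influence of any k (possibly adversarial) coordinates.
        keep = np.argsort(np.abs(contrib))[:-k]
        return contrib[keep].sum()

For example, with w = np.ones(5), x = np.array([1., 1., 1., 1., 100.]), and k = 1, the outlier coordinate is discarded and the score is 4 rather than 104, which conveys why truncation limits the damage an $\ell_0$-bounded perturbation can inflict.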

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2201.09369
Document Type: Working Paper