Smoothed Inference for Adversarially-Trained Models

Authors:
Nemcovsky, Yaniv
Zheltonozhskii, Evgenii
Baskin, Chaim
Chmiel, Brian
Fishman, Maxim
Bronstein, Alex M.
Mendelson, Avi
Publication Year:
2019

Abstract

Deep neural networks are known to be vulnerable to adversarial attacks. Current defenses against such attacks are based on either implicit or explicit regularization, e.g., adversarial training. Randomized smoothing, the averaging of the classifier outputs over a random distribution centered at the sample, has been shown to guarantee the performance of a classifier subject to bounded perturbations of the input. In this work, we study the application of randomized smoothing as a way to improve performance on unperturbed data as well as to increase robustness to adversarial attacks. The proposed technique can be applied on top of any existing adversarial defense, but works particularly well with randomized approaches. We examine its performance against common white-box (PGD) and black-box (transfer and NAttack) attacks on CIFAR-10 and CIFAR-100, substantially outperforming previous art in most scenarios and performing comparably in the rest. For example, we achieve 60.4% accuracy under a PGD attack on CIFAR-10 using ResNet-20, outperforming previous art by 11.7%. Since our method is based on sampling, it lends itself well to trading off inference complexity against performance. A reference implementation of the proposed techniques is provided at https://github.com/yanemcovsky/SIAM
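To illustrate the core idea of smoothed inference described above, here is a minimal PyTorch sketch that averages a classifier's softmax outputs over Gaussian-perturbed copies of the input; the sample count directly trades inference cost against performance, as the abstract notes. The function name smoothed_predict and the defaults n_samples=32 and sigma=0.25 are illustrative assumptions, not the authors' implementation; see the linked repository for the reference code.

```python
import torch
import torch.nn.functional as F

def smoothed_predict(model, x, n_samples=32, sigma=0.25):
    """Monte-Carlo smoothed inference (sketch, not the reference code).

    Averages softmax outputs of `model` over `n_samples` copies of `x`
    perturbed by isotropic Gaussian noise with standard deviation `sigma`.
    Increasing `n_samples` raises inference cost but improves the estimate.
    """
    model.eval()
    with torch.no_grad():
        # Replicate each input n_samples times along the batch dimension:
        # (B, C, H, W) -> (B * n_samples, C, H, W), copies of a sample contiguous.
        batch = x.repeat_interleave(n_samples, dim=0)
        # Perturb every copy with Gaussian noise centered at the sample.
        noisy = batch + sigma * torch.randn_like(batch)
        probs = F.softmax(model(noisy), dim=1)
        # Average the class probabilities over the noise samples.
        probs = probs.view(x.size(0), n_samples, -1).mean(dim=1)
    return probs.argmax(dim=1), probs
```

As a sketch of the usage pattern: for a batch of CIFAR-10 images `x` and any trained (e.g., adversarially trained) classifier `model`, `smoothed_predict(model, x)` returns the smoothed predictions and the averaged class probabilities, so the wrapper can sit on top of an existing defense without modifying it.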

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.1911.07198
Document Type: Working Paper