
Multi-way Encoding for Robustness

Authors:
Kim, Donghyun
Bargal, Sarah Adel
Zhang, Jianming
Sclaroff, Stan
Publication Year:
2019

Abstract

Deep models are state-of-the-art for many computer vision tasks, including image classification and object detection. However, it has been shown that deep models are vulnerable to adversarial examples. We highlight how one-hot encoding directly contributes to this vulnerability and propose breaking away from this widely used but highly vulnerable mapping. We demonstrate that by leveraging a different output encoding, multi-way encoding, we decorrelate source and target models, making target models more secure. Our approach makes it more difficult for adversaries to find useful gradients for generating adversarial attacks. We demonstrate robustness against black-box and white-box attacks on four benchmark datasets: MNIST, CIFAR-10, CIFAR-100, and SVHN. We also present the strength of our approach in the form of an attack on model watermarking, raising challenges in detecting stolen models.

Comment: Accepted at WACV 2020
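The sketch below illustrates the core idea the abstract describes: replacing one-hot targets with a fixed, high-dimensional output encoding and classifying by nearest codeword. It is a minimal sketch in PyTorch; the random ±1 codebook, the MSE regression loss, the small MLP backbone, and all dimensions are illustrative assumptions rather than the paper's exact configuration.

```python
# A minimal sketch of multi-way output encoding, assuming random +/-1
# codewords, an MSE regression loss, and nearest-codeword decoding.
# All names and dimensions are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn

NUM_CLASSES, CODE_DIM = 10, 2000   # hypothetical high encoding dimension

# Fixed random codebook: one +/-1 codeword per class, replacing one-hot targets.
torch.manual_seed(0)
codebook = torch.randint(0, 2, (NUM_CLASSES, CODE_DIM)).float() * 2 - 1

# Any backbone works; here a small MLP that regresses inputs onto codewords.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 512),       # MNIST-sized input, for illustration
    nn.ReLU(),
    nn.Linear(512, CODE_DIM),
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images, labels):
    """One optimization step: regress outputs onto the class codewords."""
    targets = codebook[labels]             # (batch, CODE_DIM) +/-1 targets
    loss = loss_fn(model(images), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def predict(images):
    """Decode by nearest codeword: smallest Euclidean distance wins."""
    dists = torch.cdist(model(images), codebook)   # (batch, NUM_CLASSES)
    return dists.argmin(dim=1)
```

Because each model can draw its own codebook, independently trained models need not share the one-hot output geometry; this is the intuition behind decorrelating source and target models against transfer attacks.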

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1906.02033
Document Type:
Working Paper