Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency
- Publication Year :
- 2020
Abstract
- There has been a recent surge in research on adversarial perturbations that defeat Deep Neural Networks (DNNs) in machine vision; most of these perturbation-based attacks target object classifiers. Inspired by the observation that humans readily recognize objects that appear out of place in a scene, or alongside other unlikely objects, we augment the DNN with a system that learns context-consistency rules during training and checks for violations of those rules at test time. Our approach builds a set of auto-encoders, one per object class, trained so that a large discrepancy between an auto-encoder's input and its output signals that an added adversarial perturbation has violated the context-consistency rules. Experiments on PASCAL VOC and MS COCO show that our method effectively detects various adversarial attacks, achieving a high ROC-AUC (over 0.95 in most cases); this is an improvement of over 20% relative to a state-of-the-art context-agnostic method.
- Comment: The paper is accepted by ECCV 2020.
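The detection idea in the abstract — train one auto-encoder per object class and flag inputs whose reconstruction discrepancy is large — can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the `ContextAutoEncoder` class, the synthetic "context feature" vectors, and the chosen dimensions and threshold logic are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

class ContextAutoEncoder:
    """Tiny linear auto-encoder (hypothetical stand-in for the paper's
    per-class auto-encoders). It is trained on context-feature vectors
    of a single object class; at test time, a large reconstruction
    discrepancy is treated as a context-consistency violation."""

    def __init__(self, dim, hidden, lr=0.05, epochs=300):
        self.W1 = rng.normal(0.0, 0.1, (dim, hidden))  # encoder weights
        self.W2 = rng.normal(0.0, 0.1, (hidden, dim))  # decoder weights
        self.lr, self.epochs = lr, epochs

    def fit(self, X):
        # Plain gradient descent on mean squared reconstruction error.
        for _ in range(self.epochs):
            H = X @ self.W1            # encode
            E = H @ self.W2 - X        # reconstruction residual
            gW2 = H.T @ E / len(X)
            gW1 = X.T @ (E @ self.W2.T) / len(X)
            self.W1 -= self.lr * gW1
            self.W2 -= self.lr * gW2
        return self

    def discrepancy(self, x):
        # Mean squared difference between input and reconstruction.
        return float(np.mean((x @ self.W1 @ self.W2 - x) ** 2))

# Synthetic context features: the "car" class co-occurs with a
# road-like context concentrated in the first two feature dimensions.
dim = 8
car_direction = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=float)
car_context = rng.normal(0.0, 0.1, (200, dim)) + car_direction

ae = ContextAutoEncoder(dim, hidden=2).fit(car_context)

# A consistent sample reconstructs well; a sample whose context lies in
# unrelated dimensions (e.g. after a perturbation) does not.
in_context = rng.normal(0.0, 0.1, dim) + car_direction
out_context = rng.normal(0.0, 0.1, dim) + np.array(
    [0, 0, 0, 0, 1, 1, 0, 0], dtype=float)

print("in-context discrepancy: ", ae.discrepancy(in_context))
print("out-of-context discrepancy:", ae.discrepancy(out_context))
```

In the paper's setting the inputs would be learned context representations from the detection pipeline rather than synthetic vectors, and a threshold on the discrepancy would yield the ROC curve reported in the abstract.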
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2007.09763
- Document Type :
- Working Paper