
Attacking Object Detector Using A Universal Targeted Label-Switch Patch

Authors:
Shapira, Avishag
Bitton, Ron
Avraham, Dan
Zolfi, Alon
Elovici, Yuval
Shabtai, Asaf
Publication Year:
2022

Abstract

Adversarial attacks against deep learning-based object detectors (ODs) have been studied extensively in the past few years. These attacks cause the model to make incorrect predictions by placing a patch containing an adversarial pattern on the target object or anywhere within the frame. However, no prior work has proposed a misclassification attack on ODs in which the patch is applied on the target object itself. In this study, we propose a novel, universal, targeted, label-switch attack against the state-of-the-art object detector, YOLO. In our attack, we use (i) a tailored projection function that enables the adversarial patch to be placed on multiple target objects in the image (e.g., cars), each of which may be located at a different distance from the camera or viewed from a different angle, and (ii) a unique loss function capable of switching the label of the attacked objects. The proposed universal patch, which is trained in the digital domain, is transferable to the physical domain. We performed an extensive evaluation using different types of object detectors, different video streams captured by different cameras, and various target classes, and evaluated different configurations of the adversarial patch in the physical domain.
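To make the abstract's description of the label-switch objective concrete, the sketch below shows a minimal, hypothetical PyTorch loss over YOLO-style detection outputs. This is not the paper's actual formulation: the function name `label_switch_loss`, the tensor shapes, and the objectness-weighted difference of class probabilities are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def label_switch_loss(class_logits: torch.Tensor,
                      obj_scores: torch.Tensor,
                      source_idx: int,
                      target_idx: int) -> torch.Tensor:
    """Hypothetical targeted label-switch objective (illustrative, not the paper's loss).

    class_logits: (N, C) per-detection class logits from a YOLO-style head.
    obj_scores:   (N,)   objectness scores for the same detections.
    source_idx:   class index of the original label (e.g., 'car').
    target_idx:   class index the patch should force instead (e.g., 'truck').
    """
    probs = F.softmax(class_logits, dim=-1)
    # Push probability mass from the source class toward the target class,
    # weighting each detection by how confident the detector is that an
    # object is present there, so detections are relabeled rather than removed.
    per_detection = probs[:, source_idx] - probs[:, target_idx]
    return (obj_scores * per_detection).mean()
```

Under these assumptions, minimizing this quantity with respect to the patch pixels, while rendering the patch onto each target object with a projection that accounts for its distance and viewing angle, would encourage the detector to keep detecting the objects but assign them the target label.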

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2211.08859
Document Type:
Working Paper