1. A Survey and Evaluation of Adversarial Attacks for Object Detection
- Authors
Nguyen, Khoi Nguyen Tiet; Zhang, Wenyu; Lu, Kangkang; Wu, Yuhuan; Zheng, Xingjian; Tan, Hui Li; and Zhen, Liangli
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Deep learning models excel at various computer vision tasks but are susceptible to adversarial examples: subtle perturbations to input data that cause incorrect predictions. This vulnerability poses significant risks in safety-critical applications such as autonomous vehicles, security surveillance, and aircraft health monitoring. While numerous surveys cover adversarial attacks on image classification, the literature on such attacks against object detection is limited. This paper offers a comprehensive taxonomy of adversarial attacks specific to object detection, reviews existing adversarial robustness evaluation metrics, and systematically assesses open-source attack methods and model robustness. Key observations are provided to improve understanding of attack effectiveness and corresponding countermeasures. Additionally, we identify crucial research challenges to guide future efforts in securing automated object detection systems.
- Comment
14 pages
- Published
2024
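
As a concrete illustration of the "subtle perturbation" idea the abstract refers to, below is a minimal FGSM-style sketch against an off-the-shelf detector. It assumes torchvision's Faster R-CNN; the input image, target boxes/labels, and the epsilon budget are placeholders, and this is not one of the survey's evaluated attack methods.

```python
# Minimal FGSM-style perturbation of a detector input (illustrative sketch;
# the image, boxes, labels, and epsilon below are placeholders).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()  # training mode: the model returns its loss dict given targets

image = torch.rand(3, 480, 640, requires_grad=True)  # stand-in image in [0, 1]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 300.0, 360.0]]),
            "labels": torch.tensor([1])}]             # placeholder ground truth

losses = model([image], targets)   # dict of RPN and ROI-head losses
total_loss = sum(losses.values())
total_loss.backward()              # gradient of the detection loss w.r.t. the image

epsilon = 8.0 / 255.0              # L-infinity perturbation budget (assumed)
adv_image = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
```

A single signed-gradient step like this maximizes the detector's combined loss under a small L-infinity budget; iterative variants and detection-specific objectives are among the attack families the survey categorizes.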