1. Targeted Attention Attack on Deep Learning Models in Road Sign Recognition
- Author
- Shengli Zhang, Dacheng Tao, Wei Liu, Weifeng Liu, and Xinghao Yang
- Subjects
- Artificial neural network, Computer Networks and Communications, Computer science, Deep learning, Machine learning, Traffic sign recognition, Road sign recognition, Signal Processing, Hardware and Architecture, Computer Science Applications, Information Systems
- Abstract
- Real-world traffic sign recognition is an important step toward building autonomous vehicles, most of which depend heavily on deep neural networks (DNNs). Recent studies have demonstrated that DNNs are surprisingly susceptible to adversarial examples. Many attack methods have been proposed to understand and generate adversarial examples, such as gradient-based, score-based, decision-based, and transfer-based attacks. However, most of these algorithms are ineffective for real-world road sign attacks, because 1) iteratively learning a perturbation for each frame is not realistic for a fast-moving car, and 2) most optimization algorithms traverse all pixels equally without considering their diverse contributions. To alleviate these problems, this article proposes the targeted attention attack (TAA) method for real-world road sign attacks. Specifically, we make the following contributions: 1) we leverage a soft attention map to highlight the important pixels and skip zero-contribution areas, which also helps to generate natural perturbations; 2) we design an efficient universal attack that optimizes a single perturbation/noise over a set of training images under the guidance of the pretrained attention map; 3) we design a simple objective function that can be easily optimized; and 4) we evaluate the effectiveness of TAA on real-world data sets. Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method. Additionally, our TAA also exhibits desirable properties, e.g., transferability and generalization capability. We provide code and data to ensure reproducibility: https://github.com/AdvAttack/RoadSignAttack.
- Published
- 2021
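The abstract describes an attention-guided universal attack: a single perturbation is optimized over a set of training images, with a pretrained soft attention map restricting the noise to high-contribution pixels. The sketch below illustrates that general idea only; it is not the authors' released implementation (their exact objective and the RP2 baseline are in the paper and repository above), and the names `model`, `attn`, `train_loader`, and `target_class` are assumed placeholders.

```python
# Minimal PyTorch sketch of an attention-masked universal targeted perturbation.
# Hypothetical illustration, not the TAA reference code.
import torch
import torch.nn.functional as F

def attention_guided_universal_perturbation(model, train_loader, attn, target_class,
                                            steps=200, lr=0.01, reg=0.05, device="cpu"):
    model.eval()
    attn = attn.to(device)  # soft attention map in [0, 1], same spatial size as inputs
    # One shared (universal) perturbation for the whole training set.
    delta = torch.zeros(1, 3, attn.shape[-2], attn.shape[-1],
                        device=device, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for images, _ in train_loader:
            images = images.to(device)
            # The attention map confines the noise to important pixels.
            masked_delta = delta * attn
            adv = torch.clamp(images + masked_delta, 0.0, 1.0)
            logits = model(adv)
            target = torch.full((images.size(0),), target_class,
                                dtype=torch.long, device=device)
            # Push predictions toward the target class while keeping the noise small.
            loss = F.cross_entropy(logits, target) + reg * masked_delta.norm(p=2)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return (delta * attn).detach()
```

Because the perturbation is trained once over many images rather than per frame, it can be applied to a printed sign and reused across frames, which is the property the abstract argues matters for a moving vehicle.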