Imperceptible adversarial attacks against traffic scene recognition.

Authors :
Zhu, Yinghui
Jiang, Yuzhen
Source :
Soft Computing - A Fusion of Foundations, Methodologies & Applications. Oct 2021, Vol. 25, Issue 20, p13069-13077. 9p.
Publication Year :
2021

Abstract

Adversarial examples have begun to receive widespread attention owing to their potential to compromise the most popular DNNs. They are crafted from original images by embedding well-calculated perturbations. In some cases, the perturbations are so slight that neither human eyes nor detection algorithms can notice them, and this imperceptibility makes them more covert and dangerous. To investigate the invisible dangers in applications of traffic DNNs, we focus on imperceptible adversarial attacks on different traffic vision tasks, including traffic sign classification, lane detection and street scene recognition. We propose a universal logits-map-based attack architecture against image semantic segmentation and design two targeted attack approaches based on it. All the attack algorithms generate micro-noise adversarial examples by the iterative method of C&W optimization and achieve a 100% attack success rate with very low distortion: our experimental results indicate that the MAE (mean absolute error) of the perturbation noise for the traffic sign classifier attack is as low as 0.562, and for the two algorithms based on semantic segmentation only 1.503 and 1.574. We believe that our research on imperceptible adversarial attacks offers a useful reference for the security of DNN applications. [ABSTRACT FROM AUTHOR]
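The abstract describes iterative C&W-style optimization: a perturbation is optimized to flip the model's prediction toward a target class while an L2 penalty keeps the noise small (hence the low MAE values reported). The following is a minimal, hypothetical sketch of that idea on a toy *linear* classifier with hand-written gradients; the paper itself attacks DNN traffic classifiers and segmentation models, and the function name, parameters, and toy setup here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cw_style_targeted_attack(x, W, b, target, steps=1000, lr=0.02, c=5.0, kappa=1.0):
    """Toy C&W-style targeted attack on a linear classifier (logits = W @ x + b).

    Minimizes  ||delta||_2^2 + c * max(max_{j != target} logit_j - logit_target, -kappa)
    by plain gradient descent on the additive perturbation delta.
    (Hypothetical sketch; the paper uses this style of objective against DNNs.)
    """
    delta = np.zeros_like(x, dtype=float)
    for _ in range(steps):
        logits = W @ (x + delta) + b
        # strongest competing (non-target) class
        others = np.delete(logits, target)
        j = int(np.argmax(others))
        j_full = j if j < target else j + 1  # map back into full logit indexing
        margin = logits[j_full] - logits[target]
        grad = 2.0 * delta  # gradient of the L2 distortion penalty
        if margin > -kappa:  # not yet confidently the target class: keep pushing
            grad = grad + c * (W[j_full] - W[target])
        delta = delta - lr * grad
    return x + delta, delta
```

The mean absolute error the abstract reports would correspond here to `np.mean(np.abs(delta))` over the perturbation, i.e., the average per-element noise magnitude.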

Details

Language :
English
ISSN :
1432-7643
Volume :
25
Issue :
20
Database :
Academic Search Index
Journal :
Soft Computing - A Fusion of Foundations, Methodologies & Applications
Publication Type :
Academic Journal
Accession number :
152605775
Full Text :
https://doi.org/10.1007/s00500-021-06148-8