
Reconstruction-Based Adversarial Attack Detection in Vision-Based Autonomous Driving Systems

Authors :
Manzoor Hussain
Jang-Eui Hong
Source :
Machine Learning and Knowledge Extraction, Vol. 5, Iss. 4, pp. 1589-1611 (2023)
Publication Year :
2023
Publisher :
MDPI AG

Abstract

The perception system is a safety-critical component that directly impacts the overall safety of autonomous driving systems (ADSs). It is therefore imperative to ensure the robustness of the deep-learning models used in the perception system. However, studies have shown that these models are highly vulnerable to adversarial perturbation of the input data. Existing work has mainly focused on the impact of such adversarial attacks on classification rather than regression models. Therefore, this paper first introduces two generalized methods for perturbation-based attacks: (1) we use naturally occurring noise to create perturbations in the input data, and (2) we introduce modified versions of the square, HopSkipJump, and decision-based/boundary attacks to attack the regression models used in ADSs. We then propose a deep-autoencoder-based adversarial attack detector. In addition to offline evaluation metrics (e.g., F1 score and precision), we introduce an online evaluation framework to assess the robustness of the model under attack. The framework uses the reconstruction loss of the deep autoencoder to validate the robustness of the models under attack in an end-to-end fashion at runtime. Our experimental results showed that the proposed adversarial attack detector can detect square, HopSkipJump, and decision-based/boundary attacks with a true positive rate (TPR) of 93%.

Details

Language :
English
ISSN :
2504-4990
Volume :
5
Issue :
4
Database :
Directory of Open Access Journals
Journal :
Machine Learning and Knowledge Extraction
Publication Type :
Academic Journal
Accession number :
edsdoj.0f227c7f488e4d73bbdafd448a8ab134
Document Type :
article
Full Text :
https://doi.org/10.3390/make5040080