
Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction

Authors:
Chernikova, Alesia
Oprea, Alina
Nita-Rotaru, Cristina
Kim, BaekGyu
Publication Year:
2019

Abstract

Deep Neural Networks (DNNs) have tremendous potential in advancing the vision for self-driving cars. However, the security of DNN models in this context has major safety implications and needs to be better understood. We consider the case study of steering angle prediction from camera images, using the dataset from the 2014 Udacity challenge. We demonstrate for the first time adversarial testing-time attacks on this application in both classification and regression settings. We show that minor modifications of the camera image (an L2 distance of 0.82 for one of the considered models) result in misclassification of an image into any class of the attacker's choice. Furthermore, our regression attack results in a significant increase in Mean Square Error (MSE), by a factor of 69 in the worst case.

Comment: Preprint of the work accepted for publication at the IEEE Workshop on the Internet of Safe Things, San Francisco, CA, USA, May 23, 2019
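The record does not detail the paper's attack construction, but evasion attacks of this kind typically perturb the input in the direction of the model's gradient. As a hedged illustration only, here is a minimal FGSM-style sketch against a toy linear stand-in for a steering regressor; the model, image, and epsilon are all hypothetical and not taken from the paper:

```python
import numpy as np

def fgsm_attack(x, grad, eps):
    """One-step, FGSM-style evasion: shift each pixel by eps in the
    sign of the gradient, then clip back to the valid pixel range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=64)       # hypothetical weights of a linear "steering model"
x = rng.uniform(size=64)      # hypothetical flattened camera image, pixels in [0, 1]
y_true = w @ x                # treat the clean prediction as the reference angle

# For the linear model y = w @ x, the gradient of the prediction
# w.r.t. the input is simply w, so pushing along sign(w) drives
# the predicted steering angle away from its clean value.
x_adv = fgsm_attack(x, w, eps=0.03)

mse_clean = (w @ x - y_true) ** 2      # zero by construction
mse_adv = (w @ x_adv - y_true) ** 2    # grows with the perturbation
```

Against a real DNN the gradient would come from backpropagation rather than a closed form, and the paper's attack and its reported factor-of-69 MSE increase concern a trained steering model, not this toy setup.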

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1904.07370
Document Type:
Working Paper