1. Adversarial Scratches: Deployable Attacks to CNN Classifiers
- Authors
- Loris Giulivi, Malhar Jere, Loris Rossi, Farinaz Koushanfar, Gabriela Ciocarlie, Briland Hitaj, and Giacomo Boracchi
- Subjects
- FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Machine Learning (cs.LG), Cryptography and Security (cs.CR), Artificial Intelligence, Signal Processing, Software, I.4, I.5, Adversarial perturbations, Adversarial attacks, Deep learning, Convolutional neural networks, Bézier curves
- Abstract
A growing body of work has shown that deep neural networks are susceptible to adversarial examples. These take the form of small perturbations applied to the model's input which lead to incorrect predictions. Unfortunately, most of the literature focuses on visually imperceptible perturbations applied to digital images, which are often, by design, impossible to deploy against physical targets. We present Adversarial Scratches: a novel L0 black-box attack, which takes the form of scratches in images, and which possesses much greater deployability than other state-of-the-art attacks. Adversarial Scratches leverage Bézier curves to reduce the dimension of the search space and, optionally, constrain the attack to a specific location. We test Adversarial Scratches in several scenarios, including a publicly available API and images of traffic signs. Results show that our attack often achieves a higher fooling rate than other deployable state-of-the-art methods, while requiring significantly fewer queries and modifying very few pixels.
- Comment
- This work is published in Pattern Recognition (Elsevier). This paper stems from "Scratch that! An Evolution-based Adversarial Attack against Neural Networks", for which an arXiv preprint is available at arXiv:1912.02316. Further studies led to a complete overhaul of the work, resulting in this paper.
- Published
- 2022
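
The record above contains no code. As a rough illustration of the idea described in the abstract, the sketch below shows how a quadratic Bézier curve (three control points) could parametrize a scratch-like perturbation that modifies only a handful of pixels. The function names, the choice of a quadratic curve, and the fixed scratch color are illustrative assumptions, not the authors' implementation; in the paper, such parameters are optimized through black-box queries to the target classifier.

```python
# Illustrative sketch (not the authors' code): a scratch rendered as a
# quadratic Bézier curve, overwriting few pixels (an L0-style perturbation).
import numpy as np

def bezier_points(p0, p1, p2, n=200):
    """Sample n points along a quadratic Bézier curve with control
    points p0, p1, p2, each given as an (x, y) pair."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    # Quadratic Bézier formula: (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def apply_scratch(image, p0, p1, p2, color=(255, 0, 0)):
    """Overwrite the pixels along the curve with a fixed color,
    leaving the rest of the image untouched."""
    attacked = image.copy()
    h, w = attacked.shape[:2]
    for x, y in bezier_points(p0, p1, p2):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            attacked[yi, xi] = color
    return attacked

# Usage: a black-box search (e.g., an evolutionary strategy) would tune the
# control points and color until the classifier's prediction changes.
image = np.zeros((224, 224, 3), dtype=np.uint8)
perturbed = apply_scratch(image, (10, 200), (112, 10), (214, 200))
```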