Perturbation Analysis of Learning Algorithms: Generation of Adversarial Examples From Classification to Regression.
- Source :
- IEEE Transactions on Signal Processing; Dec 2019, Vol. 67 Issue 23, p6078-6091, 14p
- Publication Year :
- 2019
-
Abstract
- Despite the tremendous success of deep neural networks in various learning problems, it has been observed that adding intentionally designed adversarial perturbations to the inputs of these architectures leads to erroneous classification with high confidence in the prediction. In this work, we show that adversarial examples can be generated using a generic approach that relies on the perturbation analysis of learning algorithms. Formulated as a convex program, the proposed approach recovers many current adversarial attacks as special cases. It is used to propose novel attacks against learning algorithms for classification and regression tasks under various new constraints, with closed-form solutions in many instances. In particular, we derive new attacks against classification algorithms which are shown to be top-performing on various architectures. Although classification tasks have been the main focus of adversarial attacks, we use the proposed approach to generate adversarial perturbations for various regression tasks. Designed for single-pixel and single-subset attacks, these attacks are applied to autoencoding, image colorization, and real-time object detection tasks, showing that adversarial perturbations can degrade the output of regression tasks just as severely. In the spirit of encouraging reproducible research, the implementations used in this paper have been made available at: github.com/ebalda/adversarialconvex. [ABSTRACT FROM AUTHOR]
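- The abstract's claim that the convex formulation admits closed-form solutions can be illustrated on a toy example. The sketch below (not the paper's code; the linear classifier, loss, and ε value are all hypothetical choices for illustration) linearizes the loss around the input, so the attack reduces to maximizing a linear objective over an ℓ∞ ball, whose closed-form maximizer is the sign of the gradient, an FGSM-style step that the paper's framework recovers as a special case:

```python
import numpy as np

# Hypothetical toy setup: a fixed linear softmax classifier, not one of the
# architectures from the paper. Chosen so the closed-form attack is easy to see.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))   # 3 classes, 5 input features
x = rng.standard_normal(5)

def ce_loss(x_, label):
    """Cross-entropy loss of the linear classifier at input x_."""
    logits = W @ x_
    logits = logits - logits.max()          # numerical stability
    return np.log(np.exp(logits).sum()) - logits[label]

def input_grad(x_, label):
    """Gradient of the cross-entropy loss with respect to the input."""
    logits = W @ x_
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.eye(W.shape[0])[label]
    return W.T @ (p - onehot)

label = int(np.argmax(W @ x))               # attack the current prediction
g = input_grad(x, label)

# Linearized attack under an l_inf budget:
#   max_eta  g^T eta   s.t.  ||eta||_inf <= eps
# has the closed-form maximizer eta = eps * sign(g).
eps = 0.5
eta = eps * np.sign(g)

print(ce_loss(x, label), ce_loss(x + eta, label))
```

Because the cross-entropy loss of a linear classifier is convex in the input, the first-order bound guarantees the perturbed loss rises by at least ε·‖g‖₁, so the adversarial step provably increases the loss here; for deep networks the linearization is only local, which is why the paper's general convex-program view is needed.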
Details
- Language :
- English
- ISSN :
- 1053-587X
- Volume :
- 67
- Issue :
- 23
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Signal Processing
- Publication Type :
- Academic Journal
- Accession number :
- 140859088
- Full Text :
- https://doi.org/10.1109/TSP.2019.2943232