
Adversarial examples detection through the sensitivity in space mappings.

Authors :
Li, Xurong
Ji, Shouling
Ji, Juntao
Ren, Zhenyu
Wu, Chunming
Li, Bo
Wang, Ting
Source :
IET Computer Vision (Wiley-Blackwell); 2020, Vol. 14 Issue 5, p201-213, 13p
Publication Year :
2020

Abstract

Adversarial examples (AEs) against deep neural networks (DNNs) raise wide concerns about the robustness of DNNs. Existing detection mechanisms are often limited to a given attack algorithm, so it is highly desirable to develop a robust detection approach that remains effective against a large group of attack algorithms. In addition, most existing defences only perform well on small images (e.g. MNIST and Canadian Institute for Advanced Research (CIFAR)) rather than large images (e.g. ImageNet). In this paper, the authors propose a robust and effective defence method that analyses the sensitivity of various AEs, especially in the much harder case of large images. Their method first maps each input from the input space to a new feature space using 19 different feature mapping methods. A detector is then learned with a machine-learning algorithm to recognise the unique distribution of AEs. Extensive evaluations show that the detector achieves: (i) a low false-positive rate (<1%), (ii) a high true-positive rate (above 98%), (iii) low overhead (<0.1 s per input), and (iv) good robustness (it works well across different learning models, attack algorithms, and parameters), which demonstrates the efficacy of the proposed detector in practice. [ABSTRACT FROM AUTHOR]
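To make the pipeline in the abstract concrete, below is a minimal Python sketch of the detection idea: transform each input with several feature mappings, measure how sensitive the input is to those mappings, and train a classifier on the resulting feature vectors. The three mappings shown (bit-depth reduction, mean smoothing, coarse image statistics) and the random-forest classifier are illustrative stand-ins chosen for this sketch; the paper's actual 19 mapping methods and learning algorithm are not reproduced here, and all function names are hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def feature_maps(x):
    # x: image as floats in [0, 1] with shape (H, W, C).
    # Stand-in mappings only; the paper uses 19 mapping methods.
    reduced = np.round(x * 7) / 7  # 3-bit depth reduction
    # 2x2 mean filter via shifted slices
    smoothed = (x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:]) / 4
    return np.array([
        np.abs(x - reduced).mean(),             # sensitivity to quantisation
        np.abs(x[:-1, :-1] - smoothed).mean(),  # sensitivity to smoothing
        x.mean(),                               # coarse per-image statistics
        x.std(),
    ])

def train_detector(benign, adversarial):
    # Fit a classifier to separate benign from adversarial feature vectors.
    X = np.stack([feature_maps(x) for x in list(benign) + list(adversarial)])
    y = np.array([0] * len(benign) + [1] * len(adversarial))
    return RandomForestClassifier(n_estimators=100).fit(X, y)

# Usage: flag an input as adversarial if the detector predicts class 1.
# is_ae = detector.predict(feature_maps(x)[None, :])[0] == 1

The intuition this sketch captures is the one the abstract states: AEs sit close to decision boundaries, so their predictions and statistics shift more under small space mappings than those of benign inputs, and that shift is learnable.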

Details

Language :
English
ISSN :
1751-9632
Volume :
14
Issue :
5
Database :
Complementary Index
Journal :
IET Computer Vision (Wiley-Blackwell)
Publication Type :
Academic Journal
Accession number :
144946905
Full Text :
https://doi.org/10.1049/iet-cvi.2019.0378