
SIT: Stochastic Input Transformation to Defend Against Adversarial Attacks on Deep Neural Networks

Authors:
Amira Guesmi
Ihsen Alouani
Mouna Baklouti
Tarek Frikha
Mohamed Abid
Université de Sfax (University of Sfax)
Institut d'Électronique, de Microélectronique et de Nanotechnologie - UMR 8520 (IEMN): Centrale Lille, Université de Lille, CNRS, Université Polytechnique Hauts-de-France (UPHF), JUNIA
COMmunications NUMériques - IEMN (COMNUM - IEMN)
INSA Hauts-de-France, Institut National des Sciences Appliquées (INSA)
Université catholique de Lille (UCL)
Source :
IEEE Design & Test, 2022, 39, pp. 63-72. ⟨10.1109/MDAT.2021.3077542⟩
Publication Year :
2022
Publisher :
HAL CCSD, 2022.

Abstract

Deep Neural Networks (DNNs) have been deployed in a wide range of applications, including safety-critical domains, owing to their proven efficiency in solving complex problems. However, these systems have been shown to be vulnerable to adversarial attacks: carefully crafted perturbations that threaten their integrity and trustworthiness. Several defenses have recently been proposed, but most are costly to deploy since they require retraining and specific fine-tuning procedures. While pre-processing defenses exist that do not require retraining, they have been shown to be ineffective against adaptive white-box attacks. In this paper, we propose a model-agnostic defense against adversarial attacks based on stochastic pre-processing. Through a down-sampling/up-sampling process, we transform the input into a new sample that is (i) close enough to the initial input to be classified correctly, and (ii) different enough to discard any potential adversarial noise within it. The proposed defense is generic, easy to deploy, and requires no specific training or fine-tuning. We evaluated our technique against state-of-the-art defenses under grey-box and strong white-box scenarios. Experimental results show that our defense achieves robustness of up to 94% and 93% against PGD and C&W attacks, respectively, under the strong white-box scenario.
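The core idea described in the abstract, transforming the input via random down-sampling followed by up-sampling back to the original resolution, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name `sit_transform`, the scale range, and the nearest-neighbour resampling are all assumptions for the sake of a runnable example.

```python
import numpy as np

def sit_transform(image: np.ndarray, scale_range=(0.5, 0.9), rng=None):
    """Stochastic input transformation (illustrative sketch).

    Down-samples the image by a random factor drawn from `scale_range`,
    then up-samples back to the original size with nearest-neighbour
    interpolation. The output stays visually close to the input while
    disturbing pixel-level adversarial perturbations.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    scale = rng.uniform(*scale_range)  # random down-sampling factor
    small_h = max(1, int(h * scale))
    small_w = max(1, int(w * scale))

    # Nearest-neighbour down-sampling: pick a subset of source rows/cols.
    rows = np.arange(small_h) * h // small_h
    cols = np.arange(small_w) * w // small_w
    small = image[rows][:, cols]

    # Nearest-neighbour up-sampling back to the original (h, w).
    rows_up = np.arange(h) * small_h // h
    cols_up = np.arange(w) * small_w // w
    return small[rows_up][:, cols_up]
```

Because the transformation is resampled on every call, an adaptive attacker cannot optimize a perturbation against a fixed pre-processing step, which is what makes the stochastic variant harder to bypass than a deterministic filter.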

Details

Language :
English
ISSN :
21682356
Database :
OpenAIRE
Journal :
IEEE Design & Test, 2022, 39, pp. 63-72. ⟨10.1109/MDAT.2021.3077542⟩
Accession number :
edsair.doi.dedup.....e9319c6b6b0be3c9ce7e834a33a6e12a