
Architectural Backdoors in Neural Networks

Authors:
Bober-Irizar, Mikel
Shumailov, Ilia
Zhao, Yiren
Mullins, Robert
Papernot, Nicolas
Publication Year:
2022

Abstract

Machine learning is vulnerable to adversarial manipulation. Previous literature has demonstrated that at the training stage attackers can manipulate data and data sampling procedures to control model behaviour. A common attack goal is to plant backdoors, i.e., force the victim model to learn to recognise a trigger known only to the adversary. In this paper, we introduce a new class of backdoor attacks that hide inside model architectures, i.e., in the inductive bias of the functions used to train the model. These backdoors are simple to implement, for instance by publishing open-source code for a backdoored model architecture that others will reuse unknowingly. We demonstrate that model architectural backdoors represent a real threat and, unlike other approaches, can survive a complete re-training from scratch. We formalise the main construction principles behind architectural backdoors, such as a link between the input and the output, and describe some possible protections against them. We evaluate our attacks on computer vision benchmarks of different scales and demonstrate that the underlying vulnerability is pervasive in a variety of training settings.
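
To make the idea concrete, the sketch below shows a toy PyTorch classifier in which the backdoor lives in the architecture rather than in the learned weights: a fixed, parameter-free side path links the raw input directly to the output logits and fires only when a specific trigger pattern is present. The class name, trigger pattern, and gating constants are illustrative assumptions for this record, not the construction used in the paper.

    # Hypothetical sketch of an architectural backdoor. All names, the
    # checkerboard trigger, and the constants are illustrative assumptions,
    # not the paper's actual construction.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class BackdooredClassifier(nn.Module):
        """Toy CNN whose architecture contains a direct input-to-output path."""

        def __init__(self, num_classes: int = 10, target_class: int = 0):
            super().__init__()
            self.target_class = target_class
            # Ordinary learned backbone.
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.classifier = nn.Linear(16 * 4 * 4, num_classes)
            # Fixed (non-learned) trigger detector: a high-frequency filter
            # applied to the raw input. The "link between the input and the
            # output" lives in the architecture, not in trainable weights.
            kernel = torch.tensor([[1.0, -1.0], [-1.0, 1.0]]).repeat(1, 3, 1, 1)
            self.register_buffer("trigger_kernel", kernel)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            logits = self.classifier(torch.flatten(self.features(x), 1))
            # Side path: responds strongly only when a checkerboard trigger
            # sits in the top-left 8x8 corner of the input.
            response = F.conv2d(x[:, :, :8, :8], self.trigger_kernel)
            response = response.abs().mean(dim=(1, 2, 3))
            gate = torch.sigmoid(20.0 * (response - 1.0))  # ~1 if trigger present
            bias = torch.zeros_like(logits)
            bias[:, self.target_class] = 1.0
            # When the trigger fires, a large constant bias overrides the
            # learned logits and forces the attacker's target class.
            return logits + 10.0 * gate.unsqueeze(1) * bias


    if __name__ == "__main__":
        model = BackdooredClassifier()
        clean = torch.rand(1, 3, 32, 32) * 0.1
        triggered = clean.clone()
        # Stamp an 8x8 checkerboard trigger into the corner.
        checker = (torch.arange(8)[:, None] + torch.arange(8)[None, :]) % 2
        triggered[:, :, :8, :8] = checker.float()
        print(model(clean).argmax(1), model(triggered).argmax(1))

Because the side path has no trainable parameters, retraining the model from scratch leaves the trigger behaviour unchanged, which is the survival property the abstract highlights.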

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2206.07840
Document Type:
Working Paper