Backdoors hidden in facial features: a novel invisible backdoor attack against face recognition systems
- Source: Peer-to-Peer Networking and Applications, 14:1458-1474
- Publication Year: 2021
- Publisher: Springer Science and Business Media LLC
Abstract
- Deep neural network (DNN) based face recognition systems have become one of the most popular modalities for user identity authentication. However, recent studies have shown that malicious attackers can inject specific backdoors into the DNN model of a face recognition system, which is known as a backdoor attack. The attacker can then trigger the backdoors and impersonate someone else to log into the system, without affecting the normal usage of legitimate users. Existing studies use accessories (such as purple sunglasses or a bandanna) as the triggers of their backdoor attacks; these triggers are visually conspicuous and easily perceptible by humans, causing the backdoor attacks to fail. In this paper, for the first time, we exploit facial features as the carriers to embed backdoors, and propose a novel backdoor attack method named BHF2 (Backdoor Hidden in Facial Features). BHF2 constructs masks with the shapes of facial features (eyebrows and beard), and then injects the backdoors into the masks to ensure visual stealthiness. Further, to make the backdoors look more natural, we propose the BHF2N (Backdoor Hidden in Facial Features Naturally) method, which exploits an artificial intelligence (AI) based tool to automatically embed natural backdoors. The generated backdoors are visually stealthy, which guarantees the concealment of the backdoor attacks. The proposed methods (BHF2 and BHF2N) can be applied in black-box attack scenarios, in which a malicious adversary has no knowledge of the target face recognition system. Moreover, the proposed attack methods are feasible for strict identity authentication scenarios where accessories are not permitted. Experimental results on two state-of-the-art face recognition models show that the maximum success rate of the proposed attack reaches 100% on the DeepID1 and VGGFace models, while the accuracy degradation of the target recognition models is as low as 0.01% (DeepID1) and 0.02% (VGGFace), respectively. Meanwhile, the generated backdoors achieve visual stealthiness: the pixel change rate of a backdoor instance relative to its clean face image is as low as 0.16%, and the structural similarity and dHash similarity scores reach up to 98.82% and 98.19%, respectively.
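- The core of BHF2, as the abstract describes it, is blending a trigger into a mask shaped like a facial feature so that the change stays visually subtle. The sketch below is a minimal illustration of that idea, not the authors' implementation: the rectangular mask region, the blend strength alpha, the image size, and the function name embed_trigger are all assumptions.

```python
# Minimal sketch of BHF2-style trigger embedding (illustrative, not the
# paper's code): blend a trigger pattern into a facial-feature-shaped mask.
import numpy as np

def embed_trigger(face, mask, trigger, alpha=0.1):
    """Blend `trigger` into `face` inside the boolean `mask` region.

    face    -- H x W x 3 uint8 clean face image
    mask    -- H x W boolean array shaped like a facial feature
               (e.g. an eyebrow or beard region)
    trigger -- H x W x 3 uint8 trigger pattern
    alpha   -- blend strength; small values keep the change subtle
    """
    blended = (1 - alpha) * face.astype(np.float32) + alpha * trigger
    out = np.where(mask[..., None], blended, face.astype(np.float32))
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy usage with a rectangular stand-in for an eyebrow-shaped mask.
face = np.zeros((112, 96, 3), dtype=np.uint8)     # illustrative image size
mask = np.zeros(face.shape[:2], dtype=bool)
mask[30:36, 20:45] = True                         # assumed eyebrow region
trigger = np.random.randint(0, 256, face.shape, dtype=np.uint8)
poisoned = embed_trigger(face, mask, trigger)
```

- In a standard backdoor-poisoning workflow, such poisoned images would be labeled with the attacker's target identity and mixed into the training data, so the model learns to associate the feature-shaped trigger with that identity; the abstract's black-box setting means this is done without knowledge of the target model's internals.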
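- The stealthiness figures in the abstract (pixel change rate, structural similarity, dHash similarity) correspond to standard image-comparison measures. The following sketch uses their common definitions via NumPy, Pillow, and scikit-image; the paper's exact measurement procedure is an assumption here and may differ.

```python
# Common definitions of the stealthiness metrics named in the abstract;
# the paper's precise measurement setup is assumed, not confirmed.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

def pixel_change_rate(clean, poisoned):
    """Fraction of pixel positions whose value changed in any channel."""
    return float(np.any(clean != poisoned, axis=-1).mean())

def dhash_bits(img, size=8):
    """Difference hash: compare horizontally adjacent pixels of a
    downscaled grayscale image, yielding size*size bits."""
    gray = Image.fromarray(img).convert("L").resize((size + 1, size))
    arr = np.asarray(gray, dtype=np.int16)
    return (arr[:, 1:] > arr[:, :-1]).ravel()

def dhash_similarity(a, b):
    """1 minus the normalized Hamming distance of the dHash bit strings."""
    ha, hb = dhash_bits(a), dhash_bits(b)
    return 1.0 - np.count_nonzero(ha != hb) / ha.size

def ssim_score(a, b):
    """Structural similarity (SSIM) on grayscale versions of both images."""
    ga = np.asarray(Image.fromarray(a).convert("L"))
    gb = np.asarray(Image.fromarray(b).convert("L"))
    return structural_similarity(ga, gb, data_range=255)
```

- On the toy example above, pixel_change_rate(face, poisoned) stays small because only the masked region is touched, which mirrors in spirit the 0.16% figure the abstract reports for a backdoor instance.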
- Subjects:
Authentication
Exploit
Artificial neural network
Computer Networks and Communications
Computer science
Computer security
Facial recognition system
Image (mathematics)
Face (geometry)
Identity (object-oriented programming)
Software
Backdoor
Details
- ISSN: 1936-6450 and 1936-6442
- Volume: 14
- Database: OpenAIRE
- Journal: Peer-to-Peer Networking and Applications
- Accession number: edsair.doi...........c4e0082ef965c5f71b3d91711ac3cbc7
- Full Text: https://doi.org/10.1007/s12083-020-01031-z