1. LinkBreaker: Breaking the Backdoor-Trigger Link in DNNs via Neurons Consistency Check
- Author
Chen, Z., Wang, S., Fu, A., Gao, Y., Yu, S., and Deng, R. H.
- Abstract
Backdoor attacks cause models to misbehave by first implanting backdoors in deep neural networks (DNNs) during training and then activating them via trigger-carrying samples during inference. The compromised models can pose serious security risks to artificial intelligence systems, such as misidentifying a 'stop' traffic sign as '80 km/h'. In this paper, we investigate the connection between the backdoor and the trigger in DNNs and observe that the backdoor is implanted by establishing a link between a cluster of neurons, representing the backdoor, and the trigger. Based on this observation, we design LinkBreaker, a new generic scheme for defending against backdoor attacks. In particular, LinkBreaker deploys a neuron consistency check mechanism to identify the compromised neuron set related to the trigger. LinkBreaker then regulates the model to make predictions based on the benign neuron set only, thereby breaking the link between the backdoor and the trigger. Compared to previous defenses, LinkBreaker offers a more general backdoor countermeasure that is effective not only against input-agnostic backdoors but also against source-specific backdoors, the latter of which cannot be defeated by the majority of state-of-the-art defenses. In addition, LinkBreaker is robust against adversarial examples and thus, to a large extent, provides a holistic defense against adversarial attacks on DNNs, a capability that almost all current backdoor defenses lack. Extensive experimental evaluations on real datasets demonstrate that LinkBreaker suppresses trigger inputs with high efficacy while incurring no noticeable accuracy deterioration on benign inputs.
- Published
- 2022
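The abstract describes the defense only at a high level. Below is a minimal, hypothetical sketch (not the paper's actual algorithm) of the general idea: flag neurons whose activations shift inconsistently between benign inputs and suspected trigger inputs, then mask them so predictions rely on the remaining (benign) neuron set. The function names, the mean-shift criterion, and the top-k threshold are all assumptions for illustration.

```python
import numpy as np

def flag_inconsistent_neurons(benign_acts, suspect_acts, k=10):
    """Flag the k neurons whose mean activation shifts most between
    benign inputs and suspected trigger inputs (hypothetical criterion,
    not the consistency check defined in the paper).

    benign_acts, suspect_acts: (num_samples, num_neurons) activation
    matrices recorded at one hidden layer.
    """
    shift = np.abs(suspect_acts.mean(axis=0) - benign_acts.mean(axis=0))
    return np.argsort(shift)[-k:]  # indices of the most shifted neurons

def mask_neurons(acts, compromised):
    """Zero out flagged neurons so downstream layers predict from the
    benign neuron set only, breaking the backdoor-trigger link."""
    masked = acts.copy()
    masked[:, compromised] = 0.0
    return masked

# Toy usage: 64 neurons, neuron 7 strongly activated only by triggers.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(200, 64))
suspect = rng.normal(0.0, 1.0, size=(200, 64))
suspect[:, 7] += 5.0  # simulated trigger-linked neuron
bad = flag_inconsistent_neurons(benign, suspect, k=1)
print(bad)  # -> [7]
cleaned = mask_neurons(suspect, bad)
```

In this toy setup the masked activations would then be fed to the rest of the network; the paper's actual mechanism for identifying the compromised neuron set and regulating predictions may differ substantially.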