1. Diff-Cleanse: Identifying and Mitigating Backdoor Attacks in Diffusion Models
- Author
Jiang Hao, Xiao Jin, Hu Xiaoguang, Chen Tianyou, and Zhao Jiajia
- Subjects
Computer Science - Cryptography and Security, Computer Science - Machine Learning
- Abstract
Diffusion models (DMs) are regarded as one of the most advanced generative models today, yet recent studies suggest that they are vulnerable to backdoor attacks, which establish hidden associations between particular input patterns and model behaviors, compromising model integrity by causing undesirable actions with manipulated inputs. This vulnerability poses substantial risks, including reputational damage to model owners and the dissemination of harmful content. To mitigate the threat of backdoor attacks, there have been some investigations into backdoor detection and model repair. However, previous work fails to reliably purify models backdoored by state-of-the-art attack methods, leaving this problem largely underexplored. To bridge this gap, we introduce Diff-Cleanse, a novel two-stage backdoor defense framework specifically designed for DMs. The first stage employs a novel trigger inversion technique to reconstruct the trigger and detect the backdoor, and the second stage utilizes a structural pruning method to eliminate the backdoor. We evaluate our framework on hundreds of DMs attacked by three existing backdoor attack methods with a wide range of hyperparameter settings. Extensive experiments demonstrate that Diff-Cleanse achieves nearly 100% detection accuracy and effectively mitigates backdoor impacts, preserving the model's benign performance with minimal compromise. Our code is available at https://github.com/shymuel/diff-cleanse.
- Published
2024
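The abstract describes a two-stage pipeline: trigger inversion for detection, then structural pruning for removal. The sketch below illustrates that general shape in PyTorch; it is not the authors' Diff-Cleanse implementation (see the linked repository for that). The `ToyDenoiser` network, the all-zeros attacker target, the inversion loss, and the activation-difference pruning criterion are all illustrative assumptions, not details taken from the paper.

```python
# Conceptual sketch of a two-stage backdoor defense for diffusion models:
# (1) invert a candidate trigger to detect a backdoor, (2) prune the
# channels most sensitive to that trigger. Illustrative only; the real
# method is at https://github.com/shymuel/diff-cleanse.
import torch
import torch.nn as nn


class ToyDenoiser(nn.Module):
    """Tiny stand-in for a diffusion model's noise-prediction network."""

    def __init__(self, channels: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def invert_trigger(model: nn.Module, image_size: int = 16,
                   steps: int = 200, lr: float = 0.1) -> torch.Tensor:
    """Stage 1 (sketch): optimize a small additive pattern that pushes the
    model's output toward an assumed attacker target. A small pattern that
    works reliably is treated as evidence of a backdoor."""
    for p in model.parameters():          # freeze the model under test
        p.requires_grad_(False)
    trigger = torch.zeros(1, 1, image_size, image_size, requires_grad=True)
    target = torch.zeros(1, 1, image_size, image_size)  # assumed target
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        noise = torch.randn(4, 1, image_size, image_size)
        out = model(noise + trigger)
        # Match the target while keeping the trigger sparse/small.
        loss = ((out - target) ** 2).mean() + 1e-3 * trigger.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return trigger.detach()


def prune_trigger_sensitive_channels(model: ToyDenoiser, trigger: torch.Tensor,
                                     image_size: int = 16,
                                     ratio: float = 0.25) -> torch.Tensor:
    """Stage 2 (sketch): zero out the conv channels whose activations change
    most when the inverted trigger is applied (a simple structural-pruning
    stand-in)."""
    first_conv = model.net[0]
    noise = torch.randn(16, 1, image_size, image_size)
    with torch.no_grad():
        clean_act = first_conv(noise).abs().mean(dim=(0, 2, 3))
        trig_act = first_conv(noise + trigger).abs().mean(dim=(0, 2, 3))
        sensitivity = (trig_act - clean_act).abs()
        k = max(1, int(ratio * sensitivity.numel()))
        to_prune = sensitivity.topk(k).indices
        first_conv.weight[to_prune] = 0.0
        if first_conv.bias is not None:
            first_conv.bias[to_prune] = 0.0
    return to_prune


if __name__ == "__main__":
    model = ToyDenoiser()
    trigger = invert_trigger(model)
    pruned = prune_trigger_sensitive_channels(model, trigger)
    print(f"inverted trigger mean |value|: {trigger.abs().mean():.4f}")
    print(f"pruned channel indices: {pruned.tolist()}")
```

In this toy setup the pruning criterion simply compares clean and triggered activations; the paper's structural pruning and its detection threshold for the inverted trigger are more involved, so treat this only as a map of the two stages, not as a faithful reproduction.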