
Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency

Authors :
Liu, Xiaogeng
Li, Minghui
Wang, Haoyu
Hu, Shengshan
Ye, Dengpan
Jin, Hai
Wu, Libing
Xiao, Chaowei
Publication Year :
2023

Abstract

Deep neural networks are proven to be vulnerable to backdoor attacks. Detecting the trigger samples during the inference stage, i.e., test-time trigger sample detection, can prevent the backdoor from being triggered. However, existing detection methods often require the defenders to have high accessibility to victim models, extra clean data, or knowledge about the appearance of backdoor triggers, limiting their practicality. In this paper, we propose the test-time corruption robustness consistency evaluation (TeCo), a novel test-time trigger sample detection method that only needs the hard-label outputs of the victim models, without any extra information. Our journey begins with the intriguing observation that backdoor-infected models perform similarly across different image corruptions on clean images, but perform discrepantly on trigger samples. Based on this phenomenon, we design TeCo to evaluate test-time robustness consistency by calculating the deviation of the severity that leads to a prediction transition across different corruptions. Extensive experiments demonstrate that TeCo outperforms state-of-the-art defenses, which even require certain information about the trigger types or access to clean data, across different backdoor attacks, datasets, and model architectures, achieving up to 10% higher AUROC and 5 times greater stability.
Comment: Accepted by CVPR 2023. Code is available at https://github.com/CGCL-codes/TeCo
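The scoring idea the abstract describes can be sketched as follows. This is a hedged illustration, not the authors' implementation: for each corruption type, find the smallest severity at which the model's hard-label prediction flips away from its clean-input prediction, then score the input by the deviation of those flip severities across corruption types. The `predict`, `darken`, and `wash_out` functions below are hypothetical toy stand-ins for a real classifier and real image corruptions.

```python
import numpy as np

def teco_score(image, predict, corruptions, max_severity=5):
    """Sketch of a TeCo-style consistency score (assumed, not official code).

    For each corruption, find the smallest severity (1..max_severity) at
    which the hard-label prediction flips from the clean prediction; if it
    never flips, use max_severity + 1. Return the standard deviation of
    the flip severities: clean inputs tend to flip at similar severities
    (low score), trigger samples flip inconsistently (high score).
    """
    clean_label = predict(image)
    flip_severities = []
    for corrupt in corruptions:
        flip = max_severity + 1  # sentinel: never flipped in tested range
        for severity in range(1, max_severity + 1):
            if predict(corrupt(image, severity)) != clean_label:
                flip = severity
                break
        flip_severities.append(flip)
    return float(np.std(flip_severities))

# Toy demo: a "classifier" thresholding mean brightness, and two
# hypothetical corruptions that darken the image at different rates.
img = np.full((8, 8), 0.5)

def predict(x):
    return int(x.mean() > 0.4)

def darken(x, s):
    return x - 0.05 * s

def wash_out(x, s):
    return x - 0.02 * s

score = teco_score(img, predict, [darken, wash_out])
print(score)
```

Here `darken` flips the toy prediction at severity 2 and `wash_out` at severity 5, so the score is the standard deviation of [2, 5]; a detector would threshold this score to flag likely trigger samples.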

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1381614417
Document Type :
Electronic Resource