
SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning

Authors :
Choe, Minyeong
Park, Cheolhee
Seo, Changho
Kim, Hyunil
Publication Year :
2024

Abstract

Federated Learning (FL) is a promising approach for training machine learning models while preserving data privacy, but its distributed nature makes it vulnerable to backdoor attacks; NLP tasks are particularly exposed, yet related research remains limited. This paper introduces SDBA, a novel backdoor attack mechanism designed for NLP tasks in FL environments. Our systematic analysis across LSTM and GPT-2 models identifies the layers most vulnerable to backdoor injection, and SDBA achieves both stealth and long-lasting durability through layer-wise gradient masking and top-k% gradient masking within those layers. Experiments on next-token prediction and sentiment analysis tasks show that SDBA outperforms existing backdoors in durability and effectively bypasses representative defense mechanisms, with notable performance on LLMs such as GPT-2. These results underscore the need for robust defense strategies in NLP-based FL systems.

Comment: 13 pages, 13 figures. This work has been submitted to the IEEE for possible publication.
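The abstract mentions top-k% gradient masking applied within selected vulnerable layers. The paper's exact procedure is not given here, so the following is only an illustrative sketch of the general idea, assuming a per-layer gradient dictionary and a hypothetical set of attacker-chosen target layers: keep only the top-k% largest-magnitude gradient entries in those layers and zero the rest, so the malicious update touches fewer parameters.

```python
import numpy as np

def topk_mask(grad: np.ndarray, k_percent: float) -> np.ndarray:
    """Zero out all but the top-k% largest-magnitude entries of a gradient tensor."""
    flat = np.abs(grad).ravel()
    k = max(1, int(len(flat) * k_percent / 100))
    # threshold = k-th largest magnitude across the layer
    thresh = np.partition(flat, -k)[-k]
    return grad * (np.abs(grad) >= thresh)

# Hypothetical example: mask only an attacker-chosen "vulnerable" layer,
# leaving the other layers' gradients untouched.
grads = {
    "embedding": np.ones((4, 4)),
    "output": np.arange(1, 17, dtype=float).reshape(4, 4),
}
target_layers = {"output"}  # assumed set of vulnerable layers, not from the paper
masked = {
    name: topk_mask(g, 25.0) if name in target_layers else g
    for name, g in grads.items()
}
```

With k = 25% on a 16-entry layer, only the 4 largest-magnitude entries survive; the untouched layer keeps its full gradient, which is how a sparse update can blend in with benign clients.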

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2409.14805
Document Type :
Working Paper