
A Four-Pronged Defense Against Byzantine Attacks in Federated Learning

Authors :
Wan, Wei
Hu, Shengshan
Li, Minghui
Lu, Jianrong
Zhang, Longling
Zhang, Leo Yu
Jin, Hai
Publication Year :
2023

Abstract

Federated learning (FL) is a nascent distributed learning paradigm for training a shared global model without violating users' privacy. FL has been shown to be vulnerable to various Byzantine attacks, in which malicious participants independently or collusively upload well-crafted updates to degrade the performance of the global model. However, existing defenses mitigate only part of the Byzantine attack spectrum and do not provide an all-sided shield for FL; they cannot simply be combined, because they rely on contradictory assumptions. In this paper, we propose FPD, a four-pronged defense against both non-colluding and colluding Byzantine attacks. Our main idea is to filter updates by absolute similarity rather than the relative similarity used in existing works. To this end, we first propose a reliable client selection strategy to nip the majority of threats in the bud. Then we design a simple but effective score-based detection method to mitigate colluding attacks. Third, we construct an enhanced spectral-based outlier detector to accurately discard abnormal updates when the training data is not independent and identically distributed (non-IID). Finally, we design an update denoising module to rectify the direction of slightly noisy but harmful updates. The four sequentially combined modules effectively reconcile the contradiction in addressing non-colluding and colluding Byzantine attacks. Extensive experiments on three benchmark image classification datasets against four state-of-the-art Byzantine attacks demonstrate that FPD drastically outperforms existing defenses in both IID and non-IID scenarios (with a 30% improvement in model accuracy).

Comment: This paper has been accepted by the 31st ACM International Conference on Multimedia (MM '23).
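To make the stated core idea concrete, below is a minimal sketch of filtering client updates by absolute similarity (a fixed threshold against a trusted reference) instead of relative similarity (ranking clients against one another). This is not the authors' implementation: the server-held reference update and the threshold value `tau` are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened model updates."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0


def filter_by_absolute_similarity(updates, reference, tau=0.2):
    """Keep updates whose similarity to a trusted reference exceeds a fixed
    (absolute) threshold, rather than comparing clients to each other
    (relative similarity). `reference` and `tau` are assumptions made for
    this sketch, not parameters specified by the paper's abstract."""
    return [u for u in updates if cosine(u, reference) >= tau]


def aggregate(updates):
    """FedAvg-style averaging of the surviving updates."""
    return np.mean(np.stack(updates), axis=0)


# Usage: three benign updates roughly aligned with the reference and one
# sign-flipped (Byzantine) update, which the absolute test discards.
rng = np.random.default_rng(0)
reference = rng.normal(size=100)
benign = [reference + 0.1 * rng.normal(size=100) for _ in range(3)]
byzantine = [-reference]

kept = filter_by_absolute_similarity(benign + byzantine, reference)
global_update = aggregate(kept)
print(f"kept {len(kept)} of {len(benign) + len(byzantine)} updates")
```

The point of the absolute test is that colluding attackers who are mutually similar still fail the fixed threshold, whereas a relative (ranking-based) rule could be fooled into keeping them.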

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438469908
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.1145/3581783.3612474