
Challenges and Approaches for Mitigating Byzantine Attacks in Federated Learning

Authors:
Shi, Junyu
Wan, Wei
Hu, Shengshan
Lu, Jianrong
Zhang, Leo Yu
Publication Year:
2021

Abstract

The recently emerged federated learning (FL) is an attractive distributed learning framework in which numerous wireless end-user devices can train a global model while their data remain local. Compared with the traditional machine learning paradigm, which collects user data for centralized storage and thereby incurs a heavy communication burden and data-privacy concerns, this approach both saves network bandwidth and protects data privacy. Despite this promising prospect, the Byzantine attack, an intractable threat in conventional distributed systems, has been shown to be highly effective against FL as well. In this paper, we conduct a comprehensive investigation of state-of-the-art strategies for defending against Byzantine attacks in FL. We first provide a taxonomy of existing defense solutions according to the techniques they use, followed by an across-the-board comparison and discussion. We then propose a new Byzantine attack method, called the weight attack, that defeats these defense schemes, and we conduct experiments to demonstrate its threat. The results show that existing defense solutions, although abundant, are still far from fully protecting FL. Finally, we indicate possible countermeasures to the weight attack and highlight several challenges and future research directions for mitigating Byzantine attacks in FL.

Comment: The paper has been accepted by the 21st IEEE International Conference on Trust, Security and Privacy in Computing and Communications (IEEE TrustCom-22)
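
For readers unfamiliar with the setting the abstract describes, the sketch below illustrates it in miniature: honest clients compute local updates on their own data, a few Byzantine clients send arbitrary updates, and the server aggregates. It is not the paper's weight attack or any specific surveyed defense; the linear model, client counts, sign of the attack, and the coordinate-wise-median aggregator are illustrative assumptions only.

```python
# Minimal sketch (assumptions, not the paper's method): federated averaging
# over simulated clients, with a coordinate-wise median shown as one example
# of the kind of Byzantine-robust aggregation rules the survey covers.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One honest client's local gradient steps on a least-squares objective."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def byzantine_update(global_w, scale=10.0):
    """A simple malicious client: return an arbitrary, large perturbation."""
    return global_w + scale * rng.standard_normal(global_w.shape)

def aggregate_mean(updates):
    """Plain FedAvg-style mean; easily skewed by a few Byzantine updates."""
    return np.mean(updates, axis=0)

def aggregate_median(updates):
    """Coordinate-wise median, a common Byzantine-robust aggregator."""
    return np.median(updates, axis=0)

# Synthetic linear-regression task shared by the honest clients.
d, n_clients, n_byz = 5, 10, 3
w_true = rng.standard_normal(d)
datasets = []
for _ in range(n_clients - n_byz):
    X = rng.standard_normal((50, d))
    datasets.append((X, X @ w_true + 0.01 * rng.standard_normal(50)))

for aggregate in (aggregate_mean, aggregate_median):
    w = np.zeros(d)
    for _ in range(30):  # communication rounds
        updates = [local_update(w, X, y) for X, y in datasets]
        updates += [byzantine_update(w) for _ in range(n_byz)]
        w = aggregate(np.stack(updates))
    print(f"{aggregate.__name__}: error = {np.linalg.norm(w - w_true):.3f}")
```

Running this shows the mean aggregator drifting far from the true model while the median stays close, which is the basic tension between attack and defense that the paper investigates.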

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2112.14468
Document Type:
Working Paper