Understanding Byzantine Robustness in Federated Learning with A Black-box Server

Authors:
Zhao, Fangyuan
Xie, Yuexiang
Ren, Xuebin
Ding, Bolin
Yang, Shusen
Li, Yaliang
Publication Year:
2024

Abstract

Federated learning (FL) is vulnerable to Byzantine attacks, in which some participants attempt to damage the utility or hinder the convergence of the learned model by sending malicious model updates. Previous works propose robust rules for aggregating updates from participants to defend against different types of Byzantine attacks, while attackers can in turn design advanced Byzantine attack algorithms targeting a specific aggregation rule once it is known. In practice, FL systems can involve a black-box server that makes the adopted aggregation rule inaccessible to participants, which can naturally defend against or weaken some Byzantine attacks. In this paper, we provide an in-depth understanding of the Byzantine robustness of the FL system with a black-box server. Our investigation demonstrates the improved Byzantine robustness of a black-box server employing a dynamic defense strategy. We provide both empirical evidence and theoretical analysis to show that the black-box server can mitigate the worst-case attack impact from a maximum level to an expectation level, which is attributed to the inherent inaccessibility and randomness offered by a black-box server. The source code is available at https://github.com/alibaba/FederatedScope/tree/Byzantine_attack_defense to promote further research in the community.
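
To make the dynamic defense concrete, here is a minimal Python sketch of a black-box server that privately samples one robust aggregation rule per round; an attacker who optimizes against any single rule then only degrades the aggregate in expectation over the server's random choice. All names and parameters below (coordinate_median, trimmed_mean, krum, black_box_aggregate, trim_ratio, num_byzantine) are illustrative assumptions, not the paper's actual FederatedScope implementation.

# Sketch of a black-box server with a dynamic (randomized) defense:
# each round, the server privately samples one robust aggregation rule,
# so an attacker cannot tailor updates to a fixed, known rule.
import random
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median of the client updates."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, trim_ratio=0.2):
    """Coordinate-wise mean after trimming the largest/smallest values."""
    stacked = np.sort(np.stack(updates), axis=0)
    k = int(len(updates) * trim_ratio)
    return stacked[k:len(updates) - k].mean(axis=0)

def krum(updates, num_byzantine=1):
    """Select the update closest (in summed squared distance) to its neighbors."""
    n = len(updates)
    flat = np.stack([u.ravel() for u in updates])
    dists = np.linalg.norm(flat[:, None] - flat[None, :], axis=2) ** 2
    m = n - num_byzantine - 2  # number of nearest neighbors to score against
    scores = [np.sort(d)[1:m + 1].sum() for d in dists]  # index 0 is self-distance
    return updates[int(np.argmin(scores))]

RULES = [coordinate_median, trimmed_mean, krum]

def black_box_aggregate(updates, rng=random):
    """Sample a robust rule privately; clients only observe the aggregate."""
    rule = rng.choice(RULES)
    return rule(updates)

if __name__ == "__main__":
    rng = random.Random(0)
    honest = [np.random.default_rng(i).normal(size=10) for i in range(8)]
    malicious = [np.full(10, 100.0)]  # a crude scaling attack
    print(black_box_aggregate(honest + malicious, rng=rng))

Because the sampled rule never leaves the server, tailoring malicious updates to one fixed rule no longer realizes the worst case; the attack impact is averaged over the rule distribution, matching the abstract's shift from a maximum level to an expectation level.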

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.06042
Document Type:
Working Paper