
Federated Learning Robust to Byzantine Attacks: Achieving Zero Optimality Gap

Authors :
Zuo, Shiyuan
Fan, Rongfei
Hu, Han
Zhang, Ning
Gong, Shimin
Publication Year :
2023

Abstract

In this paper, we propose a robust aggregation method for federated learning (FL) that can effectively tackle malicious Byzantine attacks. At each user, the model parameters are first updated over multiple local steps, the number of which is adjustable across iterations, and then pushed directly to the aggregation center. This reduces the number of interactions between the aggregation center and the users, allows each user to set its training parameters flexibly, and lowers the computation burden compared with existing works that must combine multiple historical model parameters. At the aggregation center, the geometric median is used to combine the model parameters received from the users. A rigorous proof shows that our proposed method achieves a zero optimality gap with linear convergence, as long as the fraction of Byzantine attackers is below one half. Numerical results verify the effectiveness of the proposed method.
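To illustrate the aggregation step described above, the following is a minimal sketch of geometric-median aggregation of client updates, computed here with the standard Weiszfeld iteration. The iteration, the tolerance values, and the toy client data are assumptions for illustration only; they are not taken from the paper, whose exact aggregation rule and local-update schedule are given in the full text.

    import numpy as np

    def geometric_median(points, max_iter=100, tol=1e-7):
        """Approximate geometric median of the row vectors in `points`
        via the Weiszfeld iteration (illustrative sketch)."""
        median = points.mean(axis=0)  # start from the arithmetic mean
        for _ in range(max_iter):
            dists = np.linalg.norm(points - median, axis=1)
            dists = np.maximum(dists, 1e-12)  # guard against division by zero
            weights = 1.0 / dists
            new_median = (weights[:, None] * points).sum(axis=0) / weights.sum()
            if np.linalg.norm(new_median - median) < tol:
                break
            median = new_median
        return median

    # Toy aggregation round (hypothetical data): 7 honest users whose locally
    # updated models cluster near the true parameters, plus 3 Byzantine users
    # pushing arbitrary values.
    rng = np.random.default_rng(0)
    honest = rng.normal(loc=1.0, scale=0.1, size=(7, 5))
    byzantine = rng.normal(loc=50.0, scale=5.0, size=(3, 5))
    updates = np.vstack([honest, byzantine])

    print("mean aggregate:   ", updates.mean(axis=0))       # pulled toward attackers
    print("geometric median: ", geometric_median(updates))  # stays near honest models

Running the toy example shows why the geometric median is the robust choice: the plain average is dragged far from the honest cluster by the minority of Byzantine updates, while the geometric median remains close to the honest models, consistent with the requirement that the fraction of attackers be below one half.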

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2308.10427
Document Type :
Working Paper