
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information

Authors :
Cao, Xiaoyu
Jia, Jinyuan
Zhang, Zaixi
Gong, Neil Zhenqiang
Publication Year :
2022

Abstract

Federated learning is vulnerable to poisoning attacks, in which malicious clients poison the global model by sending malicious model updates to the server. Existing defenses either prevent a small number of malicious clients from poisoning the global model via robust federated learning methods or detect malicious clients when there are a large number of them. However, how to recover the global model from a poisoning attack after the malicious clients are detected remains an open challenge. A naive solution is to remove the detected malicious clients and train a new global model from scratch, which incurs a large cost that may be intolerable for resource-constrained clients such as smartphones and IoT devices. In this work, we propose FedRecover, which can recover an accurate global model from poisoning attacks at a small cost to the clients. Our key idea is that the server estimates the clients' model updates, instead of asking the clients to compute and communicate them, during the recovery process. In particular, while training the poisoned global model, the server stores the global model and the clients' model updates in each round. During recovery, the server estimates each client's model update in each round from this stored historical information. Moreover, we further optimize FedRecover to recover a more accurate global model using warm-up, periodic correction, abnormality fixing, and final tuning strategies, in which the server asks the clients to compute and communicate their exact model updates. Theoretically, we show that, under some assumptions, the global model recovered by FedRecover is close to or the same as that recovered by train-from-scratch. Empirically, our evaluation on four datasets and three federated learning methods, under both untargeted and targeted poisoning attacks (e.g., backdoor attacks), shows that FedRecover is both accurate and efficient.

Comment: To appear in IEEE S&P 2023
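The abstract describes the recovery loop only at a high level. The minimal Python sketch below illustrates how such a loop might be organized; it is not the paper's implementation. All names (recover, exact_update, estimate_update), the FedAvg-style aggregation, and the constants are illustrative assumptions, and the placeholder estimator simply reuses the stored historical update, whereas the paper derives a more principled server-side estimate from the stored global models and updates.

```python
import numpy as np

# Illustrative sketch only: the function names, constants, and estimator
# here are assumptions for exposition, not FedRecover's actual API.
rng = np.random.default_rng(0)
DIM, LR, NORM_THRESHOLD = 10, 0.1, 5.0

def exact_update(client_id, w):
    # In real federated learning, the client would compute this locally
    # on its private data and send it to the server (the expensive path).
    return -LR * rng.standard_normal(DIM)

def estimate_update(stored_update, w, stored_w):
    # Placeholder server-side estimator: reuse the update stored during
    # the original (poisoned) run. The paper instead corrects the stored
    # update for the drift between w and stored_w using historical info.
    return stored_update

def recover(w0, history, benign_clients, rounds=50,
            warmup=5, correct_every=10, final_tune=5):
    """Recover a global model after detected malicious clients are removed.

    history[t] maps client id -> (stored global model, stored update)
    from round t of the original training run.
    """
    w = w0.copy()
    for t in range(rounds):
        # Warm-up, periodic correction, and final tuning rounds use exact
        # client updates; all other rounds use server-side estimates.
        exact_round = (t < warmup or t % correct_every == 0
                       or t >= rounds - final_tune)
        updates = []
        for i in benign_clients:
            if exact_round:
                g = exact_update(i, w)
            else:
                stored_w, stored_g = history[t][i]
                g = estimate_update(stored_g, w, stored_w)
                # Abnormality fixing: fall back to an exact update when
                # the estimate looks suspiciously large.
                if np.linalg.norm(g) > NORM_THRESHOLD:
                    g = exact_update(i, w)
            updates.append(g)
        w = w + np.mean(updates, axis=0)  # FedAvg-style aggregation
    return w

# Toy usage with synthetic history for three benign clients.
clients = [0, 1, 2]
history = [{i: (rng.standard_normal(DIM), -LR * rng.standard_normal(DIM))
            for i in clients} for _ in range(50)]
w_recovered = recover(rng.standard_normal(DIM), history, clients)
print(w_recovered.shape)  # (10,)
```

The point of the structure is the cost trade-off stated in the abstract: most rounds avoid client computation and communication entirely, while a small, scheduled fraction of exact rounds (plus the abnormality fallback) keeps the recovered model close to the one train-from-scratch would produce.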

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1381576045
Document Type :
Electronic Resource