
Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning.

Authors :
Tian, Yuchen
Zhang, Weizhe
Simpson, Andrew
Liu, Yang
Jiang, Zoe Lin
Source :
Computer Journal. Mar 2023, Vol. 66, Issue 3, p711-726. 16p.
Publication Year :
2023

Abstract

Federated learning (FL), a variant of distributed learning (DL), supports the training of a shared model without accessing private data from different sources. Despite its benefits with regard to privacy preservation, FL's distributed nature and privacy constraints make it vulnerable to data poisoning attacks. Existing defenses, primarily designed for DL, are typically not well adapted to FL. In this paper, we study such attacks and defenses. We start from the perspective of DL and then consider a real-world FL scenario, with the aim of exploring the requisites of a desirable defense in FL. Our study shows that (i) the batch size used in each training round affects the effectiveness of defenses in DL, (ii) the defenses investigated are somewhat effective and moderately influenced by batch size in FL settings and (iii) non-IID data makes it more difficult to defend against data poisoning attacks in FL. Based on these findings, we discuss the key challenges and possible directions for defending against such attacks in FL. In addition, we propose Detect and Suppress the Potential Outliers (DSPO), a defense against data poisoning attacks in FL scenarios. Our results show that DSPO outperforms other defenses in several cases. [ABSTRACT FROM AUTHOR]
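The abstract does not describe how DSPO scores or filters client contributions, so the following is only a minimal sketch of a generic detect-and-suppress aggregation step for FL, assuming flattened client updates and a median-distance outlier heuristic; the function name, threshold and scoring rule are illustrative assumptions, not the authors' algorithm.

    # Hypothetical sketch of a detect-and-suppress aggregation step for FL.
    # This is NOT the paper's DSPO algorithm; the median-distance heuristic,
    # threshold and names below are assumptions for illustration only.
    import numpy as np

    def detect_and_suppress(client_updates, z_thresh=2.0):
        """Aggregate client updates while down-weighting likely outliers.

        client_updates: list of 1-D numpy arrays (flattened model deltas).
        z_thresh: assumed cut-off on the robust z-score of each update's
                  distance to the coordinate-wise median update.
        """
        updates = np.stack(client_updates)          # shape: (n_clients, n_params)
        median = np.median(updates, axis=0)         # robust reference update
        dists = np.linalg.norm(updates - median, axis=1)

        # Robust z-score of each distance via the median absolute deviation.
        mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
        scores = np.abs(dists - np.median(dists)) / mad

        keep = scores < z_thresh                    # suppress suspected outliers
        if not np.any(keep):                        # fall back to plain averaging
            keep = np.ones(len(updates), dtype=bool)
        return updates[keep].mean(axis=0)

    # Example: 9 benign updates near zero plus 1 poisoned update far away.
    rng = np.random.default_rng(0)
    benign = [rng.normal(0, 0.01, 100) for _ in range(9)]
    poisoned = [np.full(100, 5.0)]
    agg = detect_and_suppress(benign + poisoned)
    print(np.abs(agg).max())  # stays close to 0: the poisoned update is suppressed

The coordinate-wise median is used as the reference point because a small fraction of poisoned updates cannot shift it far, which is the usual rationale behind this family of robust-aggregation defenses.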

Details

Language :
English
ISSN :
00104620
Volume :
66
Issue :
3
Database :
Academic Search Index
Journal :
Computer Journal
Publication Type :
Academic Journal
Accession number :
162503606
Full Text :
https://doi.org/10.1093/comjnl/bxab192