
Evading model poisoning attacks in federated learning by a long-short-term-memory-based approach.

Authors :
Arazzi, Marco
Lax, Gianluca
Nocera, Antonino
Source :
Integrated Computer-Aided Engineering. Dec 2024, p. 1.
Publication Year :
2024

Abstract

Federated Learning is designed to build a global model from a set of local learning tasks carried out by several clients. Each client trains the global model on local data and sends back only the computed model updates. Although this approach preserves data privacy, several issues arise, and model poisoning is one of the most significant. In this attack, a limited number of compromised clients cooperate to corrupt the global model by sending back malicious model updates. A common countermeasure to model poisoning is to discard model updates that differ from the majority by more than a suitable threshold. However, several attacks can still elude this countermeasure, such as the LIE attack, which aims to introduce an error in the model that stays below the threshold. In this paper, we propose a new approach to detecting malicious updates based on a suitably built and trained LSTM network. The experimental validation shows that our approach is able to disarm the LIE and Fang attacks, which are the most effective in this context. [ABSTRACT FROM AUTHOR]
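
For context, the abstract refers to the common server-side countermeasure of discarding client updates that deviate from the majority by more than a threshold before aggregation. The Python sketch below illustrates only that generic idea, not the paper's LSTM-based detector; the function name, the median-based distance measure, and the threshold value are assumptions made purely for illustration.

# Minimal sketch (assumed names and threshold, not the paper's implementation):
# the server drops updates whose distance from the coordinate-wise median is
# unusually large, then averages the rest.
import numpy as np

def filter_and_aggregate(updates, threshold=2.0):
    """updates: list of 1-D numpy arrays, one flattened model update per client."""
    U = np.stack(updates)                        # (num_clients, num_params)
    median = np.median(U, axis=0)                # robust reference update
    dists = np.linalg.norm(U - median, axis=1)   # each client's distance to it
    scale = np.median(dists) + 1e-12             # typical distance, avoids div-by-zero
    keep = dists / scale <= threshold            # discard far-away (suspicious) updates
    return U[keep].mean(axis=0), keep

# Toy usage: nine benign clients plus one crude poisoner.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=10) for _ in range(9)]
poisoned = [np.full(10, 5.0)]
aggregate, kept = filter_and_aggregate(benign + poisoned)
print("clients kept:", kept)

As the abstract notes, attacks such as LIE are crafted so that the injected error stays below such a threshold, which is the gap the paper's LSTM-based detection approach targets.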

Details

Language :
English
ISSN :
1069-2509
Database :
Academic Search Index
Journal :
Integrated Computer-Aided Engineering
Publication Type :
Academic Journal
Accession number :
181822361
Full Text :
https://doi.org/10.1177/10692509241301588