Preventing Image Data Poisoning Attacks in Federated Machine Learning by an Encrypted Verification Key.
- Source :
- Procedia Computer Science; 2023, Vol. 225, p2723-2732, 10p
- Publication Year :
- 2023
Abstract
- Recent studies have uncovered security issues in most federated learning models. A common false assumption in federated learning is that participants are trustworthy and would never train on polluted data. This vulnerability allows an attacker to train a local model on polluted data and send the polluted updates to the training server for aggregation, potentially poisoning the global model. In such a setting, it is challenging for an edge server to thoroughly inspect the data used for model training or to supervise every edge device. This study evaluates the vulnerabilities present in federated learning, explores the types of attacks that can occur, and presents a robust prevention scheme to address them. The proposed scheme introduces an encrypted verification scheme that enables federated learning servers to monitor participants actively in real time and identify infected individuals. The paper outlines the protocol design of this prevention scheme and presents experimental results that demonstrate its effectiveness.
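- The record does not describe the verification protocol itself. As a loose illustration only of how a server-side verification key might gate aggregation, the Python sketch below assumes a pre-shared HMAC key per participant and a FedAvg-style mean over the updates that verify; the HMAC construction and all names here are assumptions for illustration, not the authors' design.

```python
import hashlib
import hmac

import numpy as np

def sign_update(update: np.ndarray, key: bytes) -> bytes:
    """Client side: tag a local model update with an HMAC under a pre-shared key."""
    return hmac.new(key, update.tobytes(), hashlib.sha256).digest()

def verify_and_aggregate(updates, tags, keys):
    """Server side: drop updates whose tags fail verification, then average the rest."""
    accepted = []
    for u, t, k in zip(updates, tags, keys):
        expected = hmac.new(k, u.tobytes(), hashlib.sha256).digest()
        if hmac.compare_digest(expected, t):
            accepted.append(u)
        # an update with a failed tag flags its sender as a suspect participant
    if not accepted:
        raise ValueError("no verified updates to aggregate")
    return np.mean(accepted, axis=0)

# Usage: two honest clients and one whose tag does not verify.
key_a, key_b, key_c = b"ka", b"kb", b"kc"
u_a, u_b, u_c = np.ones(4), np.full(4, 3.0), np.full(4, 100.0)  # u_c is poisoned
tags = [sign_update(u_a, key_a), sign_update(u_b, key_b), b"forged-tag"]
print(verify_and_aggregate([u_a, u_b, u_c], tags, [key_a, key_b, key_c]))  # [2. 2. 2. 2.]
```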
- Subjects :
- Federated learning
- Machine learning
- Image encryption
Details
- Language :
- English
- ISSN :
- 1877-0509
- Volume :
- 225
- Database :
- Supplemental Index
- Journal :
- Procedia Computer Science
- Publication Type :
- Academic Journal
- Accession number :
- 174059316
- Full Text :
- https://doi.org/10.1016/j.procs.2023.10.264