
How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers?

Authors: Wang, Su; Sahay, Rajeev; Brinton, Christopher G.
Publication Year: 2023

Abstract

There has been recent interest in leveraging federated learning (FL) for radio signal classification tasks. In FL, participating devices train on their own local datasets and periodically communicate model parameters to a central server, which aggregates them into a global model. While FL offers privacy and security advantages because raw data never leaves the devices, it remains susceptible to several adversarial attacks. In this work, we reveal the susceptibility of FL-based signal classifiers to model poisoning attacks, which compromise the training process even though the adversary never observes data transmissions. Specifically, we develop an attack framework in which compromised FL devices perturb their local datasets using adversarial evasion attacks. As a result, the training of the global model significantly degrades on in-distribution signals (i.e., signals received over channels with identical distributions at each edge device). We compare our framework to previously proposed FL attacks and show that as few as one adversarial device applying a low-powered perturbation can mount a potent model poisoning attack against the global classifier. Moreover, we find that classification performance degrades proportionally with the number of devices participating in adversarial poisoning.

Comment: 6 pages, Accepted to IEEE ICC 2023
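To make the described mechanism concrete, below is a minimal PyTorch sketch of the attack setup, assuming FGSM as the evasion attack and FedAvg as the aggregation rule; the abstract fixes neither choice, and names such as fgsm_perturb, local_update, and fedavg_round are hypothetical illustrations, not the authors' implementation.

```python
import copy
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps):
    """Evasion-style FGSM perturbation of a local signal batch (assumed attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x_adv), y).backward()
    # "Low-powered" perturbation: small step of size eps along the loss-gradient sign
    return (x_adv + eps * x_adv.grad.sign()).detach()

def local_update(global_model, x, y, compromised, eps=0.05, lr=0.01, epochs=1):
    """One device's local training round; a compromised device trains on perturbed data."""
    model = copy.deepcopy(global_model)  # start from the current global model
    if compromised:
        x = fgsm_perturb(model, x, y, eps)  # poison the local dataset before training
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    return model.state_dict()

def fedavg_round(global_model, device_batches, compromised_ids):
    """Server side: aggregate all device updates by simple parameter averaging."""
    states = [local_update(global_model, x, y, i in compromised_ids)
              for i, (x, y) in enumerate(device_batches)]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```

In this sketch, enlarging compromised_ids mirrors the abstract's finding that classification performance degrades in proportion to the number of poisoning devices, while a single entry with a small eps corresponds to the one-device, low-powered case.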

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2301.08866
Document Type: Working Paper