Studying the Robustness of Anti-Adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors
- Authors
Pedro Miguel Sanchez Sanchez, Alberto Huertas Celdran, Timo Schenk, Adrian Lars Benjamin Iten, Gerome Bovet, Gregorio Martinez Perez, and Burkhard Stiller (University of Zurich)
- Subjects
Computer Science - Cryptography and Security (cs.CR); Computer Science - Artificial Intelligence (cs.AI); Sensors; Electrical and Electronic Engineering; Data models; Crowdsensing; Behavioral science; Fingerprint recognition; Sensor phenomena and characterization; Robustness
- Abstract
Device fingerprinting combined with Machine and Deep Learning (ML/DL) reports promising performance when detecting cyberattacks targeting data managed by resource-constrained spectrum sensors. However, the amount of data needed to train models and the privacy concerns of such scenarios limit the applicability of centralized ML/DL-based approaches. Federated learning (FL) addresses these limitations by creating federated and privacy-preserving models. However, FL is vulnerable to malicious participants, and the impact of adversarial attacks on federated models detecting spectrum sensing data falsification (SSDF) attacks on spectrum sensors has not been studied. To address this challenge, the first contribution of this work is the creation of a novel dataset suitable for FL that models the behavior (usage of CPU, memory, or file system, among others) of resource-constrained spectrum sensors affected by different SSDF attacks. The second contribution is a pool of experiments analyzing and comparing the robustness of federated models according to i) three families of spectrum sensors, ii) eight SSDF attacks, iii) four scenarios dealing with unsupervised (anomaly detection) and supervised (binary classification) federated models, iv) up to 33% of malicious participants implementing data and model poisoning attacks, and v) four aggregation functions acting as anti-adversarial mechanisms to increase the models' robustness.
- Published
- 2022
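The abstract mentions aggregation functions acting as anti-adversarial mechanisms against poisoned client updates. As a minimal illustrative sketch (not the paper's actual aggregation functions), the snippet below contrasts plain federated averaging with coordinate-wise median aggregation, one common robust rule: with roughly 25% malicious participants, the mean is dragged toward the poisoned values while the median stays near the honest ones. All names and values here are hypothetical.

```python
import numpy as np

def fed_avg(updates):
    """Plain federated averaging: coordinate-wise mean of client weight vectors."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Coordinate-wise median: a common robust aggregation rule."""
    return np.median(updates, axis=0)

# Nine honest clients report weights near 1.0; three malicious clients
# (25% of the federation) send poisoned updates of 100.0.
honest = [np.full(4, 1.0) + 0.01 * i for i in range(9)]
malicious = [np.full(4, 100.0) for _ in range(3)]
updates = np.stack(honest + malicious)

avg = fed_avg(updates)            # pulled far from 1.0 by the poisoned updates
med = coordinate_median(updates)  # stays close to the honest values
```

With these numbers the averaged model lands above 25 per coordinate, while the median stays below 1.1, which is why median-style rules are a standard baseline defense against model poisoning.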