1. Robustness Against Data Integrity Attacks in Decentralized Federated Load Forecasting.
- Author
Shabbir, Attia, Manzoor, Habib Ullah, Manzoor, Muhammad Nasir, Hussain, Sajjad, and Zoha, Ahmed
- Abstract
This study examines the impact of data integrity attacks on Federated Learning (FL) for load forecasting in smart grid systems, where privacy-sensitive data require robust management. While FL provides a privacy-preserving approach to distributed model training, it remains susceptible to attacks such as data poisoning, which can impair model performance. We compare Centralized Federated Learning (CFL) and Decentralized Federated Learning (DFL), using line, ring, and bus topologies, under adversarial conditions. Employing a three-layer Artificial Neural Network (ANN) with substation-level datasets (APEhourly, PJMEhourly, and COMEDhourly), we evaluate the system's resilience in the absence of anomaly detection. Results indicate that DFL significantly outperforms CFL in attack resistance, achieving Mean Absolute Percentage Errors (MAPEs) of 0.48%, 4.29%, and 0.702% across the datasets, compared with CFL MAPEs of 6.07%, 18.49%, and 10.19%. This demonstrates the potential of DFL as a resilient, secure solution for load forecasting in smart grids, minimizing dependence on anomaly detection to maintain data integrity.
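The abstract compares topologies by MAPE and contrasts serverless DFL averaging with CFL. As a rough illustration only (the function names, the scalar-weight simplification, and the three-neighbour gossip rule are assumptions, not details from the paper), a minimal sketch of the MAPE metric and one ring-topology averaging round might look like:

```python
# Illustrative sketch only: scalar "weights" stand in for full ANN
# parameter vectors, and the gossip rule is a generic ring average,
# not the paper's exact aggregation scheme.

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted)
    ) / len(actual)

def ring_average(weights, rounds=1):
    """One gossip round per iteration: each node averages its model
    with its two ring neighbours -- no central server, unlike CFL."""
    n = len(weights)
    for _ in range(rounds):
        weights = [
            (weights[(i - 1) % n] + weights[i] + weights[(i + 1) % n]) / 3.0
            for i in range(n)
        ]
    return weights
```

Repeated `ring_average` rounds drive all nodes toward consensus without any node seeing another's raw load data, which is the privacy property the abstract attributes to FL.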
- Published
- 2024