
The Impact of Data Distribution on Fairness and Robustness in Federated Learning

Authors :
Ozdayi, Mustafa Safa
Kantarcioglu, Murat
Publication Year :
2021

Abstract

Federated Learning (FL) is a distributed machine learning protocol that allows a set of agents to collaboratively train a model without sharing their datasets. This makes FL particularly suitable for settings where data privacy is desired. However, it has been observed that the performance of FL is closely tied to the similarity of the agents' local data distributions: as these distributions diverge, the accuracy of the trained models drops. In this work, we examine how variations in local data distributions affect not only accuracy but also the fairness and robustness of the trained models. Our experimental results indicate that the trained models exhibit higher bias and become more susceptible to attacks as local data distributions diverge. Importantly, the degradation in fairness and robustness can be much more severe than the degradation in accuracy. We therefore show that small distributional variations with little impact on accuracy can still matter if the trained model is to be deployed in a fairness- or security-critical context.

Comment: Published at The Third IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications. arXiv admin note: text overlap with arXiv:2010.07427, arXiv:2007.03767
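The collaborative training protocol the abstract describes can be sketched as a minimal FedAvg-style loop. This is an illustrative toy (linear regression with NumPy, non-IID data simulated by shifting each agent's feature distribution); the model, the local-update rule, and all parameter values here are assumptions for exposition, not the paper's actual setup.

```python
import numpy as np


def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One agent's local training: full-batch gradient descent on squared loss."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w


def fedavg(agent_data, rounds=20, dim=2):
    """Server loop: broadcast global weights, collect local updates, average.

    Each agent's model is weighted by its dataset size, as in standard FedAvg.
    No raw data ever leaves an agent -- only model parameters are exchanged.
    """
    w_global = np.zeros(dim)
    for _ in range(rounds):
        local_ws = [local_sgd(w_global.copy(), X, y) for X, y in agent_data]
        sizes = np.array([len(y) for _, y in agent_data], dtype=float)
        w_global = np.average(local_ws, axis=0, weights=sizes)
    return w_global


rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Simulated non-IID split: each agent draws features from a shifted
# distribution, so local datasets differ even though the labeling rule
# is shared. (Purely illustrative; not the paper's experimental design.)
agent_data = []
for shift in (-2.0, 0.0, 2.0):
    X = rng.normal(loc=shift, size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    agent_data.append((X, y))

w = fedavg(agent_data)
```

Because every agent's local objective has (nearly) the same minimizer in this toy, the averaged model still recovers `true_w`; the paper's point is that under realistic heterogeneity, where local optima diverge, accuracy, fairness, and robustness all degrade, with fairness and robustness often degrading fastest.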

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2112.01274
Document Type :
Working Paper