FedFed: Feature Distillation against Data Heterogeneity in Federated Learning
- Source :
- NeurIPS 2023
- Publication Year :
- 2023
Abstract
- Federated learning (FL) typically faces data heterogeneity, i.e., distribution shifts among clients. Sharing clients' information has shown great potential in mitigating data heterogeneity, yet it incurs a dilemma between preserving privacy and promoting model performance. To alleviate this dilemma, we raise a fundamental question: is it possible to share partial features in the data to tackle data heterogeneity? In this work, we give an affirmative answer by proposing a novel approach called Federated Feature distillation (FedFed). Specifically, FedFed partitions data into performance-sensitive features (i.e., those contributing greatly to model performance) and performance-robust features (i.e., those contributing little to model performance). The performance-sensitive features are shared globally to mitigate data heterogeneity, while the performance-robust features are kept locally. FedFed enables clients to train models over both local and shared data. Comprehensive experiments demonstrate the efficacy of FedFed in promoting model performance.
- Comment :
- 32 pages
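A minimal sketch of the partition-and-share idea the abstract describes, assuming a toy coordinate-index split. The partition_features helper, the Client class, and the pooling step are illustrative assumptions, not the paper's actual partitioning mechanism (which the abstract does not specify):

```python
# Illustrative sketch only: the paper's real partition is learned, not a
# fixed index split. Names below are hypothetical.
import numpy as np

def partition_features(x: np.ndarray, sensitive_frac: float = 0.5):
    """Split a feature vector into a performance-sensitive part and a
    performance-robust part (here: a toy split by coordinate index)."""
    k = int(len(x) * sensitive_frac)
    return x[:k], x[k:]  # (sensitive, robust)

class Client:
    def __init__(self, data: list):
        self.data = data

    def extract_shared_features(self) -> list:
        """Return only the performance-sensitive parts for global sharing;
        the performance-robust parts never leave the client."""
        shared = []
        for x in self.data:
            sensitive, _robust = partition_features(x)
            shared.append(sensitive)
        return shared

# Each client contributes its performance-sensitive features to a global
# pool; clients then train over their local data plus this shared pool.
clients = [Client([np.random.randn(8) for _ in range(4)]) for _ in range(3)]
global_pool = [f for c in clients for f in c.extract_shared_features()]
print(f"globally shared feature vectors: {len(global_pool)}")
```

The design point the sketch highlights is the asymmetry: only the features that matter most for model performance cross the client boundary, while the rest stay local.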
- Subjects :
- Computer Science - Machine Learning
Details
- Database :
- arXiv
- Journal :
- NeurIPS 2023
- Publication Type :
- Report
- Accession number :
- edsarx.2310.05077
- Document Type :
- Working Paper