
Personalized Federated Learning via Backbone Self-Distillation

Authors:
Wang, Pengju
Liu, Bochao
Zeng, Dan
Yan, Chenggang
Ge, Shiming
Publication Year:
2024

Abstract

In practical scenarios, federated learning frequently necessitates training personalized models for each client using heterogeneous data. This paper proposes a backbone self-distillation approach to facilitate personalized federated learning. In this approach, each client trains its local model and sends only the backbone weights to the server. These weights are then aggregated to create a global backbone, which is returned to each client for updating. However, the client's local backbone lacks personalization because of the common representation. To solve this problem, each client further performs backbone self-distillation, using the global backbone as a teacher and transferring its knowledge to update the local backbone. This process involves learning two components: a shared backbone for common representation and a private head for local personalization, which enables effective global knowledge transfer. Extensive experiments and comparisons with 12 state-of-the-art approaches demonstrate the effectiveness of our approach.
Comment: Published in ACM MMAsia 2023
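For illustration, the sketch below outlines one round of the scheme described in the abstract, assuming PyTorch-style models split into a shared backbone and a private head. This is not the authors' implementation; the FedAvg-style aggregation, the MSE feature-distillation loss, the `kd_weight` parameter, and all function names are illustrative assumptions.

```python
# Hypothetical sketch: one personalized-FL round with backbone self-distillation.
# Each client model is split into a shared `backbone` and a private `head`;
# aggregation rule, distillation loss, and names are assumptions, not the paper's code.
import copy
import torch
import torch.nn.functional as F


def aggregate_backbones(backbone_states, weights):
    """Weighted average of client backbone weights (FedAvg-style); `weights` sum to 1."""
    global_state = copy.deepcopy(backbone_states[0])
    for key in global_state:
        global_state[key] = sum(w * s[key] for w, s in zip(weights, backbone_states))
    return global_state


def client_update(backbone, head, global_backbone, loader, optimizer, kd_weight=0.5):
    """Local training: task loss on the private head plus self-distillation of the
    local backbone toward the global backbone's representation (teacher)."""
    global_backbone.eval()
    for x, y in loader:
        feats = backbone(x)                        # local (student) representation
        with torch.no_grad():
            teacher_feats = global_backbone(x)     # global (teacher) representation
        task_loss = F.cross_entropy(head(feats), y)
        kd_loss = F.mse_loss(feats, teacher_feats) # transfer global knowledge
        loss = task_loss + kd_weight * kd_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Only backbone weights are communicated to the server; the head stays private.
    return backbone.state_dict()
```

In this reading, personalization comes from the private head, which never leaves the client, while the distillation term keeps the local backbone close to the globally shared representation.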

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2409.15636
Document Type:
Working Paper