
DFDG: Data-Free Dual-Generator Adversarial Distillation for One-Shot Federated Learning

Authors:
Luo, Kangyang
Wang, Shuai
Fu, Yexuan
Shao, Renrong
Li, Xiang
Lan, Yunshi
Gao, Ming
Shu, Jinlong
Publication Year:
2024

Abstract

Federated Learning (FL) is a distributed machine learning scheme in which clients jointly train a global model by sharing model information rather than their private datasets. In light of concerns about communication and privacy, one-shot FL, which requires only a single communication round, has emerged as a promising solution. However, existing one-shot FL methods either require public datasets, focus on model-homogeneous settings, or distill limited knowledge from local models, making it difficult or even impractical to train a robust global model. To address these limitations, we propose a new data-free dual-generator adversarial distillation method (DFDG) for one-shot FL, which can explore a broader training space of the local models by training dual generators. DFDG is executed in an adversarial manner and comprises two parts: dual-generator training and dual-model distillation. In dual-generator training, we examine each generator with respect to fidelity, transferability, and diversity to ensure its utility, and we additionally tailor a cross-divergence loss to reduce the overlap of the dual generators' output spaces. In dual-model distillation, the trained dual generators work together to provide training data for updating the global model. Finally, extensive experiments on various image classification tasks show that DFDG achieves significant accuracy gains over SOTA baselines.

Comment: Accepted by ICDM 2024 main conference (long paper). arXiv admin note: substantial text overlap with arXiv:2309.13546
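
To make the two-stage procedure concrete, the following is a minimal, self-contained PyTorch sketch of one data-free dual-generator adversarial distillation round, reconstructed from the abstract alone. It is not the authors' implementation: the Generator architecture, the loss weights, and the exact forms of the fidelity, transferability, diversity, and cross-divergence terms are all illustrative assumptions.

# Hypothetical sketch of one data-free dual-generator adversarial distillation
# round, reconstructed from the abstract. Module names, architectures, and loss
# weights are illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps class-conditioned noise to a synthetic 28x28 image."""
    def __init__(self, z_dim=64, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )

    def forward(self, z, y):
        h = torch.cat([z, self.embed(y)], dim=1)
        return self.net(h).view(-1, 1, 28, 28)

def kl(p_logits, q_logits):
    # Batch-averaged KL(softmax(p) || softmax(q)).
    return F.kl_div(F.log_softmax(q_logits, dim=1),
                    F.softmax(p_logits, dim=1), reduction="batchmean")

def adversarial_round(gens, teacher, student, opt_g, opt_s,
                      n_classes=10, steps_g=5, steps_s=10, batch=64, z_dim=64):
    """One round: train the dual generators, then distill into the student."""
    # --- dual-generator training ---
    for _ in range(steps_g):
        opt_g.zero_grad()
        student_logits = []
        loss = 0.0
        for g in gens:
            z = torch.randn(batch, z_dim)
            y = torch.randint(0, n_classes, (batch,))
            x = g(z, y)
            t_logits, s_logits = teacher(x), student(x)
            fidelity = F.cross_entropy(t_logits, y)      # teacher confident on the target label
            transferability = -kl(t_logits, s_logits)    # maximise teacher/student disagreement
            diversity = -x.std()                         # crude stand-in for a diversity term
            loss = loss + fidelity + transferability + 0.1 * diversity
            student_logits.append(s_logits)
        # Cross-divergence-style term (assumption): push the two generators'
        # outputs, as seen by the student, away from each other.
        loss = loss - 0.5 * kl(student_logits[0], student_logits[1])
        loss.backward()
        opt_g.step()
    # --- dual-model distillation ---
    for _ in range(steps_s):
        opt_s.zero_grad()
        loss = 0.0
        for g in gens:
            z = torch.randn(batch, z_dim)
            y = torch.randint(0, n_classes, (batch,))
            x = g(z, y).detach()                              # generators are frozen here
            loss = loss + kl(teacher(x).detach(), student(x)) # student matches the teacher
        loss.backward()
        opt_s.step()

if __name__ == "__main__":
    # Stand-ins: in DFDG the "teacher" would be the ensemble of clients' local models.
    teacher = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    for p in teacher.parameters():
        p.requires_grad_(False)                               # local models stay fixed
    student = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    gens = nn.ModuleList([Generator(), Generator()])
    opt_g = torch.optim.Adam(gens.parameters(), lr=1e-3)
    opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
    adversarial_round(gens, teacher, student, opt_g, opt_s)

The two loops mirror the structure described in the abstract: the generators are pushed toward samples on which the teacher is confident but the student disagrees, while the cross-divergence-style term keeps their output spaces apart; the student is then distilled on samples drawn from both generators.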

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.07734
Document Type:
Working Paper