
Multibranch multilevel federated learning for a better feature extraction and a plug-and-play dynamic-adjusting double flow personalization approach.

Authors :
Ren, Maoye
Yu, Xinhai
Source :
Applied Intelligence; Jun2023, Vol. 53 Issue 11, p13956-13971, 16p
Publication Year :
2023

Abstract

Federated learning (FL) is an emerging technique for preserving the privacy of users' data during training via a distributed machine learning approach. Previously, a client-edge-cloud hierarchical federated learning (HierFL) system was proposed to reduce the long communication latency between the client and the cloud. However, HierFL performs poorly at extracting good features from client data, a common drawback of traditional FL architectures. HierFL also constrains its hierarchy to three levels (server-edge-client), which is very restrictive. This paper proposes that a specifically designed FL architecture can naturally have an excellent effect on feature extraction. Specifically, this paper proposes a multibranch multilevel federated learning (MBMLFL) framework that performs better at feature extraction, in which each branch and level has its own specific effect. The proposed framework is also friendlier and further enhances privacy. By extending FedAvg, we design an M2DCFedAvg algorithm for the framework to optimize its objective function in a distributed manner, and through numerous experiments and analyses, we propose a general epoch-selection principle for all FL methods: the 1dT-n principle. Experiments with various data distributions are performed on MBMLFL to comprehensively study its characteristics. Moreover, to complete the personalization ability of our framework, we propose a plug-and-play dynamic-adjusting double flow personalization approach (DADFPA) for MBMLFL, which further enhances its ability so that it comprehensively and adaptively satisfies the generalization and personalization needs of clients. Experiments show that MBMLFL and DADFPA improve their baselines by an average of 5.26 and 2.24 percentage points, respectively, demonstrating their excellent performance. [ABSTRACT FROM AUTHOR]
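The abstract builds on FedAvg and client-edge-cloud hierarchical aggregation. The following is a minimal sketch of standard two-tier hierarchical weighted averaging in the HierFL/FedAvg style; it is not the paper's M2DCFedAvg algorithm, and the function names and data layout are illustrative assumptions.

```python
# Sketch of hierarchical FedAvg-style aggregation (assumed structure, not
# the paper's M2DCFedAvg): clients send models to their edge server, each
# edge averages them weighted by client data sizes, then the cloud averages
# the edge models weighted by total edge data sizes.

def weighted_average(models, weights):
    """Element-wise average of parameter vectors, weighted by data sizes."""
    total = sum(weights)
    dim = len(models[0])
    return [sum(w * m[i] for m, w in zip(models, weights)) / total
            for i in range(dim)]

def hierarchical_round(edges):
    """One communication round: clients -> edges -> cloud.

    `edges` is a list of edge servers; each edge is a list of
    (client_params, num_samples) pairs.
    """
    edge_models, edge_sizes = [], []
    for clients in edges:
        params = [p for p, _ in clients]
        sizes = [n for _, n in clients]
        edge_models.append(weighted_average(params, sizes))
        edge_sizes.append(sum(sizes))
    # Cloud-level aggregation of the per-edge models.
    return weighted_average(edge_models, edge_sizes)

# Toy example: two edges, each with two clients holding 2-parameter "models".
edges = [
    [([1.0, 2.0], 10), ([3.0, 4.0], 30)],
    [([5.0, 6.0], 20), ([7.0, 8.0], 40)],
]
global_model = hierarchical_round(edges)  # -> [4.8, 5.8]
```

Because both levels weight by sample counts, the two-tier result equals a direct weighted average over all clients; the hierarchy's benefit is reduced client-to-cloud communication, as the abstract notes.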

Details

Language :
English
ISSN :
0924-669X
Volume :
53
Issue :
11
Database :
Complementary Index
Journal :
Applied Intelligence
Publication Type :
Academic Journal
Accession number :
164005511
Full Text :
https://doi.org/10.1007/s10489-022-04193-w