1. Adaptive Asynchronous Federated Learning in Resource-Constrained Edge Computing
- Authors
Hongli Xu, Chen Qian, Jianchun Liu, Yang Xu, He Huang, Jinyang Huang, and Lun Wang
- Subjects
Resource (project management), Computer Networks and Communications, Asynchronous communication, Computer science, Distributed computing, Server, Reinforcement learning, Enhanced Data Rates for GSM Evolution, Electrical and Electronic Engineering, Software, Edge computing, Data modeling, Task (project management)
- Abstract
Federated learning (FL) has been widely adopted to train machine learning models over massive data in edge computing. However, FL faces critical challenges in this setting, e.g., data imbalance, edge dynamics, and resource constraints. Existing FL solutions cannot cope well with data imbalance or edge dynamics, and may incur high resource costs. In this paper, we propose an adaptive asynchronous federated learning (AAFL) mechanism. To deal with edge dynamics, the parameter server aggregates locally updated models from only a certain fraction $\alpha$ of all edge nodes in each epoch. Moreover, the system can intelligently vary the number of locally updated models used for global model aggregation across epochs according to network conditions. We then propose experience-driven algorithms based on deep reinforcement learning (DRL) to adaptively determine the optimal value of $\alpha$ in each epoch for two cases of AAFL, a single learning task and multiple learning tasks, so as to reduce training completion time under resource constraints. Extensive experiments on classical models and datasets show the high effectiveness of the proposed algorithms. Specifically, AAFL can reduce completion time by about 55\% and improve learning accuracy by 18\% under resource constraints, compared with state-of-the-art solutions.
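The core aggregation rule described in the abstract (averaging only the first-arriving fraction $\alpha$ of local updates each epoch) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `aggregate_alpha_fraction`, the assumption that updates are ordered by arrival time, and the plain unweighted mean are all simplifications chosen here for clarity.

```python
import numpy as np

def aggregate_alpha_fraction(local_updates, alpha):
    """Average the alpha-fraction of local model updates that arrived first.

    local_updates: list of parameter vectors, assumed sorted by arrival time
                   (fastest edge nodes first); alpha in (0, 1].
    Returns the new global model as the mean of the k fastest updates,
    where k = ceil(alpha * number_of_nodes), at least 1.
    """
    k = max(1, int(np.ceil(alpha * len(local_updates))))
    fastest = local_updates[:k]          # straggler updates beyond k are ignored
    return np.mean(fastest, axis=0)      # simple unweighted average

# Toy example: 10 edge nodes, each sending a 4-parameter update.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(10)]
new_global = aggregate_alpha_fraction(updates, alpha=0.3)  # averages 3 fastest
```

In the paper's DRL-driven variant, $\alpha$ itself would be chosen per epoch by an agent observing network conditions; here it is fixed only to keep the sketch self-contained.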
- Published
- 2023