Preemptive Scheduling for Distributed Machine Learning Jobs in Edge-Cloud Networks
- Author
- Wang, Ne; Zhou, Ruiting; Jiao, Lei; Zhang, Renli; Li, Bo; Li, Zongpeng
- Subjects
- MACHINE learning, COMPUTER scheduling, ASSIGNMENT problems (Programming), SMART cities, SCHEDULING, EDGE computing, ONLINE algorithms, TRAIN schedules
- Abstract
- Recent advances in 5G and edge computing enable rapid development and deployment of edge-cloud systems, which are ideal for delay-sensitive machine learning (ML) applications such as autonomous driving and smart cities. Distributed ML jobs often need to train a large model on enormous datasets, which can only be handled by deploying a distributed set of workers in an edge-cloud system. One common approach is to employ a parameter server (PS) architecture, in which training is carried out at multiple workers, while PSs are used for gradient aggregation and model updates. In this architecture, one of the fundamental challenges is how to dispatch ML jobs to workers and PSs such that the average job completion time (JCT) is minimized. In this work, we propose a novel online preemptive scheduling framework that decides the location and the execution time window of concurrent workers and PSs upon each job arrival. Specifically, our proposed scheduling framework consists of: i) a job dispatching and scheduling algorithm that assigns each ML job to workers and decides the schedule to train each data chunk; ii) a PS assignment algorithm that determines the placement of each PS. We prove theoretically that our proposed algorithm is $D_{\max}(1+1/\epsilon)$-competitive with $(1+\epsilon)$-speed augmentation, where $D_{\max}$ is the maximal number of data chunks in any job. Extensive testbed experiments and trace-driven simulations show that our algorithm can reduce the average JCT by up to 30% compared with state-of-the-art baselines. [ABSTRACT FROM AUTHOR]
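The abstract describes two components: a job dispatching/scheduling algorithm and a PS assignment algorithm. The sketch below is a minimal illustration of that general idea, not the paper's algorithm: it places workers greedily on the least-loaded servers, co-locates the PS with the largest worker group, and simulates preemptive shortest-remaining-processing-time scheduling over whole jobs rather than individual data chunks. All class names, server capacities, and timing parameters are hypothetical assumptions.

```python
# Illustrative sketch only (not the authors' algorithm): greedy worker/PS
# placement plus preemptive SRPT scheduling; all names and parameters are
# hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Job:
    job_id: int
    arrival: float
    chunks: int                 # number of data chunks to train
    chunk_time: float           # time to train one chunk on one worker
    workers: list = field(default_factory=list)   # server ids hosting workers
    ps: Optional[str] = None    # server id hosting the parameter server

class EdgeCloud:
    """Toy edge-cloud: each server offers a fixed number of worker slots."""
    def __init__(self, worker_slots):
        self.free = dict(worker_slots)   # server id -> free worker slots

    def dispatch(self, job, num_workers):
        """Place workers on the servers with the most free slots, then
        co-locate the PS with the largest worker group to limit traffic."""
        for _ in range(num_workers):
            s = max(self.free, key=self.free.get)
            if self.free[s] == 0:
                break
            self.free[s] -= 1
            job.workers.append(s)
        if job.workers:
            job.ps = max(set(job.workers), key=job.workers.count)
        return job

def schedule(jobs):
    """Event-driven preemptive SRPT: at every arrival or completion, run the
    job with the least remaining training time (chunks * chunk_time / workers).
    Returns a map of job_id -> finish time."""
    pending = sorted(jobs, key=lambda j: j.arrival)
    remaining, finish, t, i = {}, {}, 0.0, 0
    while i < len(pending) or remaining:
        while i < len(pending) and pending[i].arrival <= t:
            j = pending[i]
            remaining[j.job_id] = j.chunks * j.chunk_time / max(len(j.workers), 1)
            i += 1
        if not remaining:            # idle until the next job arrives
            t = pending[i].arrival
            continue
        jid = min(remaining, key=remaining.get)          # SRPT choice
        horizon = pending[i].arrival if i < len(pending) else float("inf")
        run = min(remaining[jid], horizon - t)           # run until done or preempted
        t += run
        remaining[jid] -= run
        if remaining[jid] <= 1e-9:
            finish[jid] = t
            del remaining[jid]
    return finish

if __name__ == "__main__":
    cluster = EdgeCloud({"edge-1": 2, "edge-2": 2, "cloud-1": 8})
    jobs = [Job(1, 0.0, chunks=8, chunk_time=1.0),
            Job(2, 1.0, chunks=2, chunk_time=1.0)]
    for j in jobs:
        cluster.dispatch(j, num_workers=2)
    finish = schedule(jobs)
    arrivals = {j.job_id: j.arrival for j in jobs}
    avg_jct = sum(finish[jid] - arrivals[jid] for jid in finish) / len(finish)
    print(f"finish times: {finish}, average JCT: {avg_jct:.2f}")
```

In this toy run, the shorter job that arrives at time 1.0 preempts the longer running job, which is the qualitative behavior a preemptive JCT-minimizing scheduler aims for; the paper's actual algorithm additionally schedules at data-chunk granularity and carries the stated competitive-ratio guarantee.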
- Published
- 2022