1. Representation and Reinforcement Learning for Task Scheduling in Edge Computing
- Author
- Yongjian You, Zhiqing Tang, Xiaojie Zhou, Weijia Jia, and Wenmian Yang
- Subjects
- Service-level agreement, Information Systems and Management, Theoretical computer science, Computer science, Server, Dimensionality reduction, Reinforcement learning, Energy consumption, Feature learning, Edge computing, Information Systems, Scheduling (computing)
- Abstract
- Recently, many deep reinforcement learning (DRL)-based task scheduling algorithms have been applied in edge computing (EC) to reduce energy consumption. Unlike existing algorithms, which consider a fixed and small number of edge nodes (servers) and tasks, this paper proposes a representation model with a DRL-based algorithm that adapts to dynamic changes in nodes and tasks and mitigates the curse of dimensionality in DRL caused by their massive scale. Specifically, 1) representation learning models are applied to describe the different nodes and tasks in EC, i.e., nodes and tasks are mapped to corresponding vector sub-spaces to reduce the dimensionality and store the vector space efficiently; 2) within the reduced-dimension space, a DRL-based algorithm is employed to learn the vector representations of nodes and tasks and make scheduling decisions; 3) experiments are conducted on a real-world data set, and the results show that the proposed representation model with the DRL-based algorithm outperforms the baselines by 18.04% and 9.94% on average in terms of energy consumption and service level agreement violation (SLAV), respectively.
- Published
- 2022
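
The abstract describes a two-stage pipeline: learn compact vector representations of edge nodes and tasks, then let a DRL policy make scheduling decisions in that reduced space. The sketch below is a minimal illustration of that general idea, not the authors' published method; the embedding dimensions, feature choices, network shapes, and the pairwise node-task scoring scheme are all assumptions introduced here for illustration (PyTorch).

```python
# Illustrative sketch only: embed variable numbers of edge nodes and tasks into a
# fixed low-dimensional space, then score candidate nodes for a task with a small
# policy head. All dimensions and the architecture are assumptions, not the paper's.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Map raw per-node or per-task feature vectors into a shared low-dim space."""
    def __init__(self, in_dim: int, embed_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, embed_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class SchedulingPolicy(nn.Module):
    """Score each edge node for a given task; sampling the scores gives the action."""
    def __init__(self, node_feat_dim: int, task_feat_dim: int, embed_dim: int = 16):
        super().__init__()
        self.node_enc = Encoder(node_feat_dim, embed_dim)
        self.task_enc = Encoder(task_feat_dim, embed_dim)
        self.scorer = nn.Sequential(
            nn.Linear(2 * embed_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, node_feats: torch.Tensor, task_feat: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, node_feat_dim); task_feat: (task_feat_dim,)
        nodes = self.node_enc(node_feats)                       # (num_nodes, embed_dim)
        task = self.task_enc(task_feat).expand_as(nodes)        # broadcast task embedding
        logits = self.scorer(torch.cat([nodes, task], dim=-1))  # (num_nodes, 1)
        return logits.squeeze(-1)                               # one score per node


if __name__ == "__main__":
    # Toy usage: 8 candidate edge nodes with 5 features each, one task with 4 features.
    policy = SchedulingPolicy(node_feat_dim=5, task_feat_dim=4)
    node_feats = torch.rand(8, 5)
    task_feat = torch.rand(4)
    probs = torch.softmax(policy(node_feats, task_feat), dim=0)
    action = torch.multinomial(probs, 1).item()  # node chosen to run the task
    print(f"schedule task on node {action}")
```

In a DRL training loop, the sampled action would be executed in the EC environment and the policy updated from a reward reflecting energy consumption and SLAV; because the scorer operates per node, the same network handles a changing number of nodes and tasks.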