1. Cloud Task Scheduling Based on Proximal Policy Optimization Algorithm for Lowering Energy Consumption of Data Center.
- Author
Yongquan Yang, Cuihua He, Bo Yin, Zhiqiang Wei, and Bowei Hong
- Subjects
SERVER farms (Computer network management), ENERGY consumption, REINFORCEMENT learning, MATHEMATICAL optimization, MACHINE learning, HEURISTIC algorithms
- Abstract
As a part of cloud computing technology, cloud task scheduling algorithms have an important influence on data center operation. In our earlier work, we proposed DeepEnergyJS, designed on the original policy gradient reinforcement learning algorithm, and verified its effectiveness through simulation experiments. In this study, we use the Proximal Policy Optimization (PPO) algorithm to update DeepEnergyJS to DeepEnergyJSV2.0. First, we verify the convergence of the PPO algorithm on the Alibaba Cluster Data V2018 dataset. Then we compare it with the policy gradient reinforcement learning algorithm in terms of convergence rate, converged value, and stability. The results indicate that PPO performs better on both the training and test data sets than the policy gradient algorithm, as well as general heuristic algorithms such as First Fit, Random, and Tetris. DeepEnergyJSV2.0 improves energy efficiency over DeepEnergyJS by about 7.814%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
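
The abstract centers on replacing a plain policy gradient scheduler with PPO. The paper's actual implementation is not reproduced in this record, so the following is only a minimal sketch of the PPO clipped-surrogate update applied to a host-selection policy; the class name, state dimensions, and reward shaping are illustrative assumptions, not details from DeepEnergyJSV2.0.

```python
# Hypothetical sketch: PPO clipped-surrogate update for a task-to-host policy.
# Names (TaskSchedulerPolicy, state_dim, n_hosts) are illustrative assumptions.
import torch
import torch.nn as nn

class TaskSchedulerPolicy(nn.Module):
    """Maps a task/cluster state vector to a distribution over candidate hosts."""
    def __init__(self, state_dim: int, n_hosts: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(),
            nn.Linear(64, n_hosts),
        )

    def forward(self, states: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.net(states))

def ppo_loss(policy, states, actions, old_log_probs, advantages, clip_eps=0.2):
    """Clipped surrogate objective (Schulman et al., 2017), to be minimized."""
    dist = policy(states)
    ratio = torch.exp(dist.log_prob(actions) - old_log_probs)  # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

if __name__ == "__main__":
    # Dummy batch; in an energy-aware scheduler the advantages would be derived
    # from a reward such as negative data-center power consumption.
    policy = TaskSchedulerPolicy(state_dim=10, n_hosts=4)
    opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
    states = torch.randn(32, 10)
    actions = torch.randint(0, 4, (32,))
    old_log_probs = policy(states).log_prob(actions).detach()
    advantages = torch.randn(32)
    loss = ppo_loss(policy, states, actions, old_log_probs, advantages)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The clipping of the probability ratio is what distinguishes PPO from the original policy gradient update the abstract compares against: it bounds how far a single update can move the policy, which is consistent with the faster and more stable convergence the authors report.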