
Energy Efficient Task Scheduling in Fog Environment using Deep Reinforcement Learning Approach

Authors :
Ansar Yasar
Shashank Swarup
Elhadi M. Shakshuki
Source :
FNC/MobiSPC
Publication Year :
2021
Publisher :
Elsevier BV, 2021.

Abstract

Cloud users submit many types of tasks for various purposes, such as tasks that use the cloud as Infrastructure as a Service. Such tasks usually tolerate high latency. However, some tasks, such as requests from IoT devices, require an immediate response, i.e., ultra-low latency. It is not feasible to always depend on a distant cloud datacenter. Unlike traditional cloud computing, edge and fog nodes can be placed close to the IoT devices, providing a noticeable reduction in latency. The emerging fog computing technology is characterized by ultra-low latency response, which benefits several time-sensitive services and applications. Fog nodes are deployed in a less centralized manner. In the fog layer, the computing equipment dedicates part of its limited resources to processing IoT application tasks. Therefore, efficient utilization of computing resources is essential and requires an optimal and intelligent task-scheduling strategy. This paper focuses on scheduling IoT tasks in a fog-based environment with the aim of minimizing energy, cost, and service delay. Towards this end, a deep reinforcement learning algorithm, Clipped Double Deep Q-learning with target networks and experience replay, is proposed. To ensure there is no lag in using resources optimally, parallel queueing is utilized. An important problem in cloud (and fog) computing research is the long waiting time of tasks in the virtual machine queue; this paper proposes a dual-queue method to address it.
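The core of the proposed approach, Clipped Double Q-learning with target networks and experience replay, can be illustrated with a minimal sketch. The sketch below is not the paper's implementation: it uses tabular Q-estimates in place of deep networks, and the toy state/action space, reward model, and hyperparameters (N_STATES, N_ACTIONS, GAMMA, ALPHA) are all illustrative assumptions. What it does show is the characteristic update: the greedy next action comes from the online estimate, while the bootstrap target takes the minimum of two target copies, which curbs the overestimation bias of plain Q-learning.

```python
import random
from collections import deque

import numpy as np

# Toy setting (assumed, not from the paper): a few scheduler states and
# a few actions, e.g. candidate VMs/fog nodes for the next task.
N_STATES, N_ACTIONS = 5, 3
GAMMA, ALPHA = 0.9, 0.1

# Two tabular Q-estimates stand in for the paper's two deep networks.
q1 = np.zeros((N_STATES, N_ACTIONS))
q2 = np.zeros((N_STATES, N_ACTIONS))
# "Target" copies, periodically synchronised with the online estimates.
q1_target, q2_target = q1.copy(), q2.copy()

replay = deque(maxlen=1000)  # experience replay buffer


def clipped_double_q_update(batch):
    """One update: bootstrap from the MINIMUM of the two target estimates."""
    for s, a, r, s_next, done in batch:
        if done:
            y = r
        else:
            # Greedy next action from the online estimate, value from the
            # pessimistic (clipped) minimum of the two target estimates.
            a_next = int(np.argmax(q1[s_next]))
            y = r + GAMMA * min(q1_target[s_next, a_next],
                                q2_target[s_next, a_next])
        q1[s, a] += ALPHA * (y - q1[s, a])
        q2[s, a] += ALPHA * (y - q2[s, a])


def sync_targets():
    """Copy the online estimates into the target copies."""
    global q1_target, q2_target
    q1_target, q2_target = q1.copy(), q2.copy()


# Fill the buffer with random transitions from a made-up reward model:
# action s % N_ACTIONS is the "right" choice in state s.
rng = random.Random(0)
for _ in range(500):
    s = rng.randrange(N_STATES)
    a = rng.randrange(N_ACTIONS)
    r = 1.0 if a == s % N_ACTIONS else 0.0
    s_next = rng.randrange(N_STATES)
    replay.append((s, a, r, s_next, False))

# Train on minibatches sampled from the replay buffer,
# refreshing the target copies every few steps.
for step in range(200):
    clipped_double_q_update(rng.sample(replay, 32))
    if step % 10 == 0:
        sync_targets()
```

After training on this toy reward model, the greedy policy derived from `q1` prefers the rewarded action in each state; in a real deployment the tables would be replaced by neural networks and the transitions would come from the scheduler's interaction with the fog environment.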

Details

ISSN :
18770509
Volume :
191
Database :
OpenAIRE
Journal :
Procedia Computer Science
Accession number :
edsair.doi...........3916c2fb172c4b55e1638032229bee95