1. Online QoS/QoE-Driven SFC Orchestration Leveraging a DRL Approach in SDN/NFV Enabled Networks.
- Author
- Escheikh, Mohamed and Taktak, Wiem
- Subjects
REINFORCEMENT learning, DEEP reinforcement learning, SOFTWARE-defined networking, STATISTICAL decision making, 5G networks
- Abstract
The proliferation of an ever-increasing number of highly heterogeneous smart devices and the emergence of a wide range of diverse applications in 5G mobile network ecosystems raise a new set of challenges related to agile and automated service orchestration and management. Fully leveraging key enabling technologies such as software-defined networking, network function virtualization and machine learning in this environment is of paramount importance to address service function chaining (SFC) orchestration according to user requirements and network constraints. To meet these challenges, we propose in this paper a deep reinforcement learning (DRL) approach to the online quality of experience (QoE)/quality of service (QoS) aware SFC orchestration problem. The objective is to achieve intelligent, elastic and automated virtual network function deployment that optimizes QoE while respecting QoS constraints. We implement the DRL approach through a Double Deep Q-Network (DDQN) algorithm. We carry out experimental simulations to study agent behavior along a learning phase followed by a testing and evaluation phase for two physical substrate network scales. The testing phase is defined as the last 100 runs of the learning phase, during which the agent reaches on average a QoE threshold score (QoE_Th-Sc). In a first set of experiments, we highlight the impact of hyper-parameter tuning (Learning Rate (LR) and Batch Size (BS)) on solving the sequential decision problem related to SFC orchestration for a given QoE_Th-Sc. This investigation leads us to choose the most suitable (LR, BS) pair enabling acceptable learning quality. In a second set of experiments, we examine the DRL agent's capacity to enhance learning quality while meeting a performance-convergence trade-off. This is achieved by progressively increasing QoE_Th-Sc. [ABSTRACT FROM AUTHOR]
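The abstract names a Double Deep Q-Network (DDQN) as the DRL algorithm, but the record carries no implementation details. Purely as an illustration of the DDQN idea, the minimal NumPy sketch below shows how the DDQN learning target is formed, with toy state/action dimensions and linear stand-ins for the online and target networks; all names, sizes and values are assumptions, not taken from the paper.

```python
import numpy as np

# Toy dimensions and linear Q-functions; stand-ins for the paper's neural networks.
N_STATE, N_ACTION = 8, 4
rng = np.random.default_rng(0)
W_online = rng.normal(size=(N_ACTION, N_STATE))  # "online" network weights
W_target = W_online.copy()                       # "target" network weights

def q_online(state):
    """Q-values of the online network for all actions."""
    return W_online @ state

def q_target(state):
    """Q-values of the target network for all actions."""
    return W_target @ state

def ddqn_target(reward, next_state, done, gamma=0.99):
    """Double DQN target: the online network selects the greedy action,
    the target network evaluates it, which reduces overestimation bias."""
    if done:
        return reward
    a_star = int(np.argmax(q_online(next_state)))                 # action selection
    return reward + gamma * float(q_target(next_state)[a_star])   # action evaluation

# Example transition with random toy data
next_state = rng.normal(size=N_STATE)
print(ddqn_target(reward=1.0, next_state=next_state, done=False))
```

In the SFC orchestration setting described by the abstract, the state would encode substrate-network and service-request features and the actions would correspond to VNF deployment decisions, with hyper-parameters such as the learning rate and batch size tuned as the authors report.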
- Published
- 2024