Enabling Robust DRL-Driven Networking Systems via Teacher-Student Learning
- Authors
Xin Wang, Yuedong Xu, Lixiang Lin, Ying Zheng, Qingyang Duan, Tianqi Zhang, and Haoyu Chen
- Subjects
Artificial neural network, Computer Networks and Communications, Computer science, Feature extraction, Variance, Load balancing, Machine learning, Robustness, Resource allocation, Domain knowledge, Reinforcement learning, Artificial intelligence, Electrical and Electronic Engineering
- Abstract
The past few years have witnessed a surge of interest in deep reinforcement learning (DRL) for computer networks. With its extraordinary ability to extract features, DRL has the potential to re-engineer fundamental resource allocation problems in networking without relying on pre-programmed models or assumptions about dynamic environments. However, such black-box systems suffer from poor robustness, exhibiting high performance variance and poor tail performance. In this work, we propose a unified Teacher-Student learning framework that harnesses rich domain knowledge to improve robustness. Domain-specific algorithms, less performant but more trustworthy than DRL, play the role of teachers that provide advice at critical states; the student neural network is steered to maximize the expected reward as usual while also mimicking the teacher's advice. The Teacher-Student method comprises three modules: a confidence check module that locates wrong and risky decisions, a reward shaping module that designs a new update function to stimulate the learning of the student network, and a prioritized experience replay module that effectively utilizes the advised actions. We further implement our Teacher-Student framework in existing systems for video streaming (Pensieve), load balancing (DeepLB), and TCP congestion control (Aurora). Experimental results show that the proposed approach reduces the performance standard deviation of DeepLB by 37%; improves the 90th, 95th, and 99th percentile tail performance of Pensieve by 7.6%, 8.8%, and 10.7%, respectively; and accelerates the growth rate of Aurora by 2x at the initial stage while achieving more stable performance in dynamic environments.
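The abstract describes steering the student to maximize reward while mimicking the teacher's advice at critical states. The following minimal Python sketch illustrates that idea in the simplest possible form; the function names, the confidence threshold, and the imitation weight `LAMBDA` are all illustrative assumptions, not the paper's actual implementation, which operates on neural policies inside Pensieve, DeepLB, and Aurora.

```python
import math

LAMBDA = 0.5  # assumed weight balancing reward maximization against imitation


def confidence_check(student_probs, teacher_action, threshold=0.3):
    """Flag a state as critical when the student assigns low probability
    to the teacher's (trusted) action, i.e. a wrong or risky decision.
    The threshold value is a stand-in, not taken from the paper."""
    return student_probs[teacher_action] < threshold


def shaped_loss(student_probs, taken_action, advantage, teacher_action):
    """Policy-gradient loss plus an imitation term at critical states.

    The first term is a standard REINFORCE-style objective; the second
    nudges the student toward the teacher's advised action, but only
    when the confidence check fires."""
    pg_loss = -advantage * math.log(student_probs[taken_action])
    if confidence_check(student_probs, teacher_action):
        imitation = -math.log(student_probs[teacher_action])
        return pg_loss + LAMBDA * imitation
    return pg_loss
```

For example, when the student already agrees with the teacher the loss reduces to the plain policy-gradient term, while a large disagreement adds the weighted imitation penalty, which is one simple way to realize the "reward shaping at critical states" behavior the abstract outlines.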
- Published
2022