
PBQ-Enhanced QUIC: QUIC with Deep Reinforcement Learning Congestion Control Mechanism.

Authors :
Zhang, Zhifei
Li, Shuo
Ge, Yiyang
Xiong, Ge
Zhang, Yu
Xiong, Ke
Source :
Entropy. Feb 2023, Vol. 25, Issue 2, p294. 15p.
Publication Year :
2023

Abstract

Currently, the most widely used protocol at the transport layer of computer networks for reliable transport is the Transmission Control Protocol (TCP). However, TCP has some problems, such as high handshake delay and head-of-line (HOL) blocking. To solve these problems, Google proposed the Quick UDP Internet Connections (QUIC) protocol, which supports 0-RTT and 1-RTT (round-trip time) handshakes and user-space configuration of the congestion control algorithm. So far, the QUIC protocol has been integrated with traditional congestion control algorithms, which are inefficient in many scenarios. To solve this problem, we propose an efficient congestion control mechanism based on deep reinforcement learning (DRL), i.e., proximal bandwidth-delay quick optimization (PBQ) for QUIC, which combines traditional bottleneck bandwidth and round-trip propagation time (BBR) with proximal policy optimization (PPO). In PBQ, the PPO agent outputs the congestion window (CWnd) and improves itself according to the network state, while BBR specifies the pacing rate of the client. We then apply the proposed PBQ to QUIC, forming a new version of QUIC, i.e., PBQ-enhanced QUIC. Experimental results show that the proposed PBQ-enhanced QUIC achieves much better throughput and RTT than existing popular versions of QUIC, such as QUIC with Cubic and QUIC with BBR. [ABSTRACT FROM AUTHOR]
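The split the abstract describes, where a learned policy chooses the congestion window while a BBR-style rule sets the pacing rate, can be sketched as follows. This is a hypothetical illustration under stated assumptions, not the paper's implementation: the PPO agent is replaced by a placeholder function, and all names and constants (`NetState`, the pacing gain of 1.25, the 2-packet floor) are illustrative assumptions.

```python
# Hypothetical sketch of the PBQ division of labor: a learned policy
# outputs the congestion window (CWnd), while a BBR-style rule sets
# the pacing rate from the estimated bottleneck bandwidth and minimum
# RTT. The real PPO agent is a trained neural policy; here it is
# stubbed with a heuristic purely to show the interface.

from dataclasses import dataclass

@dataclass
class NetState:
    bottleneck_bw: float   # bytes/sec, estimated from delivery rate
    min_rtt: float         # seconds, windowed minimum RTT
    loss_rate: float       # fraction of packets lost

def policy_cwnd(state: NetState) -> int:
    """Stand-in for the PPO agent: maps network state to a CWnd in bytes.
    This placeholder starts from the bandwidth-delay product (BDP) and
    backs off proportionally to the observed loss rate."""
    bdp = state.bottleneck_bw * state.min_rtt
    return max(2 * 1460, int(bdp * (1.0 - state.loss_rate)))  # floor: 2 packets

def bbr_pacing_rate(state: NetState, gain: float = 1.25) -> float:
    """BBR-style pacing: pace at the estimated bottleneck bandwidth
    scaled by a pacing gain (a gain > 1 probes for more bandwidth)."""
    return gain * state.bottleneck_bw

# Example: a 100 Mbit/s bottleneck (12.5 MB/s), 40 ms min RTT, 1% loss.
state = NetState(bottleneck_bw=12_500_000, min_rtt=0.04, loss_rate=0.01)
cwnd = policy_cwnd(state)       # bytes the agent allows in flight
rate = bbr_pacing_rate(state)   # bytes/sec the sender paces at
```

In this arrangement the agent only has to learn the window size; pacing smoothness comes from the BBR rule, which may be why the combination behaves well compared with either mechanism alone.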

Details

Language :
English
ISSN :
1099-4300
Volume :
25
Issue :
2
Database :
Academic Search Index
Journal :
Entropy
Publication Type :
Academic Journal
Accession number :
162117957
Full Text :
https://doi.org/10.3390/e25020294