Joint Power Control and Channel Allocation for Interference Mitigation Based on Reinforcement Learning
- Authors
Yuan Xing, Guofeng Zhao, Chuan Xu, Yong Li, Zhenzhen Han, and Shui Yu
- Subjects
Reinforcement learning, power control, channel allocation, interference mitigation, throughput, channel state information, wireless LAN
- Abstract
In dense Wireless Local Area Networks (WLANs), high-density Access Points (APs) cause severe interference that seriously degrades the user experience, resulting in lower throughput and poor connection quality. Due to the heavy computational workload raised by sizable networking systems and the difficulty of estimating instantaneous Channel State Information (CSI), existing works struggle to solve the interference problem. In this paper, we propose a Joint Power control and Channel allocation based on Reinforcement Learning (JPCRL) algorithm, combined with statistical CSI, to reduce interference adaptively. First, we analyze the correlation between transmit power and channel, and formulate interference optimization as a Mixed Integer Nonlinear Programming (MINLP) problem. Second, using the statistical CSI method, we take the power and channel states as the state and action spaces and the overall throughput increment as the reward function of Q-learning, and obtain the optimal joint optimization strategy through off-line training. Moreover, because the periodic reinforcement learning process consumes computing resources, we design an event-driven Q-learning mechanism that triggers online learning to refresh the optimal policy only when an event-driven condition is met, reducing the consumption of computing resources. The evaluation results show that the proposed algorithm effectively improves throughput compared with existing schemes.
- Published
- 2019
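
The Q-learning formulation described in the abstract can be sketched in miniature: a tabular agent whose state is the joint (power level, channel) assignment of the APs, whose actions reassign one AP's pair, and whose reward is the overall throughput increment. This is only an illustrative toy, not the paper's JPCRL algorithm; the AP count, power levels, channel gains, and noise figure are all assumed values, and the statistical CSI is stood in for by fixed average gains.

```python
import numpy as np

# Toy sketch of joint power/channel Q-learning (assumed parameters,
# not values from the paper): 2 APs, 3 power levels, 2 channels.
rng = np.random.default_rng(0)

POWER_LEVELS = [0.1, 0.5, 1.0]   # transmit power in watts (assumed)
CHANNELS = [0, 1]                # orthogonal channels (assumed)
NOISE = 1e-3                     # noise power (assumed)
DIRECT_GAIN, CROSS_GAIN = 1.0, 0.3   # statistical-CSI stand-ins

def throughput(config):
    """Sum of Shannon rates; interference only between co-channel APs."""
    total = 0.0
    for i, (p_i, c_i) in enumerate(config):
        interference = sum(CROSS_GAIN * p_j
                           for j, (p_j, c_j) in enumerate(config)
                           if j != i and c_j == c_i)
        total += np.log2(1.0 + DIRECT_GAIN * p_i / (NOISE + interference))
    return total

# States: all joint assignments. Actions: reassign one AP's (power, channel).
options = [(p, c) for p in POWER_LEVELS for c in CHANNELS]
states = [(a, b) for a in options for b in options]
state_index = {s: i for i, s in enumerate(states)}
actions = [(ap, opt) for ap in range(2) for opt in options]

Q = np.zeros((len(states), len(actions)))
alpha, gamma, eps = 0.5, 0.9, 0.3

def step(state, action):
    ap, opt = action
    cfg = list(state)
    cfg[ap] = opt
    nxt = tuple(cfg)
    return nxt, throughput(nxt) - throughput(state)  # throughput increment

# Off-line training, as in the abstract's second step.
for _ in range(500):
    s = states[rng.integers(len(states))]
    for _ in range(20):
        a = (rng.integers(len(actions)) if rng.random() < eps
             else int(np.argmax(Q[state_index[s]])))
        s2, r = step(s, actions[a])
        Q[state_index[s], a] += alpha * (
            r + gamma * Q[state_index[s2]].max() - Q[state_index[s], a])
        s = s2

# Greedy rollout from a poor start: both APs co-channel at low power.
s = ((0.1, 0), (0.1, 0))
for _ in range(5):
    s, _ = step(s, actions[int(np.argmax(Q[state_index[s]]))])
print(s, round(throughput(s), 2))
```

In this tiny model the learned policy tends to separate the APs onto different channels and raise transmit power, which is the qualitative behavior the paper targets; the event-driven variant would re-run the online update only when a trigger condition (e.g. a throughput drop) is observed, rather than periodically.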