
Adaptive deep reinforcement learning for non-stationary environments.

Authors :
Zhu, Jin
Wei, Yutong
Kang, Yu
Jiang, Xiaofeng
Dullerud, Geir E.
Source :
SCIENCE CHINA Information Sciences; Oct 2022, Vol. 65, Issue 10, p1-15, 15p
Publication Year :
2022

Abstract

Deep reinforcement learning (DRL) is currently used to solve Markov decision process problems for which the environment is typically assumed to be stationary. In this paper, we propose an adaptive DRL method for non-stationary environments. First, we introduce model uncertainty and propose the self-adjusting deep Q-learning algorithm, which automatically rebalances exploration and exploitation as the environment changes. Second, we propose a feasible criterion for judging the appropriateness of the parameter setting of deep Q-networks, and we minimize the misjudgment probability based on the large deviation principle (LDP). The effectiveness of the proposed adaptive DRL method is illustrated through an advanced persistent threat (APT) attack simulation game. Experimental results show that, compared with classic deep Q-learning algorithms in non-stationary and stationary environments, the adaptive DRL method improves performance by at least 14.28% and 30.56%, respectively. [ABSTRACT FROM AUTHOR]
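The abstract's core idea, rebalancing exploration and exploitation when the environment shifts, can be illustrated with a minimal sketch. Everything below (class name, the TD-error spike heuristic, all thresholds) is an illustrative assumption, not the paper's actual self-adjusting deep Q-learning algorithm: epsilon decays while learning is stable, and is reset upward when the short-term average TD error spikes well above the long-term average, which we treat as evidence of a change in the environment's dynamics.

```python
from collections import deque


class SelfAdjustingEpsilon:
    """Illustrative exploration controller (hypothetical, not the paper's method).

    Tracks short- and long-horizon averages of the absolute TD error.
    A sudden spike of the short-term average relative to the long-term
    average is taken as a sign of non-stationarity, triggering renewed
    exploration; otherwise epsilon decays toward a floor (exploitation).
    """

    def __init__(self, eps=1.0, eps_min=0.05, decay=0.995,
                 short=10, long=100, spike_ratio=3.0):
        self.eps = eps
        self.eps_min = eps_min
        self.decay = decay
        self.spike_ratio = spike_ratio
        self.short_buf = deque(maxlen=short)   # recent TD errors
        self.long_buf = deque(maxlen=long)     # baseline TD errors

    def observe(self, td_error):
        """Update epsilon from one TD error; return the new epsilon."""
        err = abs(td_error)
        self.short_buf.append(err)
        self.long_buf.append(err)
        # Only test for a spike once the long baseline is populated.
        if len(self.long_buf) == self.long_buf.maxlen:
            short_avg = sum(self.short_buf) / len(self.short_buf)
            long_avg = sum(self.long_buf) / len(self.long_buf)
            if long_avg > 0 and short_avg / long_avg >= self.spike_ratio:
                self.eps = 1.0  # likely environment change: re-explore
                return self.eps
        # Stable phase: shift gradually toward exploitation.
        self.eps = max(self.eps_min, self.eps * self.decay)
        return self.eps
```

In a Q-learning loop this controller would be fed each step's TD error; the agent then acts greedily with probability 1 - eps. The spike-ratio test is one simple change detector; the paper's LDP-based criterion addresses the related question of bounding the probability of misjudging such a change.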

Details

Language :
English
ISSN :
1674-733X
Volume :
65
Issue :
10
Database :
Complementary Index
Journal :
SCIENCE CHINA Information Sciences
Publication Type :
Academic Journal
Accession number :
159484615
Full Text :
https://doi.org/10.1007/s11432-021-3347-8