
Nonstationary Reinforcement Learning: The Blessing of (More) Optimism.

Authors :
Cheung, Wang Chi
Simchi-Levi, David
Zhu, Ruihao
Source :
Management Science; Oct 2023, Vol. 69 Issue 10, p5722-5739, 18p
Publication Year :
2023

Abstract

Motivated by operations research applications, such as inventory control and real-time bidding, we consider undiscounted reinforcement learning in Markov decision processes under model uncertainty and temporal drifts. In this setting, both the latent reward and state transition distributions are allowed to evolve over time, as long as their respective total variations, quantified by suitable metrics, do not exceed certain variation budgets. We first develop the sliding window upper confidence bound for reinforcement learning with confidence-widening (SWUCRL2-CW) algorithm and establish its dynamic regret bound when the variation budgets are known. In addition, we propose the bandit-over-reinforcement learning (BORL) algorithm to adaptively tune the SWUCRL2-CW algorithm and achieve the same dynamic regret bound in a parameter-free manner (i.e., without knowing the variation budgets). Finally, we conduct numerical experiments to show that our proposed algorithms achieve superior empirical performance compared with existing algorithms. Notably, under nonstationarity, historical data samples may falsely indicate that state transitions rarely happen. This presents a significant challenge when one tries to apply the conventional optimism-in-the-face-of-uncertainty principle to achieve a low dynamic regret bound. We overcome this challenge by proposing a novel confidence-widening technique that incorporates additional optimism into our learning algorithms. To extend our theoretical findings, we demonstrate, in the context of single-item inventory control with lost sales, fixed cost, and zero lead time, how one can leverage special structures of the state transition distributions to achieve an improved dynamic regret bound in time-varying demand environments.

This paper was accepted by J. George Shanthikumar, data science.

Funding: The authors acknowledge support from the Massachusetts Institute of Technology (MIT) Data Science Laboratory and the MIT–IBM partnership in artificial intelligence. W. C. Cheung acknowledges support from the Singapore Ministry of Education [Tier 2 Grant MOE-T2EP20121-0012].

Supplemental Material: The data files and online appendix are available at https://doi.org/10.1287/mnsc.2023.4704.
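To make the sliding-window and confidence-widening ideas from the abstract concrete, below is a minimal, hypothetical sketch, not the paper's implementation: transition distributions are estimated from only the most recent W observations, and a UCRL2-style L1 confidence radius is then enlarged by a widening parameter eta to inject extra optimism against temporal drift. The class name, the window size W, the parameters delta and eta, and the exact radius constant are all illustrative assumptions.

```python
# Hypothetical sketch of sliding-window estimation with confidence widening.
# Only the most recent `window` transitions are kept, so stale samples from
# an earlier regime are eventually discarded; the radius is widened by eta.
from collections import deque
import numpy as np

class SlidingWindowTransitionEstimator:
    def __init__(self, num_states: int, window: int):
        self.num_states = num_states
        self.window = window
        # Deque automatically drops observations older than `window` steps.
        self.buffer = deque(maxlen=window)  # entries: (state, action, next_state)

    def observe(self, s: int, a: int, s_next: int) -> None:
        self.buffer.append((s, a, s_next))

    def estimate(self, s: int, a: int) -> np.ndarray:
        """Empirical next-state distribution using only windowed samples."""
        counts = np.zeros(self.num_states)
        for (si, ai, sn) in self.buffer:
            if si == s and ai == a:
                counts[sn] += 1
        n = counts.sum()
        if n == 0:
            # No recent data: fall back to a uniform (maximally uncertain) guess.
            return np.full(self.num_states, 1.0 / self.num_states)
        return counts / n

    def widened_radius(self, s: int, a: int, delta: float, eta: float) -> float:
        """UCRL2-style L1 confidence radius from windowed counts, plus eta
        of additional optimism to guard against drifting transitions."""
        n = max(1.0, sum(1 for (si, ai, _) in self.buffer if si == s and ai == a))
        radius = np.sqrt(2.0 * self.num_states * np.log(2.0 * self.window / delta) / n)
        return radius + eta  # confidence widening: strictly enlarge the set

# Example: 3-state problem, window of 50 recent transitions.
est = SlidingWindowTransitionEstimator(num_states=3, window=50)
rng = np.random.default_rng(0)
for _ in range(200):
    est.observe(0, 0, rng.integers(3))
print(est.estimate(0, 0), est.widened_radius(0, 0, delta=0.05, eta=0.1))
```

In this sketch, eta trades off tightness of the confidence set against robustness: eta = 0 recovers the usual windowed confidence region, while a larger eta adds the extra optimism that, per the abstract, counters windowed samples that falsely suggest certain state transitions rarely happen.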

Details

Language :
English
ISSN :
0025-1909
Volume :
69
Issue :
10
Database :
Complementary Index
Journal :
Management Science
Publication Type :
Academic Journal
Accession Number :
173037913
Full Text :
https://doi.org/10.1287/mnsc.2023.4704