
Instance-Dependent Confidence and Early Stopping for Reinforcement Learning

Authors:
Khamaru, Koulik
Xia, Eric
Wainwright, Martin J.
Jordan, Michael I.
Publication Year:
2022

Abstract

Various algorithms for reinforcement learning (RL) exhibit dramatic variation in their convergence rates as a function of problem structure. Such problem-dependent behavior is not captured by worst-case analyses and has accordingly inspired a growing effort in obtaining instance-dependent guarantees and deriving instance-optimal algorithms for RL problems. This research has been carried out, however, primarily within the confines of theory, providing guarantees that explain ex post the performance differences observed. A natural next step is to convert these theoretical guarantees into guidelines that are useful in practice. We address the problem of obtaining sharp instance-dependent confidence regions for the policy evaluation problem and the optimal value estimation problem of an MDP, given access to an instance-optimal algorithm. As a consequence, we propose a data-dependent stopping rule for instance-optimal algorithms. The proposed stopping rule adapts to the instance-specific difficulty of the problem and allows for early termination for problems with favorable structure.
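To make the idea of a data-dependent stopping rule concrete, below is a minimal illustrative sketch in Python: TD(0) policy evaluation on a toy Markov reward process that terminates once a heuristic plug-in confidence width falls below a target accuracy. The reward process, the 1/n step size, and the width formula are assumptions chosen for this example; they are a stand-in for an instance-dependent confidence region, not the construction from the paper.

```python
# Illustrative sketch only: a simplified, heuristic data-dependent stopping
# rule for TD(0) policy evaluation on a small Markov reward process.
# The MRP, step size, and confidence proxy below are assumptions made for
# this example; they are not the paper's algorithm or guarantees.
import numpy as np

rng = np.random.default_rng(0)

# A small 3-state Markov reward process (hypothetical instance).
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])
r = np.array([1.0, 0.0, -1.0])
gamma = 0.9
n_states = len(r)

V = np.zeros(n_states)
visits = np.zeros(n_states)
resid_sq = np.zeros(n_states)   # running sum of squared TD errors per state
eps = 0.05                      # target accuracy (assumed)
delta = 0.05                    # confidence level (assumed)

state = 0
for t in range(1, 200_001):
    next_state = rng.choice(n_states, p=P[state])
    td_error = r[state] + gamma * V[next_state] - V[state]

    visits[state] += 1
    resid_sq[state] += td_error ** 2
    V[state] += td_error / visits[state]   # 1/n step size (assumed)

    state = next_state

    # Plug-in confidence proxy: a per-state width that shrinks with the
    # empirical TD-error variance and the visit count.  Stopping when it
    # drops below eps yields early termination on "easy" instances
    # (low-variance, well-visited states) and runs longer on hard ones.
    if t % 1000 == 0 and visits.min() > 1:
        var_hat = resid_sq / visits
        width = np.sqrt(2 * var_hat * np.log(2 * n_states / delta) / visits)
        width /= (1 - gamma)
        if width.max() < eps:
            print(f"stopped at step {t}, estimated values: {V}")
            break
else:
    print(f"budget exhausted, estimated values: {V}")
```

Note that the stopping time here is driven entirely by observed data (visit counts and empirical TD-error variance) rather than a fixed worst-case iteration budget, which is the qualitative behavior the abstract describes.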

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2201.08536
Document Type:
Working Paper