
Empirical Q-Value Iteration

Authors:
Vivek S. Borkar
Dileep Kalathil
Rahul Jain
Source:
Stochastic Systems, 11:1–18
Publication Year:
2021
Publisher:
Institute for Operations Research and the Management Sciences (INFORMS)

Abstract

We propose a new, simple, and natural algorithm for learning the optimal Q-value function of a discounted-cost Markov decision process (MDP) when the transition kernels are unknown. Unlike classical learning algorithms for MDPs, such as Q-learning and actor-critic algorithms, this algorithm does not rely on stochastic approximation. We show that our algorithm, which we call the empirical Q-value iteration algorithm, converges to the optimal Q-value function. We also give a rate of convergence, in the form of a nonasymptotic sample complexity bound, and show that an asynchronous (or online) version of the algorithm also works. Preliminary experimental results suggest that our algorithm reaches a ballpark estimate faster than stochastic approximation-based algorithms.
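The paper gives the algorithm formally; as a rough Python sketch of the idea described above, the expectation over next states in the classical Q-value iteration update is replaced by an empirical average over draws from a generative model of the unknown kernel. Everything here, including the function names `empirical_q_iteration` and `sample_next_state`, the toy kernel `P`, and all parameter values, is an illustrative assumption rather than the paper's exact formulation.

```python
import numpy as np

def empirical_q_iteration(n_states, n_actions, cost, sample_next_state,
                          gamma=0.95, n_samples=20, n_iters=100, rng=None):
    # Empirical Q-value iteration sketch: replace the expectation over
    # next states in the Bellman operator with an empirical average of
    # i.i.d. draws from a generative model of the unknown kernel.
    if rng is None:
        rng = np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        Q_new = np.empty_like(Q)
        for s in range(n_states):
            for a in range(n_actions):
                # Average sampled one-step targets; min over actions
                # because this is a discounted-COST formulation.
                targets = [cost[s, a] + gamma * Q[sample_next_state(s, a, rng)].min()
                           for _ in range(n_samples)]
                Q_new[s, a] = np.mean(targets)
        Q = Q_new
    return Q

# Toy two-state, two-action example; the sampler stands in for the
# unknown transition kernel (hypothetical numbers, not from the paper).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])  # P[s][a] = next-state distribution
cost = np.array([[1.0, 2.0], [0.5, 0.0]])

def sample_next_state(s, a, rng):
    return rng.choice(2, p=P[s][a])

Q_hat = empirical_q_iteration(2, 2, cost, sample_next_state)
print(Q_hat)
```

The `min` over actions reflects the discounted-cost setting; a reward formulation would use `max`. The synchronous sweep over all state-action pairs mirrors the basic version; the asynchronous (online) variant mentioned in the abstract would instead update only the pairs actually visited along a trajectory.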

Details

ISSN:
1946-5238
Volume:
11
Database:
OpenAIRE
Journal:
Stochastic Systems
Accession Number:
edsair.doi.dedup.....84c9c9c0497bdffb9c7d91be7eab2f1d