Empirical Q-Value Iteration
- Source :
- Stochastic Systems. 11:1-18
- Publication Year :
- 2021
- Publisher :
- Institute for Operations Research and the Management Sciences (INFORMS), 2021.
-
Abstract
- We propose a new, simple, and natural algorithm for learning the optimal Q-value function of a discounted-cost Markov decision process (MDP) when the transition kernels are unknown. Unlike classical learning algorithms for MDPs, such as Q-learning and actor-critic algorithms, this algorithm does not depend on a stochastic approximation-based method. We show that our algorithm, which we call the empirical Q-value iteration algorithm, converges to the optimal Q-value function. We also give a rate of convergence, or a nonasymptotic sample complexity bound, and show that an asynchronous (or online) version of the algorithm will also work. Preliminary experimental results suggest a faster rate of convergence to a ballpark estimate for our algorithm compared with stochastic approximation-based algorithms.
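The core idea the abstract describes, replacing the expectation in the Bellman operator with a fresh sample average at every iteration instead of a stochastic approximation step, can be sketched as follows. This is a minimal illustration on a synthetic discounted-cost MDP, not the paper's implementation; the MDP, the sample budget, and all names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic discounted-cost MDP used purely as a black-box simulator;
# the algorithm only draws next-state samples and never reads P directly.
n_states, n_actions, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
c = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # per-stage cost

def empirical_bellman(Q, n_samples):
    """One empirical Q-value iteration step: the expectation over the
    (unknown) transition kernel is replaced by an average over fresh
    next-state samples for every state-action pair."""
    Q_next = np.empty_like(Q)
    for s in range(n_states):
        for a in range(n_actions):
            nxt = rng.choice(n_states, size=n_samples, p=P[s, a])  # simulator
            Q_next[s, a] = c[s, a] + gamma * Q[nxt].min(axis=1).mean()
    return Q_next

# Run empirical Q-value iteration with a fixed per-(s, a) sample budget.
Q = np.zeros((n_states, n_actions))
for _ in range(200):
    Q = empirical_bellman(Q, n_samples=2000)

# Exact Q-value iteration (uses the true kernel) for comparison only.
Q_star = np.zeros((n_states, n_actions))
for _ in range(500):
    Q_star = c + gamma * (P @ Q_star.min(axis=1))

print(np.abs(Q - Q_star).max())  # sup-norm error shrinks with the sample budget
```

Note there is no decaying step size anywhere: each iterate is a full empirical Bellman backup, which is what distinguishes this scheme from Q-learning-style stochastic approximation updates.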
- Subjects :
- FOS: Computer and information sciences
FOS: Mathematics
Statistics and Probability
Computer Science - Machine Learning (cs.LG)
Mathematics - Optimization and Control (math.OC)
Operations research
Q value
Management Science and Operations Research
Dynamic programming
Empirical research
Modeling and Simulation
Applied mathematics
Markov decision process
Statistics, Probability and Uncertainty
Mathematics
Details
- ISSN :
- 1946-5238
- Volume :
- 11
- Database :
- OpenAIRE
- Journal :
- Stochastic Systems
- Accession number :
- edsair.doi.dedup.....84c9c9c0497bdffb9c7d91be7eab2f1d