Q-Learning With Uniformly Bounded Variance.
- Source :
- IEEE Transactions on Automatic Control, Nov 2022, Vol. 67, Issue 11, p5948-5963. 16p.
- Publication Year :
- 2022
Abstract
- Sample complexity bounds are a common performance metric in the reinforcement learning literature. In the discounted-cost, infinite-horizon setting, all of the known bounds can be arbitrarily large as the discount factor approaches unity. These results seem to imply that a very large number of samples is required to achieve an epsilon-optimal policy. The objective of the present work is to introduce a new class of algorithms whose sample complexity is uniformly bounded over all discount factors. One may argue that this is impossible, due to a recent min–max lower bound. The explanation is that these prior bounds concern value function approximation, not policy approximation. We show that the asymptotic covariance of the tabular Q-learning algorithm with an optimized step-size sequence is a quadratic function of a factor that grows without bound as the discount factor approaches 1 (an essentially known result). The new relative Q-learning algorithm proposed here is shown to have asymptotic covariance that is uniformly bounded over all discount factors. [ABSTRACT FROM AUTHOR]
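To make the contrast in the abstract concrete, here is a minimal sketch (not the authors' exact algorithm) comparing a standard tabular Q-learning update with a "relative" variant that subtracts a scalar normalization term, a weighted average of the Q-table, at each step. The toy MDP, the uniform weighting `mu`, the choice `delta = gamma`, and the step-size exponent are all hypothetical choices for illustration:

```python
import numpy as np

# Illustrative sketch, assuming a deterministic toy MDP with minimized costs.
# The "relative" variant subtracts delta * <mu, Q> from the temporal-difference
# target, analogous in spirit to relative value iteration; this is an
# assumption for illustration, not the paper's precise recursion.

rng = np.random.default_rng(0)

n_states, n_actions = 3, 2
P = rng.integers(0, n_states, size=(n_states, n_actions))  # next-state table
cost = rng.random((n_states, n_actions))                   # per-step cost
gamma = 0.99                                               # discount factor

def q_learning(relative=False, n_iters=50_000, delta=None):
    Q = np.zeros((n_states, n_actions))
    mu = np.full(Q.size, 1.0 / Q.size)   # uniform weighting (hypothetical)
    if delta is None:
        delta = gamma                    # hypothetical choice of delta
    s = 0
    for n in range(1, n_iters + 1):
        a = rng.integers(n_actions)      # exploratory (uniform) policy
        s2 = P[s, a]
        target = cost[s, a] + gamma * Q[s2].min()
        if relative:
            target -= delta * (mu @ Q.ravel())  # scalar normalization term
        alpha = 1.0 / n**0.85            # diminishing step-size sequence
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
    return Q

Q_std = q_learning(relative=False)
Q_rel = q_learning(relative=True)
# The relative Q-table is shifted down by a roughly constant amount, so the
# greedy (argmin-cost) policy it induces is typically unchanged.
```

The point of the construction is that the subtraction removes the component of the estimate that blows up as the discount factor approaches 1, while leaving the greedy policy intact.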
- Subjects :
- *FUNCTIONS of bounded variation
- *STOCHASTIC control theory
Details
- Language :
- English
- ISSN :
- 0018-9286
- Volume :
- 67
- Issue :
- 11
- Database :
- Academic Search Index
- Journal :
- IEEE Transactions on Automatic Control
- Publication Type :
- Periodical
- Accession number :
- 160621643
- Full Text :
- https://doi.org/10.1109/TAC.2021.3133184