Differential Temporal Difference Learning.
- Source:
- IEEE Transactions on Automatic Control; Oct 2021, Vol. 66, Issue 10, p4652-4667, 16p
- Publication Year:
- 2021
Abstract
- Value functions derived from Markov decision processes arise as a central component of algorithms as well as performance metrics in many statistics and engineering applications of machine learning. Computation of the solution to the associated Bellman equations is challenging in most practical cases of interest. A popular class of approximation techniques, known as temporal difference (TD) learning algorithms, is an important subclass of general reinforcement learning methods. The algorithms introduced in this article are intended to resolve two well-known issues with TD-learning algorithms: their slow convergence due to very high central limit theorem variance, and the fact that, for the problem of computing the relative value function, consistent algorithms exist only in special cases. First, we show that the gradients of these value functions admit a representation that lends itself to algorithm design. Based on this result, a new class of differential TD-learning algorithms is introduced. For Markovian models on Euclidean space with smooth dynamics, the algorithms are shown to be consistent under general conditions. Numerical results show dramatic variance reduction in comparison to standard methods.
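- For context on the baseline the article improves upon, the sketch below shows a standard tabular TD(0) value-function update. It is not the differential TD-learning algorithm introduced in the paper; the toy chain MDP, function name, and parameter values are hypothetical illustrations only.

```python
import numpy as np

# Minimal sketch of standard tabular TD(0), assuming sampled transitions
# (s, r, s') from a small finite-state Markov chain. This illustrates the
# conventional value-function estimator whose high variance motivates the
# paper's differential TD-learning; it is NOT the paper's algorithm.

def td0_value_estimate(transitions, n_states, gamma=0.9, alpha=0.05):
    """Estimate the discounted value function V from (state, reward, next_state) samples."""
    V = np.zeros(n_states)
    for s, r, s_next in transitions:
        # TD(0) update: move V(s) toward the bootstrapped target r + gamma * V(s').
        td_error = r + gamma * V[s_next] - V[s]
        V[s] += alpha * td_error
    return V

# Toy usage: a deterministic 3-state cycle in which entering state 2 yields reward 1.
sample = []
s = 0
for _ in range(5000):
    s_next = (s + 1) % 3
    r = 1.0 if s_next == 2 else 0.0
    sample.append((s, r, s_next))
    s = s_next

print(td0_value_estimate(sample, n_states=3))
```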
Details
- Language:
- English
- ISSN:
- 0018-9286
- Volume:
- 66
- Issue:
- 10
- Database:
- Complementary Index
- Journal:
- IEEE Transactions on Automatic Control
- Publication Type:
- Periodical
- Accession number:
- 153732256
- Full Text:
- https://doi.org/10.1109/TAC.2020.3033417