
Convergence Results for Single-Step On-Policy Reinforcement-Learning Algorithms

Authors:
Satinder Singh
Tommi S. Jaakkola
Michael L. Littman
Csaba Szepesvári
Source:
Machine Learning. 38:287-308
Publication Year:
2000
Publisher:
Springer Science and Business Media LLC, 2000.

Abstract

An important application of reinforcement learning (RL) is to finite-state control problems, and one of the most difficult problems in learning for control is balancing the exploration/exploitation tradeoff. Existing theoretical results for RL give very little guidance on reasonable ways to perform exploration. In this paper, we examine the convergence of single-step on-policy RL algorithms for control. On-policy algorithms cannot separate exploration from learning and therefore must confront the exploration problem directly. We prove convergence results for several related on-policy algorithms with both decaying exploration and persistent exploration. We also provide examples of exploration strategies that can be followed during learning that result in convergence to both optimal values and optimal policies.
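The abstract's subject, single-step on-policy learning with decaying exploration, corresponds to algorithms such as SARSA(0) paired with an exploration schedule that vanishes over time. As a rough illustration of that setting only (not the paper's construction or its convergence conditions), the sketch below runs SARSA(0) with a 1/k epsilon-greedy decay on a small, hypothetical chain MDP; the environment, step size, decay rate, and episode cap are all assumptions made for the example.

```python
import random
from collections import defaultdict

def epsilon_greedy(Q, state, n_actions, epsilon):
    """With probability epsilon pick a uniformly random action, else a greedy one."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    values = [Q[(state, a)] for a in range(n_actions)]
    return values.index(max(values))

def sarsa_decaying_exploration(env_reset, env_step, n_actions,
                               episodes=5000, alpha=0.1, gamma=0.95,
                               max_steps=200):
    """Single-step on-policy learning (SARSA(0)) with decaying exploration.

    Exploration decays as epsilon_k = 1/k over episodes: every action keeps
    some chance of being tried, while the behaviour policy becomes greedy in
    the limit. This schedule, the constant step size, and the toy environment
    below are illustrative choices, not the paper's exact conditions.
    """
    Q = defaultdict(float)
    for k in range(1, episodes + 1):
        epsilon = 1.0 / k                          # decaying exploration
        s = env_reset()
        a = epsilon_greedy(Q, s, n_actions, epsilon)
        for _ in range(max_steps):                 # cap episode length
            s_next, r, done = env_step(s, a)
            a_next = epsilon_greedy(Q, s_next, n_actions, epsilon)
            # On-policy one-step target: bootstraps on the action actually taken next.
            target = r + (0.0 if done else gamma * Q[(s_next, a_next)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            if done:
                break
            s, a = s_next, a_next
    return Q

# Hypothetical toy 5-state chain: action 1 moves right (reward 1 on reaching
# the end), action 0 sends the agent back to the start with no reward.
def reset():
    return 0

def step(state, action):
    if action == 1:
        nxt = state + 1
        return (nxt, 1.0, True) if nxt == 4 else (nxt, 0.0, False)
    return 0, 0.0, False

if __name__ == "__main__":
    Q = sarsa_decaying_exploration(reset, step, n_actions=2)
    greedy = [max(range(2), key=lambda a: Q[(s, a)]) for s in range(4)]
    print("Greedy action per state:", greedy)      # expected: [1, 1, 1, 1]
```

The 1/k schedule is one simple way to keep exploring every action while becoming greedy in the limit; the paper analyses decaying and persistent exploration in more general terms.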

Details

ISSN:
0885-6125
Volume:
38
Database:
OpenAIRE
Journal:
Machine Learning
Accession number:
edsair.doi...........710ba9c5f7f14be8deb2adc2f5b77981
Full Text:
https://doi.org/10.1023/a:1007678930559