1. Stable and Efficient Policy Evaluation
- Authors
Lyu, Daoming; Liu, Bo; Geist, Matthieu; Dong, Wen; Biaz, Saad; and Wang, Qi
- Subjects
Computer Science - Machine Learning; Statistics - Machine Learning
- Abstract
Policy evaluation algorithms are essential to reinforcement learning due to their ability to predict the performance of a policy. However, two long-standing issues in this prediction problem need to be tackled: off-policy stability and on-policy efficiency. The conventional temporal difference (TD) algorithm is known to perform very well in the on-policy setting, yet it is not off-policy stable. On the other hand, the gradient TD and emphatic TD algorithms are off-policy stable, but not on-policy efficient. This paper introduces novel algorithms that are both off-policy stable and on-policy efficient by using the oblique projection method. Empirical results on various domains validate the effectiveness of the proposed approach.
- Comment
IEEE Transactions on Neural Networks and Learning Systems (IEEE-TNNLS). arXiv admin note: text overlap with arXiv:1704.05147
- Published
2020
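As background for the prediction problem the abstract describes, the sketch below shows conventional tabular TD(0) policy evaluation, the on-policy baseline the paper contrasts with gradient TD and emphatic TD. This is a minimal illustrative example on a hypothetical 5-state random-walk chain, not code from the paper; the environment and all parameter choices are assumptions.

```python
import numpy as np

def td0_random_walk(n_states=5, alpha=0.1, gamma=1.0, episodes=500, seed=0):
    """Tabular TD(0) on a toy random-walk chain (illustrative, not from the paper).

    States 1..n_states are non-terminal; 0 and n_states+1 are terminal.
    A reward of 1 is given only on exiting to the right.
    """
    rng = np.random.default_rng(seed)
    V = np.zeros(n_states + 2)  # terminal states keep value 0
    for _ in range(episodes):
        s = (n_states + 1) // 2  # every episode starts in the middle state
        while 0 < s < n_states + 1:
            # Behavior = target policy (on-policy): step left/right uniformly
            s_next = s + (1 if rng.random() < 0.5 else -1)
            r = 1.0 if s_next == n_states + 1 else 0.0
            # TD(0) update: move V(s) toward the bootstrapped target r + gamma*V(s')
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V[1:-1]  # estimated values of the non-terminal states
```

The true values for this chain are 1/6, 2/6, ..., 5/6, so after a few hundred episodes the estimates should increase from left to right. The off-policy instability the paper addresses arises when the sampling distribution differs from the target policy's, which this on-policy sketch does not exhibit.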