
Policy Error Bounds for Model-Based Reinforcement Learning with Factored Linear Models

Authors:
Pires, Bernardo Ávila
Szepesvári, Csaba
Source:
JMLR W&CP 49: COLT 2016 Proceedings (2016) 1-31
Publication Year:
2016

Abstract

In this paper we study a model-based approach to computing approximately optimal policies in Markovian Decision Processes. In particular, we derive novel bounds on the loss of using a policy derived from a factored linear model, a class of models that generalizes numerous previous models among those that come with strong computational guarantees. For the first time in the literature, we derive performance bounds for model-based techniques where the model inaccuracy is measured in weighted norms. Moreover, our bounds show a decreased sensitivity to the discount factor and, unlike similar bounds derived for other approaches, they are insensitive to measure mismatch. As in previous works, our proofs are based on contraction arguments, but with two main differences: we use carefully constructed norms built on Banach lattices, and the contraction property is assumed only for operators acting on "compressed" spaces. This weakens previous assumptions while strengthening previous results.

Comment: 30 pages. Corrected typos. Appears in JMLR Workshop and Conference Proceedings 49: Proceedings of the 29th Annual Conference on Learning Theory (COLT 2016).
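To make the abstract's setup concrete: in a factored linear model, each action's transition kernel is approximated as the product of a feature map and a fixed set of measures, P_a ≈ F_a M, so the Bellman backup can be carried out entirely in the low-dimensional "compressed" space of weights u = M v. The following is a minimal NumPy sketch for a finite MDP; the function name, array shapes, and row-stochastic normalization are illustrative assumptions of this sketch, not the paper's construction.

```python
import numpy as np

def compressed_value_iteration(F, M, R, gamma, n_iters=1000):
    """Approximate value iteration run in the compressed d-dim space.

    Each action's transition kernel is modeled in factored linear
    form, P_a ~ F[a] @ M, so the Bellman backup only ever touches
    the d-dimensional weights u = M @ v.

    F : (A, S, d) per-action state features
    M : (d, S)    factor "measures"
    R : (A, S)    immediate rewards
    Returns the greedy policy (S,) and the compressed weights u (d,).
    """
    A, S, d = F.shape
    u = np.zeros(d)
    for _ in range(n_iters):
        q = R + gamma * (F @ u)   # model-based action values, shape (A, S)
        v = q.max(axis=0)         # greedy backup at every state
        u = M @ v                 # compress the value back to d dimensions
    q = R + gamma * (F @ u)
    return q.argmax(axis=0), u

# Toy instance: row-stochastic F and M make F[a] @ M a proper transition
# matrix (purely for illustration).
rng = np.random.default_rng(0)
A, S, d = 2, 6, 3
F = rng.random((A, S, d)); F /= F.sum(axis=2, keepdims=True)
M = rng.random((d, S));    M /= M.sum(axis=1, keepdims=True)
R = rng.random((A, S))
policy, u = compressed_value_iteration(F, M, R, gamma=0.9)
```

In this toy instance the row-stochastic normalization makes the compressed operator a γ-contraction in the sup norm, which mirrors the abstract's point that contraction need only be assumed for the operator acting on the compressed space.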

Details

Database:
arXiv
Journal:
JMLR W&CP 49: COLT 2016 Proceedings (2016) 1-31
Publication Type:
Report
Accession Number:
edsarx.1602.06346
Document Type:
Working Paper