
Ordinary Differential Equation Methods for Markov Decision Processes and Application to Kullback-Leibler Control Cost

Authors:
Bušić, Ana
Meyn, Sean
Publication Year:
2016

Abstract

A new approach to the computation of optimal policies for MDP (Markov decision process) models is introduced. The main idea is to solve not one, but an entire family of MDPs, parameterized by a weighting factor $\zeta$ that appears in the one-step reward function. For an MDP with $d$ states, the family of value functions $\{ h^*_\zeta : \zeta\in\Re\}$ is the solution to an ODE, $$ \frac{d}{d\zeta} h^*_\zeta = {\cal V}(h^*_\zeta) $$ where the vector field ${\cal V}\colon\Re^d\to\Re^d$ has a simple form, based on a matrix inverse. This general methodology is applied to a family of average-cost optimal control models in which the one-step reward function is defined by Kullback-Leibler divergence. The motivation for this reward function in prior work is computational: the solution to the MDP can be expressed in terms of the Perron-Frobenius eigenvector of an associated positive matrix. The drawback of this approach is that no hard constraints on the control are permitted. It is shown here that the framework can be extended to model randomness from nature that cannot be modified by the controller. Perron-Frobenius theory is no longer applicable, and the resulting dynamic programming equations appear as complex as those of a completely unstructured MDP model. Despite this apparent complexity, it is shown that this class of MDPs admits a solution via the new ODE technique. The approach is new and practical even for the simpler problem in which randomness from nature is absent.

Comment: Submitted to SICON
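The abstract does not spell out the vector field ${\cal V}$, so the following is only a minimal sketch of the kind of ODE method it describes. It assumes a finite tabular MDP with one-step reward of the form $r_\zeta(x,u) = r_0(x,u) + \zeta\, s(x,u)$, and takes ${\cal V}(h)$ from the standard average-reward sensitivity argument: differentiate the Bellman equation in $\zeta$ under the greedy policy, then solve the resulting Poisson equation via the matrix inverse $(I - P_\phi + \mathbf{1}\pi^T)^{-1}$. The function names, the transition tensor layout `P[u, x, y]`, and the forward-Euler integrator are all illustrative choices, not the paper's exact construction.

```python
import numpy as np

def vector_field(h, P, r0, s, zeta):
    """Evaluate a V(h) of the kind described in the abstract.

    Shapes (assumed): P is (m, d, d) with P[u, x, y] = Pr[y | x, u],
    r0 and s are (d, m), h is (d,).  The paper's exact V may differ.
    """
    d = r0.shape[0]
    # Greedy policy for the current value-function iterate h:
    # Q[x, u] = r0[x, u] + zeta * s[x, u] + sum_y P[u, x, y] h[y]
    Q = r0 + zeta * s + np.einsum('uxy,y->xu', P, h)
    phi = np.argmax(Q, axis=1)
    P_phi = P[phi, np.arange(d), :]      # (d, d) transition matrix under phi
    s_phi = s[np.arange(d), phi]         # (d,)  d/dzeta of the one-step reward

    # Stationary distribution pi of P_phi (assumes an irreducible chain,
    # so the Perron eigenvector has entries of a single sign).
    w, V = np.linalg.eig(P_phi.T)
    pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()

    deta = pi @ s_phi                    # derivative of the optimal average reward
    # Poisson-equation solve: I - P_phi + 1 pi^T is invertible; the rank-one
    # term fixes the additive constant in the relative value function.
    A = np.eye(d) - P_phi + np.outer(np.ones(d), pi)
    return np.linalg.solve(A, s_phi - deta * np.ones(d))

def solve_family(P, r0, s, zeta_grid, h0):
    """Forward-Euler integration of dh/dzeta = V(h) along zeta_grid."""
    h, path = h0.copy(), [h0.copy()]
    for z0, z1 in zip(zeta_grid[:-1], zeta_grid[1:]):
        h = h + (z1 - z0) * vector_field(h, P, r0, s, z0)
        path.append(h.copy())
    return path
```

Starting the integration from a value of $\zeta$ where $h^*_\zeta$ is known (for instance $\zeta = 0$, if the unweighted problem is easy to solve) then produces the whole family $\{h^*_\zeta\}$ in a single sweep, which is the practical appeal the abstract claims for the method.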

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.1605.04591
Document Type:
Working Paper
Full Text:
https://doi.org/10.1137/16M1100204