
Robustness to Incorrect Models and Data-Driven Learning in Average-Cost Optimal Stochastic Control

Authors :
Kara, Ali Devran
Raginsky, Maxim
Yuksel, Serdar
Publication Year :
2020

Abstract

We study continuity and robustness properties of infinite-horizon average expected cost problems with respect to (controlled) transition kernels, and applications of these results to the problem of robustness of control policies designed for approximate models applied to actual systems. We show that sufficient conditions presented in the literature for discounted-cost problems are in general not sufficient to ensure robustness for average-cost problems. However, we show that the average optimal cost is continuous under convergence of controlled transition kernel models, where convergence of models entails (i) continuous weak convergence in state and actions, and (ii) continuous setwise convergence in the actions for every fixed state variable, in addition to either uniform ergodicity or some regularity conditions. We establish that the mismatch error due to the application of a control policy designed for an incorrectly estimated model to the true model decreases to zero as the incorrect model approaches the true model under the stated convergence criteria. Our findings significantly relax the conditions in related studies in the literature, which have primarily considered the more restrictive total variation convergence criterion. Applications to robustness to models estimated through empirical data (where the almost sure weak convergence criterion typically holds, but stronger criteria do not) are studied, and conditions for asymptotic robustness to data-driven learning are established.

Comment: Presented at Conference on Decision and Control 2019. arXiv admin note: text overlap with arXiv:1803.06046
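The mismatch-error statement above can be illustrated numerically on a toy finite model: a policy computed for a perturbed transition kernel, when applied to the true kernel, incurs an excess average cost that vanishes as the perturbation shrinks. The sketch below is a hedged illustration only; the two-state kernel `K`, stage cost `c`, and the uniform-mixing perturbation are hypothetical choices, not taken from the paper.

```python
import itertools
import numpy as np

def stationary_dist(P):
    """Stationary distribution of an ergodic transition matrix P (rows sum to 1)."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def avg_cost(K, c, policy):
    """Long-run average cost of a stationary deterministic policy.

    K[u, x, y] = P(x_{t+1} = y | x_t = x, u_t = u); c[x, u] = stage cost.
    """
    n = K.shape[1]
    P = np.array([K[policy[x], x, :] for x in range(n)])
    pi = stationary_dist(P)
    return float(pi @ np.array([c[x, policy[x]] for x in range(n)]))

def best_policy(K, c):
    """Brute-force search over all deterministic stationary policies."""
    n, m = c.shape
    return min(itertools.product(range(m), repeat=n),
               key=lambda p: avg_cost(K, c, p))

# Toy "true" model (illustrative numbers): in state 0, action 0 keeps the
# chain in the cheap state, while action 1 has a lower stage cost but
# worse dynamics.
K = np.array([[[0.9, 0.1], [0.8, 0.2]],
              [[0.5, 0.5], [0.8, 0.2]]])
c = np.array([[1.0, 0.1], [5.0, 5.0]])

opt = avg_cost(K, c, best_policy(K, c))
mismatches = {}
for eps in (0.5, 0.1, 0.01):
    K_eps = (1 - eps) * K + eps / K.shape[1]  # mix with the uniform kernel
    # Policy designed for the perturbed model, evaluated on the true model:
    mismatches[eps] = avg_cost(K, c, best_policy(K_eps, c)) - opt
    print(f"eps={eps}: mismatch={mismatches[eps]:.4f}")
```

In this toy instance a large perturbation misleads the design into the cheap-looking but dynamically poor action, while small perturbations recover the true optimal policy, so the mismatch error drops to zero. This mirrors the abstract's claim only qualitatively; the paper's results concern general state and action spaces under the stated convergence notions.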

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2003.05769
Document Type :
Working Paper