
Offline Policy Evaluation and Optimization under Confounding

Authors:
Kausik, Chinmaya
Lu, Yangyi
Tan, Kevin
Makar, Maggie
Wang, Yixin
Tewari, Ambuj
Publication Year: 2022

Abstract

Evaluating and optimizing policies in the presence of unobserved confounders is a problem of growing interest in offline reinforcement learning. Applying conventional offline RL methods in the presence of confounding can not only produce poor policies, but also have disastrous effects in critical applications such as healthcare and education. We map out the landscape of offline policy evaluation for confounded MDPs, distinguishing assumptions on confounding based on whether they are memoryless and on their effect on the data-collection policies. We characterize settings where consistent value estimates are provably not achievable, and provide algorithms that instead estimate lower bounds on the value, with guarantees. When consistent estimates are achievable, we provide algorithms for value estimation with sample-complexity guarantees. We also present new algorithms for offline policy improvement and prove local convergence guarantees. Finally, we experimentally evaluate our algorithms on both a gridworld environment and a simulated healthcare setting for managing sepsis patients. In the gridworld, our model-based method provides tighter lower bounds than existing methods, while in the sepsis simulator, our methods significantly outperform confounder-oblivious benchmarks.

Comment: Overhauled terminology and presentation, strengthened presentation of results
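
The paper's own algorithms are not spelled out in this record, but the core idea of reporting a lower bound when consistent estimation is impossible can be illustrated with a minimal, hypothetical sketch: a single-step (bandit) importance-sampling estimator made pessimistic under a sensitivity model that bounds how far the true propensities can deviate from the logged ones by a factor gamma. The function name, toy data, and gamma parameterization below are illustrative assumptions, not the authors' method.

    import numpy as np

    def pessimistic_is_value(rewards, behavior_probs, target_probs, gamma=2.0):
        # Nominal importance weights pi_target(a|s) / pi_behavior(a|s).
        w = target_probs / behavior_probs
        # Hypothetical sensitivity model: the true (confounded) weight may
        # lie anywhere in [w / gamma, w * gamma]. Taking the per-sample
        # worst case yields a valid lower bound on the importance-sampling
        # estimate of the target policy's value.
        w_worst = np.where(rewards >= 0, w / gamma, w * gamma)
        return float(np.mean(w_worst * rewards))

    # Toy logged data: a uniform behavior policy over two actions.
    rng = np.random.default_rng(0)
    n = 10_000
    actions = rng.integers(0, 2, size=n)
    rewards = rng.normal(loc=actions, scale=1.0)     # action 1 pays more on average
    behavior_probs = np.full(n, 0.5)
    target_probs = np.where(actions == 1, 0.9, 0.1)  # target policy favors action 1

    print(pessimistic_is_value(rewards, behavior_probs, target_probs, gamma=1.0))  # nominal estimate
    print(pessimistic_is_value(rewards, behavior_probs, target_probs, gamma=2.0))  # robust lower bound

With gamma = 1 the sketch reduces to ordinary importance sampling; larger gamma admits more confounding and pushes the reported bound down, mirroring the trade-off between robustness and tightness that the abstract describes.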

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2211.16583
Document Type: Working Paper