1. Leveraging Factored Action Spaces for Off-Policy Evaluation
- Authors
Rebello, Aaman; Tang, Shengpu; Wiens, Jenna; Parbhoo, Sonali
- Subjects
FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); J.3; Computer Science - Artificial Intelligence; Statistics - Machine Learning; I.2.6; 62D20 (Primary), 62M05, 60J10, 62D05, 62P10 (Secondary); I.2.8; G.3; Machine Learning (stat.ML); Machine Learning (cs.LG)
- Abstract
Off-policy evaluation (OPE) aims to estimate the benefit of following a counterfactual sequence of actions, given data collected from executed sequences. However, existing OPE estimators often exhibit high bias and high variance in problems involving large, combinatorial action spaces. We investigate how to mitigate this issue using factored action spaces, i.e., expressing each action as a combination of independent sub-actions from smaller action spaces. This approach facilitates a finer-grained analysis of how actions differ in their effects. In this work, we propose a new family of "decomposed" importance sampling (IS) estimators based on factored action spaces. Given certain assumptions on the underlying problem structure, we prove that the decomposed IS estimators have lower variance than their original non-decomposed versions, while preserving the property of zero bias. Through simulations, we empirically verify our theoretical results, probing the validity of various assumptions. Provided with a technique that can derive the action space factorisation for a given problem, our work shows that OPE can be improved "for free" by utilising this inherent problem structure.
- Comment
Main paper: 8 pages, 7 figures. Appendix: 30 pages, 17 figures. Accepted at the ICML 2023 Workshop on Counterfactuals in Minds and Machines, Honolulu, Hawaii, USA. Camera-ready version.
- Published
2023
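
The abstract describes "decomposed" IS estimators built on a factored action space, but does not spell out the construction. Below is a minimal sketch of the general idea in a toy one-step bandit, assuming two independent binary sub-actions, behaviour and evaluation policies that factor across sub-actions, and a reward that decomposes additively with each sub-reward depending only on its own sub-action. All probabilities and reward values are hypothetical, and the estimator shown illustrates the decomposition idea rather than reproducing the paper's exact estimators or assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step bandit with a factored binary action a = (a1, a2).
# Behaviour and evaluation policies factor across sub-actions
# (probabilities are hypothetical, chosen only for illustration).
pb = np.array([0.5, 0.7])   # P_b(a_d = 1)
pe = np.array([0.8, 0.3])   # P_e(a_d = 1)

# Additive reward r(a) = r1(a1) + r2(a2), where each sub-reward
# depends only on its own sub-action (an assumed structure).
r1 = np.array([0.0, 1.0])   # r1 indexed by a1
r2 = np.array([0.5, 2.0])   # r2 indexed by a2

n = 100_000
a = (rng.random((n, 2)) < pb).astype(int)      # sub-actions sampled from the behaviour policy
rewards = r1[a[:, 0]] + r2[a[:, 1]]            # observed rewards

# Per-sub-action importance ratios rho_d = pi_e(a_d) / pi_b(a_d).
rho = np.where(a == 1, pe / pb, (1 - pe) / (1 - pb))

# Standard IS: weight the full reward by the joint ratio (product over sub-actions).
standard_is = rho.prod(axis=1) * rewards

# Decomposed IS: weight each sub-reward only by its own sub-action's ratio.
decomposed_is = rho[:, 0] * r1[a[:, 0]] + rho[:, 1] * r2[a[:, 1]]

# Closed-form value of the evaluation policy in this toy problem.
true_value = pe[0] * 1.0 + (0.5 + pe[1] * 1.5)

print(f"true value under pi_e : {true_value:.3f}")
print(f"standard IS  : mean={standard_is.mean():.3f}, var={standard_is.var():.3f}")
print(f"decomposed IS: mean={decomposed_is.mean():.3f}, var={decomposed_is.var():.3f}")
```

Under these toy assumptions, both estimates converge to the same value (both are unbiased), but the decomposed estimate typically exhibits a markedly smaller empirical variance, consistent with the variance-reduction result stated in the abstract.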