
Using leave-one-out cross-validation (LOO) in a multilevel regression and poststratification (MRP) workflow: A cautionary tale

Authors:
Kuh, Swen
Kennedy, Lauren
Chen, Qixuan
Gelman, Andrew
Publication Year:
2022

Abstract

In recent decades, multilevel regression and poststratification (MRP) has surged in popularity for population inference. However, the validity of the estimates can depend on details of the model, and there is currently little research on validation. We explore how leave-one-out cross-validation (LOO) can be used to compare Bayesian models for MRP. We investigate two approximate calculations of LOO: Pareto smoothed importance sampling (PSIS-LOO) and a survey-weighted alternative (WTD-PSIS-LOO). Using two simulation designs, we examine how accurately these two criteria recover the correct ordering of model goodness at predicting population-level and small-area-level estimands. Focusing first on variable selection, we find that neither PSIS-LOO nor WTD-PSIS-LOO correctly recovers the ordering of the models for an MRP population estimand (although both criteria correctly identify the best and worst models). When considering small-area estimation, the best model differs across small areas, highlighting the complexity of MRP validation. When considering different priors, the recovered model ordering is slightly more accurate at smaller area levels. These findings suggest that, while not terrible, PSIS-LOO-based ranking techniques may not be suitable for evaluating MRP as a method. We suggest this is due to the aggregation stage of MRP, where individual-level prediction errors average out. In practice, PSIS-LOO-based model validation tools therefore need to be used with caution and might not convey the full story when validating MRP as a method.

Comment: 21 pages + 7 pages of appendix, 13 figures

Subjects:

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2209.01773
Document Type:
Working Paper