A Double Penalty Model for Interpretability
- Publication Year :
- 2019
Abstract
- Modern statistical learning techniques have often emphasized prediction performance over interpretability, giving rise to "black box" models that may be difficult to understand and to generalize to other settings. We conceptually divide a prediction model into interpretable and non-interpretable portions as a means of producing models that are highly interpretable with little loss in performance. Implementation is achieved by considering separability of the interpretable and non-interpretable portions, along with a doubly penalized procedure for model fitting. We specify conditions under which convergence of model estimation via cyclic coordinate ascent can be achieved and under which consistency of model estimation holds. We apply the methods to datasets for microbiome host trait prediction and a diabetes trait, and discuss practical tradeoff diagnostics for selecting models with high interpretability.
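- The following is a minimal sketch of the kind of doubly penalized, cyclically updated fit the abstract describes, assuming a squared-error loss, an L1 penalty on an interpretable linear part, and an L2 penalty on a non-interpretable part represented here by a random-feature expansion. The penalty choices, the `fit_double_penalty` function, and all parameter names are illustrative assumptions, not the paper's actual specification.

```python
# Sketch: double-penalty fit with cyclic (block) coordinate updates.
# Assumptions (not from the paper): squared-error loss, lasso penalty on the
# interpretable linear coefficients, ridge penalty on coefficients of a
# random-feature expansion standing in for the non-interpretable portion.
import numpy as np


def soft_threshold(z, t):
    """Soft-thresholding operator used in coordinate-wise lasso updates."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)


def fit_double_penalty(X_int, X_raw, y, lam1=0.1, lam2=1.0,
                       n_features=50, n_outer=20, n_inner=10, seed=0):
    """Alternate between the two blocks: coordinate-wise lasso updates for the
    interpretable coefficients beta, closed-form ridge for the non-interpretable
    coefficients gamma."""
    rng = np.random.default_rng(seed)
    n, p = X_int.shape
    # Non-interpretable portion: a fixed nonlinear random-feature map of the raw inputs.
    W = rng.normal(size=(X_raw.shape[1], n_features))
    Z = np.tanh(X_raw @ W)

    beta = np.zeros(p)
    gamma = np.zeros(n_features)
    col_norm2 = (X_int ** 2).sum(axis=0)

    for _ in range(n_outer):
        # (1) Update interpretable coefficients with the non-interpretable part fixed.
        r = y - Z @ gamma
        for _ in range(n_inner):
            for j in range(p):
                # Partial residual excluding coordinate j.
                r_j = r - X_int @ beta + X_int[:, j] * beta[j]
                beta[j] = soft_threshold(X_int[:, j] @ r_j, n * lam1) / col_norm2[j]
        # (2) Update non-interpretable coefficients with beta fixed (ridge solve).
        r = y - X_int @ beta
        gamma = np.linalg.solve(Z.T @ Z + lam2 * np.eye(n_features), Z.T @ r)
    return beta, gamma, W


# Illustrative usage with synthetic data:
# X_int = np.random.randn(200, 5); X_raw = np.random.randn(200, 3)
# y = X_int[:, 0] + np.sin(X_raw[:, 0]) + 0.1 * np.random.randn(200)
# beta, gamma, W = fit_double_penalty(X_int, X_raw, y)
```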
- Subjects :
- Statistics - Methodology
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.1909.06263
- Document Type :
- Working Paper