51. Generative Quantile Regression with Variability Penalty
- Authors
Shijie Wang, Minsuk Shin, and Ray Bai
- Subjects
Statistics - Methodology
- Abstract
Quantile regression and conditional density estimation can reveal structure that is missed by mean regression, such as multimodality and skewness. In this paper, we introduce a deep learning generative model for joint quantile estimation called Penalized Generative Quantile Regression (PGQR). Our approach simultaneously generates samples from many random quantile levels, allowing us to infer the conditional distribution of a response variable given a set of covariates. Our method employs a novel variability penalty to avoid the problem of vanishing variability, or memorization, in deep generative models. Further, we introduce a new family of partial monotonic neural networks (PMNN) to circumvent the problem of crossing quantile curves. A major benefit of PGQR is that it can be fit in a single optimization, thus bypassing the need to repeatedly train the model at multiple quantile levels or use computationally expensive cross-validation to tune the penalty parameter. We illustrate the efficacy of PGQR through extensive simulation studies and analysis of real datasets. Code to implement our method is available at https://github.com/shijiew97/PGQR.
- Comment
41 pages, 17 figures, 4 tables. The new version includes additional simulation studies, comparisons to competing methods, illustrations, real-data applications, and discussion of the vanishing variability phenomenon and overparameterization in deep learning. The figures are higher-resolution, and the presentation and writing have been improved.
- Published
2023
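The abstract's central idea, fitting all quantile levels in a single optimization by drawing random levels during training, can be illustrated with a minimal numpy sketch. This is not the authors' PGQR (no generative network, no variability penalty, no PMNN architecture); it is a toy linear quantile surface trained with the standard pinball (check) loss at randomly sampled quantile levels, shown only to make the joint-quantile-estimation idea concrete.

```python
import numpy as np

# Toy joint quantile regression: one model q(x, tau) trained with the
# pinball loss at quantile levels tau drawn fresh at every step, so a
# single optimization covers the whole quantile range. (Generic sketch,
# not the PGQR method itself.)

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1.0, 1.0, size=n)
y = 2.0 * x + rng.normal(0.0, 0.5, size=n)   # true slope 2, Gaussian noise

# Quantile surface q(x, tau) = w0 + w1*x + w2*tau, linear in tau for
# simplicity; a learned w2 > 0 makes the fitted quantiles non-crossing.
w = np.zeros(3)
lr = 0.05
for step in range(5000):
    idx = rng.integers(0, n, size=64)
    tau = rng.uniform(0.05, 0.95, size=64)   # random quantile levels
    xb, yb = x[idx], y[idx]
    q = w[0] + w[1] * xb + w[2] * tau
    u = yb - q
    # Pinball loss rho_tau(u) = u * (tau - 1{u<0}); its subgradient
    # w.r.t. q is -(tau - 1{u<0}), so gradient descent on w adds
    # lr * s * dq/dw with s = tau - 1{u<0}.
    s = tau - (u < 0).astype(float)
    w[0] += lr * s.mean()
    w[1] += lr * (s * xb).mean()
    w[2] += lr * (s * tau).mean()

print(w)  # w[1] should land near the true slope 2; w[2] > 0 (monotone in tau)
```

After training, a single parameter vector yields an estimated quantile curve for any level tau, which is the convenience the abstract attributes to fitting one model rather than retraining per quantile level.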