Scalable Bayesian regression in high dimensions with multiple data sources
- Publication Year :
- 2017
Abstract
- Applications of high-dimensional regression often involve multiple sources or types of covariates. We propose methodology for this setting, emphasizing the "wide data" regime with large total dimensionality p and sample size n << p. We focus on a flexible ridge-type prior with shrinkage levels that are specific to each data type or source and that are set automatically by empirical Bayes. All estimation, including setting of shrinkage levels, is formulated mainly in terms of inner product matrices of size n × n. This renders computation efficient in the wide data regime and allows scaling to problems with millions of features. Furthermore, the proposed procedures are free of user-set tuning parameters. We show how sparsity can be achieved by post-processing of the Bayesian output via constrained minimization of a certain Kullback-Leibler divergence. This yields sparse solutions with adaptive, source-specific shrinkage, including a closed-form variant that scales to very large p. We present empirical results from a simulation study based on real data and a case study in Alzheimer's disease involving millions of features and multiple data sources.
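- The abstract's central computational point is that all estimation can be expressed through n × n inner-product matrices rather than p × p ones. The fragment below is a minimal illustrative sketch of that idea, not the authors' implementation: it computes a ridge estimate with source-specific penalties lambda_s through a single n × n solve via the Woodbury identity. The function name, the hand-fixed lambdas (the paper sets these automatically by empirical Bayes), and the toy data are all hypothetical.

```python
import numpy as np

def multi_source_ridge(X_blocks, y, lambdas):
    """Ridge estimate with one shrinkage level per data source (sketch).

    The usual p x p solve for
        beta_hat = (X'X + D)^{-1} X'y,   D = block-diag(lambda_s * I_{p_s}),
    is rewritten with the Woodbury identity so that only the n x n matrix
        K = sum_s (1/lambda_s) * X_s X_s'
    (a weighted sum of per-source inner-product matrices) needs factorizing.
    """
    n = y.shape[0]
    # n x n inner-product (Gram) matrix of each source, weighted by 1/lambda_s.
    K = sum(Xs @ Xs.T / lam for Xs, lam in zip(X_blocks, lambdas))
    alpha = np.linalg.solve(K + np.eye(n), y)          # only an n x n solve
    # Map the n-dimensional dual solution back to p coefficients, per source.
    beta_blocks = [Xs.T @ alpha / lam for Xs, lam in zip(X_blocks, lambdas)]
    return np.concatenate(beta_blocks)

# Toy usage: two hypothetical sources of very different dimensionality, n << p.
rng = np.random.default_rng(0)
n = 50
X1 = rng.standard_normal((n, 200))      # e.g. a small clinical block
X2 = rng.standard_normal((n, 5000))     # e.g. a high-dimensional omics block
y = X1[:, 0] - 2.0 * X2[:, 1] + 0.1 * rng.standard_normal(n)
beta_hat = multi_source_ridge([X1, X2], y, lambdas=[10.0, 100.0])
```

- The cost of such a formulation is dominated by forming the per-source Gram matrices and one O(n^3) solve, which is what makes scaling to millions of features plausible when n << p, as the abstract claims for the proposed method.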
- Subjects :
- Statistics - Methodology
- Statistics - Computation
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.1710.00596
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1080/10618600.2019.1624294