Scalable Bayesian Regression in High Dimensions With Multiple Data Sources.
- Source :
- Journal of Computational & Graphical Statistics. Jan-Mar 2020, Vol. 29, Issue 1, p28-39. 12p.
- Publication Year :
- 2020
Abstract
- Applications of high-dimensional regression often involve multiple sources or types of covariates. We propose methodology for this setting, emphasizing the "wide data" regime with large total dimensionality p and sample size n ≪ p. We focus on a flexible ridge-type prior with shrinkage levels that are specific to each data type or source and that are set automatically by empirical Bayes. All estimation, including setting of shrinkage levels, is formulated mainly in terms of inner product matrices of size n × n. This renders computation efficient in the wide data regime and allows scaling to problems with millions of features. Furthermore, the proposed procedures are free of user-set tuning parameters. We show how sparsity can be achieved by post-processing of the Bayesian output via constrained minimization of a certain Kullback–Leibler divergence. This yields sparse solutions with adaptive, source-specific shrinkage, including a closed-form variant that scales to very large p. We present empirical results from a simulation study based on real data and a case study in Alzheimer's disease involving millions of features and multiple data sources. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
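- As a rough, hypothetical sketch of the n × n inner-product idea mentioned in the abstract (not the authors' empirical Bayes procedure; all names and shrinkage values below are illustrative): for a group-wise ridge penalty Σ_g λ_g‖β_g‖², the Woodbury identity gives β̂ = Λ⁻¹Xᵀ(XΛ⁻¹Xᵀ + I_n)⁻¹y, where XΛ⁻¹Xᵀ = Σ_g λ_g⁻¹ X_g X_gᵀ is a weighted sum of source-wise n × n Gram matrices, so no p × p object is ever formed.

```python
import numpy as np

def group_ridge_posterior_mean(X_blocks, y, lambdas):
    """Ridge fit with source-specific shrinkage via n x n Gram matrices.

    X_blocks : list of (n, p_g) arrays, one per data source
    y        : (n,) response vector
    lambdas  : list of positive shrinkage levels, one per source

    For the penalty sum_g lambda_g * ||beta_g||^2, the Woodbury identity gives
        beta_hat = Lambda^{-1} X^T (X Lambda^{-1} X^T + I_n)^{-1} y,
    and X Lambda^{-1} X^T = sum_g (1/lambda_g) X_g X_g^T is only n x n.
    """
    n = y.shape[0]
    # Weighted sum of per-source Gram matrices (each n x n)
    K = sum(Xg @ Xg.T / lam for Xg, lam in zip(X_blocks, lambdas))
    # Single n x n linear solve, regardless of how large p is
    alpha = np.linalg.solve(K + np.eye(n), y)
    # Map the n-dimensional dual solution back to per-source coefficients
    return [Xg.T @ alpha / lam for Xg, lam in zip(X_blocks, lambdas)]

# Toy usage with two sources and p >> n (illustrative shrinkage levels)
rng = np.random.default_rng(0)
n = 50
X1, X2 = rng.normal(size=(n, 2000)), rng.normal(size=(n, 500))
y = X1[:, 0] - 2 * X2[:, 1] + rng.normal(size=n)
beta1, beta2 = group_ridge_posterior_mean([X1, X2], y, lambdas=[10.0, 1.0])
```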
- Subjects :
- *ALZHEIMER'S disease
- *MATRIX multiplications
Details
- Language :
- English
- ISSN :
- 10618600
- Volume :
- 29
- Issue :
- 1
- Database :
- Academic Search Index
- Journal :
- Journal of Computational & Graphical Statistics
- Publication Type :
- Academic Journal
- Accession number :
- 142799598
- Full Text :
- https://doi.org/10.1080/10618600.2019.1624294