Scalable and memory-efficient sparse learning for classification with approximate Bayesian regularization priors.
- Source :
- Neurocomputing. Oct 2021, Vol. 457, p106-116. 11p.
- Publication Year :
- 2021
Abstract
- Sparse Bayesian learning (SBL) provides state-of-the-art accuracy, sparsity, and probabilistic prediction for classification. In SBL, the regularization priors are determined automatically, which avoids exhaustive hyperparameter selection by cross-validation. However, SBL scales poorly to large problems because updating the regularization priors requires inverting a potentially enormous covariance matrix in every iteration. This paper develops an approximate SBL algorithm, ARP-SBL, in which the regularization priors are approximated without inverting the covariance matrix. The approach therefore scales easily to problems with large data sizes or feature dimensions, alleviating SBL's long training time and high memory complexity. Based on ARP-SBL, two scalable nonlinear SBL models are developed: a scalable relevance vector machine (ARP-RVM) for large data sizes and a scalable sparse Bayesian extreme learning machine (ARP-SBELM) for large feature sizes. Experiments on a variety of benchmarks show that the proposed models achieve accuracy competitive with existing methods while i) converging faster; ii) requiring thousands of times less memory; iii) avoiding exhaustive regularization hyperparameter selection; and iv) scaling easily to large data sizes and high-dimensional features. [ABSTRACT FROM AUTHOR]
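
The bottleneck the abstract describes is concrete in the classical SBL/RVM evidence-maximization loop (Tipping, 2001), where each iteration inverts the M x M posterior covariance to re-estimate the priors. Below is a minimal NumPy sketch of that classical loop, using regression-style updates for simplicity; the function name and tolerances are illustrative, and this is the baseline the paper improves on, not the ARP-SBL approximation itself.

```python
import numpy as np

def classical_sbl_update(Phi, t, n_iter=50):
    """Classical SBL / RVM evidence maximization (regression-style updates).

    Each iteration forms and inverts the M x M posterior covariance
    Sigma = (diag(alpha) + beta * Phi.T @ Phi)^{-1}; this O(M^3) inversion,
    repeated every iteration, is the scalability bottleneck ARP-SBL targets.
    """
    N, M = Phi.shape
    alpha = np.ones(M)          # per-weight regularization priors
    beta = 1.0                  # noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)  # the bottleneck
        mu = beta * Sigma @ Phi.T @ t                               # posterior mean
        gamma = 1.0 - alpha * np.diag(Sigma)   # "well-determined" degrees of freedom
        alpha = gamma / (mu**2 + 1e-12)        # MacKay re-estimate of the priors
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ mu)**2) + 1e-12)
    return mu, alpha

# Illustrative usage on synthetic data: most alphas grow large,
# pruning the corresponding weights and yielding a sparse solution.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((200, 50))
t = Phi[:, :3] @ np.array([1.5, -2.0, 0.7]) + 0.1 * rng.standard_normal(200)
mu, alpha = classical_sbl_update(Phi, t)
print("weights pruned (alpha very large):", int(np.sum(alpha > 1e6)))
```

Per the abstract, ARP-SBL's contribution is replacing the explicit `np.linalg.inv` step with an approximation of the regularization priors that never forms the full covariance, which is what removes the cubic time and quadratic memory cost.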
Details
- Language :
- English
- ISSN :
- 0925-2312
- Volume :
- 457
- Database :
- Academic Search Index
- Journal :
- Neurocomputing
- Publication Type :
- Academic Journal
- Accession Number :
- 152042170
- Full Text :
- https://doi.org/10.1016/j.neucom.2021.06.025