
Scalable Log Determinants for Gaussian Process Kernel Learning

Authors :
Dong, Kun
Eriksson, David
Nickisch, Hannes
Bindel, David
Wilson, Andrew Gordon
Publication Year :
2017

Abstract

For applications as varied as Bayesian neural networks, determinantal point processes, elliptical graphical models, and kernel learning for Gaussian processes (GPs), one must compute a log determinant of an $n \times n$ positive definite matrix, and its derivatives, leading to prohibitive $\mathcal{O}(n^3)$ computations. We propose novel $\mathcal{O}(n)$ approaches to estimating these quantities from only fast matrix-vector multiplications (MVMs). These stochastic approximations are based on Chebyshev, Lanczos, and surrogate models, and converge quickly even for kernel matrices that have challenging spectra. We leverage these approximations to develop a scalable Gaussian process approach to kernel learning. We find that Lanczos is generally superior to Chebyshev for kernel learning, and that a surrogate approach can be highly efficient and accurate with popular kernels.

Comment: Appears at Advances in Neural Information Processing Systems 30 (NIPS), 2017
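To illustrate the kind of MVM-only estimator the abstract describes, below is a minimal NumPy sketch of stochastic Lanczos quadrature for $\log \det(A) = \operatorname{tr}(\log A)$: Hutchinson-style Rademacher probe vectors combined with a Lanczos tridiagonalization that touches $A$ only through matrix-vector products. This is a generic sketch of the technique, not the authors' released implementation; the function names (lanczos, slq_logdet) and all parameter defaults are illustrative assumptions.

    import numpy as np

    def lanczos(matvec, v0, m):
        # Run m steps of Lanczos on a symmetric operator (accessed only
        # through matvec) starting from the unit vector v0. Returns the
        # diagonal and off-diagonal of the tridiagonal matrix T_m.
        n = v0.shape[0]
        alphas, betas = np.zeros(m), np.zeros(max(m - 1, 0))
        V = np.zeros((n, m))
        V[:, 0] = v0
        w = matvec(v0)
        alphas[0] = v0 @ w
        w -= alphas[0] * v0
        for j in range(1, m):
            beta = np.linalg.norm(w)
            if beta < 1e-10:          # hit an invariant subspace; stop early
                return alphas[:j], betas[:j - 1]
            betas[j - 1] = beta
            v = w / beta
            # Full reorthogonalization: simple but O(n m^2); fine for a sketch.
            v -= V[:, :j] @ (V[:, :j].T @ v)
            v /= np.linalg.norm(v)
            V[:, j] = v
            w = matvec(v)
            alphas[j] = v @ w
            w -= alphas[j] * v + betas[j - 1] * V[:, j - 1]
        return alphas, betas

    def slq_logdet(matvec, n, num_probes=30, m=25, seed=None):
        # Estimate log det(A) = tr(log A) for symmetric positive definite A,
        # using only matrix-vector products with A.
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(num_probes):
            z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe, E[zz^T] = I
            alphas, betas = lanczos(matvec, z / np.sqrt(n), m)
            # Gauss quadrature rule from T_m: nodes are its eigenvalues,
            # weights the squared first components of its eigenvectors.
            T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
            theta, U = np.linalg.eigh(T)
            total += n * (U[0, :] ** 2) @ np.log(theta)
        return total / num_probes

    # Quick sanity check on a small random SPD matrix (exact value via slogdet):
    n = 500
    B = np.random.default_rng(0).standard_normal((n, n))
    A = B @ B.T / n + np.eye(n)
    print(slq_logdet(lambda x: A @ x, n, seed=1), np.linalg.slogdet(A)[1])

Each probe costs m matrix-vector products, so when the kernel matrix admits fast MVMs the overall cost scales linearly in n, which is the regime the paper targets.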

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1106279391
Document Type :
Electronic Resource