
Scalable Gaussian Process Hyperparameter Optimization via Coverage Regularization

Authors:
Wood, Killian
Dunton, Alec M.
Muyskens, Amanda
Priest, Benjamin W.
Publication Year:
2022

Abstract

Gaussian processes (GPs) are Bayesian non-parametric models popular in a variety of applications due to their accuracy and native uncertainty quantification (UQ). Tuning GP hyperparameters is critical to ensuring valid prediction accuracy and uncertainty; uniquely estimating multiple hyperparameters in, e.g., the Matérn kernel can also be a significant challenge. Moreover, training GPs on large-scale datasets is a highly active area of research: traditional maximum likelihood hyperparameter training requires quadratic memory to form the covariance matrix and has cubic training complexity. To address the scalable hyperparameter tuning problem, we present a novel algorithm that estimates the smoothness and length-scale parameters of the Matérn kernel in order to improve the robustness of the resulting prediction uncertainties. Using novel loss functions, similar to those in conformal prediction algorithms, within the computational framework provided by the hyperparameter estimation algorithm MuyGPs, we achieve improved UQ over leave-one-out likelihood maximization while maintaining a high degree of scalability, as demonstrated in numerical experiments.

Comment: 4 pages of content, 3 figures, 6 tables
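To make the scaling bottleneck mentioned in the abstract concrete, the following is a minimal sketch (not the paper's MuyGPs method) of standard maximum likelihood GP training with a Matérn 5/2 kernel: forming the covariance matrix requires quadratic memory, and the Cholesky factorization inside the log marginal likelihood costs cubic time in the number of training points. The function names and the choice of the 5/2 smoothness are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def matern52(X1, X2, length_scale=1.0, variance=1.0):
    # Matérn nu=5/2 covariance; the smoothness and length_scale are the
    # kinds of hyperparameters the abstract discusses estimating.
    d = np.sqrt(((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
    r = np.sqrt(5.0) * d / length_scale
    return variance * (1.0 + r + r ** 2 / 3.0) * np.exp(-r)

def neg_log_marginal_likelihood(X, y, length_scale=1.0, variance=1.0, noise=1e-6):
    # Forming K uses O(n^2) memory; the Cholesky factorization below is
    # O(n^3) time -- the scalability bottleneck the abstract refers to.
    n = len(y)
    K = matern52(X, X, length_scale, variance) + noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * n * np.log(2.0 * np.pi)
```

In practice one would minimize this objective over the kernel hyperparameters; scalable approaches such as MuyGPs avoid the full cubic-cost factorization by working with local nearest-neighbor structure instead.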

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2209.11280
Document Type:
Working Paper