
Interpretable deep Gaussian processes with moments

Authors :
Lu, Chi-Ken
Yang, Scott Cheng-Hsin
Hao, Xiaoran
Shafto, Patrick
Source :
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS) 2020
Publication Year :
2019

Abstract

Deep Gaussian Processes (DGPs) combine the expressiveness of Deep Neural Networks (DNNs) with the quantified uncertainty of Gaussian Processes (GPs). Both the expressive power and the intractable inference result from the non-Gaussian distribution over composed functions. We propose an interpretable DGP based on approximating the DGP as a GP by calculating the exact moments, which additionally identifies the heavy-tailed nature of some DGP distributions. Consequently, our approach admits interpretation both as NNs with specified activation functions and as a variational approximation to the DGP. We identify the expressivity parameter of the DGP and find non-local, non-stationary correlation arising from DGP composition. We provide general recipes for deriving the effective kernels for DGPs of two, three, or infinitely many layers, composed of homogeneous or heterogeneous kernels. Results illustrate the expressiveness of the effective kernels through samples from the prior and inference on simulated and real data, demonstrate the advantages of interpretability through analysis of the analytic forms, and draw relations and equivalences across kernels.

Comment: Preprint with 12 pages and 3 figures. The updated version (Oct 9, 2019) considers the second and fourth moments, inspecting the heavy-tailed nature of the DGP distribution and justifying the validity of approximating the DGP as a GP. A connection with the expressivity parameter in the Poole et al. NIPS paper is also added. New reference and 4th moments of SC[]
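To make the moment-matching idea concrete, the following is a minimal sketch (not the paper's exact derivation) for a two-layer DGP whose outer layer uses a squared-exponential (SE) kernel. If the inner layer is f ~ GP(0, k1), then f(x) − f(x') is Gaussian with variance v = k1(x,x) + k1(x',x') − 2 k1(x,x'), and the second moment E[exp(−(f(x)−f(x'))² / (2ℓ²))] has the closed form 1/√(1 + v/ℓ²), which serves as the effective kernel of the composition. The function names and parameter choices below are illustrative assumptions.

```python
import numpy as np

def se_kernel(x, y, length=1.0):
    """Squared-exponential kernel for scalar inputs (illustrative inner kernel)."""
    return np.exp(-(x - y) ** 2 / (2.0 * length ** 2))

def effective_kernel(x, y, inner=se_kernel, outer_length=1.0):
    """Exact second moment of an SE outer layer composed with a GP inner layer.

    Uses the Gaussian identity E[exp(-Z^2 / (2 l^2))] = 1/sqrt(1 + v/l^2)
    for Z ~ N(0, v), with v the variance of f(x) - f(y) under the inner GP.
    """
    v = inner(x, x) + inner(y, y) - 2.0 * inner(x, y)
    return 1.0 / np.sqrt(1.0 + v / outer_length ** 2)

# Monte Carlo check: sample the inner GP at two points and average the outer kernel.
rng = np.random.default_rng(0)
x, y = 0.3, 1.1
cov = np.array([[se_kernel(x, x), se_kernel(x, y)],
                [se_kernel(y, x), se_kernel(y, y)]])
f = rng.multivariate_normal(np.zeros(2), cov, size=200_000)
mc = np.exp(-(f[:, 0] - f[:, 1]) ** 2 / 2.0).mean()
print(effective_kernel(x, y), mc)  # the closed form and the MC estimate should agree
```

The same recipe extends to deeper compositions: the effective kernel of one layer becomes the inner kernel of the next, which is how the closed forms for three or infinitely many layers are built up.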

Details

Database :
arXiv
Journal :
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS) 2020
Publication Type :
Report
Accession number :
edsarx.1905.10963
Document Type :
Working Paper