
Statistical generalization performance guarantee for meta-learning with data dependent prior

Authors:
Jie Lu
Zheng Yan
Tianyu Liu
Guangquan Zhang
Source:
Neurocomputing. 465:391-405
Publication Year:
2021
Publisher:
Elsevier BV, 2021.

Abstract

Meta-learning aims to leverage experience from previous tasks to achieve effective and fast adaptation when encountering new tasks. However, it is unclear how well a meta-learned model generalizes to new tasks. Probably approximately correct (PAC) Bayes bound theory provides a theoretical framework for analyzing the generalization performance of meta-learning, with an explicit numerical upper bound on the generalization error; a tighter upper bound indicates better generalization performance. In existing PAC-Bayes meta-learning bounds, however, the prior distribution is chosen arbitrarily, which results in poor generalization performance. In this paper, we derive three novel generalization error upper bounds for meta-learning based on the PAC-Bayes relative entropy bound. Furthermore, to avoid an arbitrarily chosen prior distribution, we develop a data-dependent prior for the PAC-Bayes meta-learning bound algorithm based on the empirical risk minimization (ERM) method, and analyze its sample complexity and computational complexity. The experiments illustrate that the three proposed PAC-Bayes bounds for meta-learning achieve a competitive generalization guarantee, and that the extended PAC-Bayes bound with a data-dependent prior achieves rapid convergence.
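
For context, the meta-learning bounds referenced above build on the single-task PAC-Bayes relative entropy (kl) bound. The following is the classical form of that bound, stated with generic notation (Q, P, m, delta are illustrative symbols, not taken from this paper) and given as a sketch rather than as the paper's own result: with probability at least $1-\delta$ over an i.i.d. sample $S$ of size $m$, simultaneously for all posterior distributions $Q$,

$$\mathrm{kl}\!\left(\hat{L}_S(Q)\,\middle\|\,L(Q)\right) \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{m}}{\delta}}{m},$$

where $P$ is a prior fixed before observing $S$, $\hat{L}_S(Q)$ and $L(Q)$ are the empirical and expected risks of the randomized (Gibbs) predictor drawn from $Q$, and $\mathrm{kl}(q\,\|\,p)$ denotes the KL divergence between Bernoulli distributions with parameters $q$ and $p$. The paper extends bounds of this form to the meta-learning setting and replaces the data-independent prior $P$ with an ERM-based data-dependent prior.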

Details

ISSN :
0925-2312
Volume :
465
Database :
OpenAIRE
Journal :
Neurocomputing
Accession number :
edsair.doi.dedup.....6e3d0794a2d73b062f4eff6c77cbd60b