
Online Hyperparameter Meta-Learning with Hypergradient Distillation

Authors:
Lee, Hae Beom
Lee, Hayeon
Shin, Jaewoong
Yang, Eunho
Hospedales, Timothy
Hwang, Sung Ju
Publication Year:
2021
Publisher:
arXiv, 2021.

Abstract

Many gradient-based meta-learning methods assume a set of parameters that do not participate in inner-optimization, which can be considered as hyperparameters. Although such hyperparameters can be optimized using existing gradient-based hyperparameter optimization (HO) methods, those methods suffer from the following issues: unrolled differentiation methods do not scale well to high-dimensional hyperparameters or long horizons; Implicit Function Theorem (IFT) based methods are restrictive for online optimization; and short-horizon approximations suffer from short-horizon bias. In this work, we propose a novel HO method that overcomes these limitations by approximating the second-order term with knowledge distillation. Specifically, we parameterize a single Jacobian-vector product (JVP) for each HO step and minimize its distance to the true second-order term. Our method allows online optimization and is also scalable in both the hyperparameter dimension and the horizon length. We demonstrate the effectiveness of our method on two different meta-learning methods and three benchmark datasets.
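
The core idea described in the abstract, distilling the expensive second-order term of the hypergradient into a parameterized surrogate that is refit at every HO step and then used for the online hyperparameter update, can be illustrated on a toy problem. The sketch below is not the authors' code: the ridge-regression inner objective, the linear surrogate (`surrogate` with parameters `phi`), the single-step distillation target, and all learning rates are illustrative assumptions made to keep the example self-contained.

```python
import jax
import jax.numpy as jnp


def inner_loss(theta, lam, batch):
    # Hypothetical inner objective: ridge regression whose L2 weight exp(lam)
    # is the hyperparameter being tuned.
    x, y = batch
    return jnp.mean((x @ theta - y) ** 2) + jnp.exp(lam) * jnp.sum(theta ** 2)


def val_loss(theta, batch):
    x, y = batch
    return jnp.mean((x @ theta - y) ** 2)


def inner_step(theta, lam, batch, lr=1e-2):
    # One SGD step of the inner optimization.
    return theta - lr * jax.grad(inner_loss)(theta, lam, batch)


def true_second_order_term(theta, lam, train_batch, val_batch):
    # (d theta_next / d lam)^T grad_theta L_val(theta_next), computed exactly
    # for a single inner step; this serves as the distillation target.
    theta_next, pullback = jax.vjp(lambda l: inner_step(theta, l, train_batch), lam)
    v = jax.grad(val_loss)(theta_next, val_batch)
    (term,) = pullback(v)
    return term


def surrogate(phi, theta, lam):
    # Hypothetical linear surrogate g_phi(theta, lam) standing in for the
    # parameterized JVP mentioned in the abstract.
    feats = jnp.concatenate([theta, jnp.atleast_1d(lam)])
    return feats @ phi["w"] + phi["b"]


def distill_loss(phi, theta, lam, target):
    # Distance between the surrogate output and the true second-order term.
    return jnp.sum((surrogate(phi, theta, lam) - target) ** 2)


def ho_step(theta, lam, phi, train_batch, val_batch,
            outer_lr=1e-1, distill_lr=1e-2):
    # 1) Distill: fit the surrogate to the true one-step second-order term.
    target = true_second_order_term(theta, lam, train_batch, val_batch)
    grads = jax.grad(distill_loss)(phi, theta, lam, target)
    phi = jax.tree_util.tree_map(lambda p, g: p - distill_lr * g, phi, grads)
    # 2) Online hyperparameter update, using the surrogate as the hypergradient
    #    estimate (the direct term d L_val / d lam is zero in this toy setup).
    lam = lam - outer_lr * surrogate(phi, theta, lam)
    # 3) Ordinary inner update with the new hyperparameter.
    theta = inner_step(theta, lam, train_batch)
    return theta, lam, phi


# Toy usage with synthetic data (shapes only; not an experiment from the paper).
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 8))
y = jax.random.normal(key, (32,))
theta = jnp.zeros(8)
lam = jnp.array(0.0)
phi = {"w": jnp.zeros(9), "b": jnp.array(0.0)}
theta, lam, phi = ho_step(theta, lam, phi, (x, y), (x, y))
```

In the method proposed in the paper, the parameterized JVP replaces differentiation through many unrolled inner steps, which is what keeps the per-step cost roughly constant in both horizon length and hyperparameter dimension; the toy above only illustrates the distill-then-update loop on a single-step target.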

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....0814b4094ad32a2eff80eb02287e1b23
Full Text:
https://doi.org/10.48550/arxiv.2110.02508