
Hydra: Preserving Ensemble Diversity for Model Distillation

Authors:
Tran, Linh
Veeling, Bastiaan S.
Roth, Kevin
Swiatkowski, Jakub
Dillon, Joshua V.
Snoek, Jasper
Mandt, Stephan
Salimans, Tim
Nowozin, Sebastian
Jenatton, Rodolphe
Publication Year:
2020

Abstract

Ensembles of models have been empirically shown to improve predictive performance and to yield robust measures of uncertainty, but they are expensive in computation and memory. Recent research has therefore focused on distilling an ensemble into a single compact model, reducing its computational and memory burden while trying to preserve its predictive behavior. Most existing distillation formulations summarize the ensemble by capturing only its average predictions; as a result, the diversity of the individual members' predictions is lost, and the distilled model cannot provide a measure of uncertainty comparable to that of the original ensemble. To more faithfully retain the diversity of the ensemble, we propose a distillation method based on a single multi-headed neural network, which we refer to as Hydra. A shared body network learns a joint feature representation that enables each head to capture the predictive behavior of one ensemble member. We demonstrate that, with only a slight increase in parameter count, Hydra improves distillation performance in classification and regression settings while capturing the uncertainty behavior of the original ensemble on both in-domain and out-of-distribution tasks.

Comment: Accepted to the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning
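The architecture described in the abstract lends itself to a compact implementation: a shared feature extractor feeding several lightweight heads, each trained to match the soft predictions of one ensemble member. Below is a minimal PyTorch sketch under that reading; the module names, layer sizes, and KL-based matching loss are illustrative assumptions, not taken from the authors' code.

# Minimal sketch of a Hydra-style multi-headed distillation model (assumed
# structure for illustration only; not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HydraSketch(nn.Module):
    """Shared body with one lightweight head per ensemble member."""
    def __init__(self, in_dim, hidden_dim, num_classes, num_heads):
        super().__init__()
        # Shared body learns a joint feature representation.
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One small head per ensemble member, each mimicking that member.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, num_classes) for _ in range(num_heads)]
        )

    def forward(self, x):
        z = self.body(x)
        # Per-head logits, shape (num_heads, batch, num_classes).
        return torch.stack([head(z) for head in self.heads])

def distillation_loss(head_logits, teacher_probs):
    """Match each head to its corresponding ensemble member via KL divergence."""
    num_heads, batch, num_classes = head_logits.shape
    log_probs = F.log_softmax(head_logits, dim=-1).reshape(-1, num_classes)
    targets = teacher_probs.reshape(-1, num_classes)
    # 'batchmean' yields the mean KL per (head, example) pair.
    return F.kl_div(log_probs, targets, reduction="batchmean")

# Toy usage: distill a 4-member ensemble's soft predictions on random data.
model = HydraSketch(in_dim=32, hidden_dim=64, num_classes=10, num_heads=4)
x = torch.randn(8, 32)
teacher_probs = torch.softmax(torch.randn(4, 8, 10), dim=-1)  # per-member targets
loss = distillation_loss(model(x), teacher_probs)
loss.backward()

At test time, the heads' predictions can be combined in the same way as the original ensemble's (e.g., averaged), while their spread provides the per-member diversity that a single averaged-prediction distillate would discard.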

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2001.04694
Document Type:
Working Paper