Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation
- Publication Year :
- 2019
Abstract
- In this work we aim to obtain computationally efficient uncertainty estimates with deep networks. To this end, we propose a modified knowledge distillation procedure that achieves state-of-the-art uncertainty estimates for both in-distribution and out-of-distribution samples. Our contributions include a) demonstrating and adapting to distillation's regularization effect, b) proposing a novel target teacher distribution, c) a simple augmentation procedure to improve out-of-distribution uncertainty estimates, and d) shedding light on the distillation procedure through a comprehensive set of experiments.
Comment: Submitted to the ICML 2019 Workshop on Uncertainty & Robustness in Deep Learning (poster & spotlight talk)
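For context, the abstract builds on the standard knowledge-distillation objective: a student network is trained to match the teacher's temperature-softened output distribution. The sketch below shows that generic (Hinton-style) KL loss only; the paper's modified target teacher distribution and augmentation procedure are not reproduced here, and all function names are illustrative.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Generic distillation loss: KL(teacher || student) at temperature T,
    scaled by T^2 as is conventional so gradients are comparable across T.
    (Illustrative only -- not the paper's modified target distribution.)"""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(kl.mean() * T * T)
```

The softened teacher probabilities carry the "dark knowledge" about relative class similarities that a one-hot label discards, which is what makes a distilled student a candidate for cheap evaluation-time uncertainty estimates.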
- Subjects :
- Computer Science - Machine Learning
Statistics - Machine Learning
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.1906.05419
- Document Type :
- Working Paper