A Probabilistic Re-Interpretation of Confidence Scores in Multi-Exit Models

Authors :
Jary Pomponi
Simone Scardapane
Aurelio Uncini
Source :
Entropy, Vol 24, Iss 1, p 1 (2021)
Publication Year :
2021
Publisher :
MDPI AG, 2021.

Abstract

In this paper, we propose a new approach to training a deep neural network with multiple intermediate auxiliary classifiers branching from it. These ‘multi-exit’ models can reduce inference time by exiting early at an intermediate branch whenever the confidence of its prediction exceeds a threshold. They rely on the assumption that not all samples require the same amount of processing to yield a good prediction. We propose a way to jointly train all the branches of a multi-exit model without additional hyper-parameters, by weighting the predictions from each branch with a trained confidence score. Each confidence score is an approximation of the real one produced by the branch, and it is calculated and regularized while training the rest of the model. We evaluate our proposal on a set of image classification benchmarks, using different neural models and early-exit stopping criteria.
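The early-exit inference rule the abstract describes can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: it assumes the maximum softmax probability is used as the confidence score and that a user-chosen threshold decides when to stop; the function names and threshold value are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def early_exit_predict(branch_logits, threshold=0.9):
    """Confidence-thresholded early exit over a list of branch outputs.

    branch_logits: 1-D logit arrays, one per auxiliary classifier,
    ordered from the earliest (cheapest) exit to the final classifier.
    Returns (predicted_class, exit_index).
    """
    for i, logits in enumerate(branch_logits):
        probs = softmax(logits)
        if probs.max() >= threshold:      # confident enough: stop here
            return int(probs.argmax()), i
    # No intermediate branch was confident; fall back to the last classifier.
    probs = softmax(branch_logits[-1])
    return int(probs.argmax()), len(branch_logits) - 1
```

At inference time, a confident early branch saves the cost of evaluating the remaining layers; samples that no branch is confident about are processed by the full network, matching the assumption that not all inputs need the same amount of computation.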

Details

Language :
English
ISSN :
1099-4300
Volume :
24
Issue :
1
Database :
Directory of Open Access Journals
Journal :
Entropy
Publication Type :
Academic Journal
Accession number :
edsdoj.0bfeb80bee04472fa56ec10de2f528ab
Document Type :
Article
Full Text :
https://doi.org/10.3390/e24010001