
Uncertainty Aware Training to Improve Deep Learning Model Calibration for Classification of Cardiac MR Images

Authors :
Dawood, Tareen
Chen, Chen
Sidhu, Baldeep S.
Ruijsink, Bram
Gould, Justin
Porter, Bradley
Elliott, Mark K.
Mehta, Vishal
Rinaldi, Christopher A.
Puyol-Antón, Esther
Razavi, Reza
King, Andrew P.
Publication Year :
2023

Abstract

Quantifying the uncertainty of predictions has been identified as one way to develop more trustworthy artificial intelligence (AI) models beyond conventional reporting of performance metrics. When considering their role in a clinical decision support setting, AI classification models should ideally avoid confident wrong predictions and maximise the confidence of correct predictions. Models that do this are said to be well-calibrated with regard to confidence. However, relatively little attention has been paid to how to improve calibration when training these models, i.e., to make the training strategy uncertainty-aware. In this work, we evaluate three novel uncertainty-aware training strategies, comparing them against two state-of-the-art approaches. We analyse performance on two different clinical applications: cardiac resynchronisation therapy (CRT) response prediction and coronary artery disease (CAD) diagnosis from cardiac magnetic resonance (CMR) images. The best-performing model in terms of both classification accuracy and the most common calibration measure, expected calibration error (ECE), was the Confidence Weight method, a novel approach that weights the loss of samples to explicitly penalise confident incorrect predictions. The method reduced the ECE by 17% for CRT response prediction and by 22% for CAD diagnosis when compared to a baseline classifier in which no uncertainty-aware strategy was included. In both applications, as well as reducing the ECE, the method slightly increased accuracy, from 69% to 70% for CRT response prediction and from 70% to 72% for CAD diagnosis. However, our analysis showed a lack of consistency in terms of optimal models when using different calibration measures. This indicates the need for careful consideration of performance metrics when training and selecting models for complex high-risk applications in healthcare.
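The abstract refers to two technical ingredients: the expected calibration error (ECE) used to measure calibration, and a loss weighting that penalises confident incorrect predictions. The sketch below is not the authors' code; it shows a standard binned ECE computation and an illustrative confidence-weighted cross-entropy loss. The function names, the weighting form, and the alpha parameter are assumptions introduced for illustration and may differ from the paper's exact formulation.

```python
# Minimal sketch (not the paper's implementation): binned ECE and a
# hypothetical confidence-weighted loss that up-weights samples that are
# predicted incorrectly with high confidence.

import numpy as np
import torch
import torch.nn.functional as F


def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Standard binned ECE: weighted mean of |accuracy - confidence| per bin."""
    confidences = np.asarray(confidences, dtype=float)
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = (predictions[in_bin] == labels[in_bin]).mean()
            conf = confidences[in_bin].mean()
            ece += in_bin.mean() * abs(acc - conf)
    return ece


def confidence_weighted_loss(logits, targets, alpha=1.0):
    """Cross-entropy where each sample's loss is scaled by a weight that grows
    with the confidence of an incorrect prediction (illustrative form only)."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    probs = torch.softmax(logits, dim=1)
    conf, preds = probs.max(dim=1)
    wrong = (preds != targets).float()
    weights = 1.0 + alpha * wrong * conf  # penalise confident wrong predictions
    return (weights.detach() * ce).mean()
```

In this sketch the weights are detached so the penalty only rescales each sample's contribution to the gradient rather than being optimised itself; this is a design choice of the illustration, not a claim about the method evaluated in the paper.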

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2308.15141
Document Type :
Working Paper
Full Text :
https://doi.org/10.1016/j.media.2023.102861