
Evaluating Uncertainty Quantification in Medical Image Segmentation: A Multi-Dataset, Multi-Algorithm Study.

Authors :
Jalal, Nyaz
Śliwińska, Małgorzata
Wojciechowski, Wadim
Kucybała, Iwona
Rozynek, Miłosz
Krupa, Kamil
Matusik, Patrycja
Jarczewski, Jarosław
Tabor, Zbisław
Source :
Applied Sciences (2076-3417); Nov 2024, Vol. 14 Issue 21, p10020, 25p
Publication Year :
2024

Abstract

Deep learning is revolutionizing various scientific fields, with medical applications at the forefront. One key focus is automating image segmentation, a process crucial in many clinical services. However, medical images are often ambiguous and challenging even for experts. To address this, reliable models need to quantify their uncertainty, allowing physicians to understand the model's confidence in its segmentation. This paper explores how the performance and uncertainty of a model are influenced by the number of annotations per input sample. We examine the effects of both single and multiple manual annotations on various deep learning architectures. To tackle this question, we employ three widely recognized deep learning architectures and evaluate them across four publicly available datasets. Furthermore, we explore the effects of dropout rates on Monte Carlo models by examining uncertainty models with dropout rates of 20%, 40%, 60%, and 80%. Subsequently, we evaluate the models using various measurement metrics. The findings reveal that the influence of multiple annotations varies significantly depending on the datasets. Additionally, we observe that the dropout rate has minimal or no impact on the model's performance unless there is a substantial loss of training data, primarily evident in the 80% dropout rate scenario. [ABSTRACT FROM AUTHOR]
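The study centers on Monte Carlo dropout as the uncertainty mechanism, so a brief illustration may help. The sketch below is a minimal, hypothetical example of MC dropout uncertainty estimation for segmentation; the tiny network, 20% dropout rate, and number of stochastic samples are assumptions for illustration only, not the architectures or settings evaluated in the paper.

```python
# Minimal sketch of Monte Carlo dropout uncertainty estimation for segmentation.
# The network, dropout rate, and sample count are illustrative assumptions,
# not the configurations studied in the paper.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, p_drop=0.2, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),                 # dropout kept stochastic at test time
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.features(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Run repeated stochastic forward passes; return mean softmax and predictive entropy."""
    model.train()  # keeps Dropout2d active; the net has no BatchNorm, so this is safe
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
    )                                             # (T, B, C, H, W)
    mean_probs = probs.mean(dim=0)                # averaged segmentation prediction
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
    return mean_probs, entropy                    # entropy serves as a per-pixel uncertainty map

model = TinySegNet(p_drop=0.2)
image = torch.randn(1, 1, 64, 64)                 # dummy single-channel input
mean_probs, uncertainty = mc_dropout_predict(model, image)
print(mean_probs.shape, uncertainty.shape)        # (1, 2, 64, 64) and (1, 64, 64)
```

Higher dropout rates (e.g., the 80% setting mentioned in the abstract) would be passed via `p_drop`; the abstract reports that performance degrades mainly at that extreme.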

Details

Language :
English
ISSN :
2076-3417
Volume :
14
Issue :
21
Database :
Complementary Index
Journal :
Applied Sciences (2076-3417)
Publication Type :
Academic Journal
Accession number :
180783033
Full Text :
https://doi.org/10.3390/app142110020