QU-BraTS: MICCAI BraTS 2020 challenge on quantifying uncertainty in brain tumor segmentation: analysis of ranking scores and benchmarking results
- Publication Year :
- 2022
Abstract
- Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS) and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and low confidence in incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent teams participating in QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS.
- Research reported in this publication was partly supported by the Informatics Technology for Cancer Research (ITCR) program of the National Cancer Institute (NCI) of the National Institutes of Health (NIH), under award numbers NIH/NCI/ITCR:U01CA242871 and NIH/NCI/ITCR:U24CA189523. It was also partly supported by the National Institute of Neurological Disorders and Stroke (NINDS) of the NIH, under award number NIH/NINDS:R01NS042645. The content of this publication is solely the responsibility of the authors and does not represent the official views of the NIH. This work was also supported by a Canadian Natural Sciences and Engineering Research Council (NSERC) Collaborative Research and Development Grant (CRDPJ 505357-16), Synaptive Medical, and the Canada Institute for Advanced Research (CIFAR) Artificial Intelligence Chairs program.
- Peer Reviewed
- Document signed by 92 authors: Raghav Mehta1, Angelos Filos2, Ujjwal Baid3,4,5, Chiharu Sako3,4, Richard McKinley6, Michael Rebsamen6, Katrin Dätwyler6,53, Raphael Meier54, Piotr Radojewski6, Gowtham Krishnan Murugesan7, Sahil Nalawade7, Chandan Ganesh7, Ben Wagner7, Fang F. Yu7, Baowei Fei8, Ananth J. Madhuranthakam7,9, Joseph A. Maldjian7,9, Laura Daza10, Catalina Gómez10, Pablo Arbeláez10, Chengliang Dai11, Shuo Wang11, Hadrien Raynaud11, Yuanhan Mo11, Elsa Angelini12, Yike Guo11, Wenjia Bai11,13, Subhashis Banerjee14,15,16, Linmin Pei17, Murat Ak17, Sarahi Rosas-González18, Illyess Zemmoura18,52, Clovis Tauber18, Minh H. Vu19, Tufve Nyholm19, Tommy Löfstedt20, Laura Mora Ballestar21, Veronica Vilaplana21, Hugh McHugh22,23, Gonzalo Maso Talou24, Alan Wang22,24, Jay Patel25,26, Ken Chang25,26, Katharina Hoebel25,26, Mishka Gidwani25, Nishanth Arun25, Sharut Gupta25, Mehak Aggarwal25, Praveer Singh25, Elizabeth R. Gerstner25, Jayashree Kalpathy-Cramer25, Nicolas Boutry27, Alexis Huard27, Lasitha Vidyaratne28, Md Monibor Rahman28, Khan M. Iftekharuddin28, Joseph Chazalon29, Elodie Puybareau29, Guillaume Tochon29, Jun Ma30, Mariano Cabezas31, Xavier Lladó31, Arnau Oliver31, Liliana Valencia31, Sergi Valverde31, Mehdi Amian32, Mohammadreza Soltaninejad33, Andriy Myronenko34, Ali Hatamizadeh34, Xue Feng35, Quan Dou35, Nicholas Tustison36, Craig Meyer35,36, Nisarg A. Shah37, Sanjay Talbar38, Marc-André Weber39, Abhishek Mahajan48, Andras Jakab47, Roland Wiest6,46, Hassan M. Fathallah-Shaykh45, Arash Nazeri40, Mikhail Milchenko40,44, Daniel Marcus40,44, Aikaterini Kotrotsou43, Rivka Colen43, John Freymann41,42, Justin Kirby41,42, Christos Davatzikos3,4, Bjoern Menze49,50, Spyridon Bakas∗3,4,5, Yarin Gal∗2, Tal Arbel∗1,51 // 1Centre for Intelligent Machines (CIM), McGill University, Montreal, QC, Canada, 2Oxford Applied and Theoretical Machine Learning (OATML) Group, University of Oxford, Oxford, England, 3Center for Biomedical Image Comp
- Postprint (published version)
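- The abstract characterizes the QU-BraTS score only qualitatively; the authoritative implementation is in the repository linked above. Purely as an illustration, the NumPy sketch below shows one way a threshold-filtering score with those reward/penalty properties can be computed, under assumptions not stated in the abstract: uncertainty maps normalized to [0, 100], a fixed sweep of filtering thresholds, Dice evaluated on the voxels retained at each threshold, and penalties for the fraction of filtered-out true positives and true negatives. The function name, threshold set, and exact aggregation here are hypothetical.

```python
import numpy as np

def filtered_uncertainty_score(pred, target, unc, thresholds=(25, 50, 75, 100)):
    """Illustrative sketch only; not the official QU-BraTS evaluation code.

    pred, target : binary NumPy arrays of the same shape.
    unc          : voxel-wise uncertainty, assumed normalized to [0, 100].
    """
    thresholds = np.asarray(thresholds, dtype=float)
    tp_all = np.sum((pred == 1) & (target == 1))  # true positives with nothing filtered
    tn_all = np.sum((pred == 0) & (target == 0))  # true negatives with nothing filtered
    dices, ftp_ratios, ftn_ratios = [], [], []
    for tau in thresholds:
        keep = unc <= tau                          # retain voxels the model is confident about
        p, t = pred[keep], target[keep]
        tp = np.sum((p == 1) & (t == 1))
        fp = np.sum((p == 1) & (t == 0))
        fn = np.sum((p == 0) & (t == 1))
        tn = np.sum((p == 0) & (t == 0))
        dices.append(2 * tp / max(2 * tp + fp + fn, 1))    # Dice on retained voxels
        ftp_ratios.append((tp_all - tp) / max(tp_all, 1))  # fraction of TPs filtered out
        ftn_ratios.append((tn_all - tn) / max(tn_all, 1))  # fraction of TNs filtered out
    # Normalized area under each curve across the threshold sweep.
    span = thresholds[-1] - thresholds[0]
    auc = lambda ys: np.trapz(ys, x=thresholds) / span
    # Confident correct voxels raise the Dice AUC; filtering away correct voxels
    # (under-confident correct assertions) raises the FTP/FTN AUCs and is penalized.
    return (auc(dices) + (1 - auc(ftp_ratios)) + (1 - auc(ftn_ratios))) / 3.0


# Toy usage: the erroneous voxels also carry the highest uncertainty,
# so filtering them out raises the Dice on the retained voxels.
pred   = np.array([1, 1, 0, 0])
target = np.array([1, 0, 0, 1])
unc    = np.array([5, 90, 10, 80])
print(filtered_uncertainty_score(pred, target, unc))
```

- The key design point this sketch captures is the trade-off described above: an uncertainty map scores well only if removing its most uncertain voxels removes errors (raising filtered Dice) without also discarding many voxels the model had segmented correctly.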
Details
- Database :
- OAIster
- Notes :
- 54 p., application/pdf, English
- Publication Type :
- Electronic Resource
- Accession number :
- edsoai.on1372979863
- Document Type :
- Electronic Resource