
Biomedical image analysis competitions: The state of current participation practice

Authors :
Eisenmann, Matthias
Reinke, Annika
Weru, Vivienn
Tizabi, Minu Dietlinde
Isensee, Fabian
Adler, Tim J.
Godau, Patrick
Cheplygina, Veronika
Kozubek, Michal
Ali, Sharib
Gupta, Anubha
Kybic, Jan
Noble, Alison
Solórzano, Carlos Ortiz de
Pachade, Samiksha
Petitjean, Caroline
Sage, Daniel
Wei, Donglai
Wilden, Elizabeth
Alapatt, Deepak
Andrearczyk, Vincent
Baid, Ujjwal
Bakas, Spyridon
Balu, Niranjan
Bano, Sophia
Bawa, Vivek Singh
Bernal, Jorge
Bodenstedt, Sebastian
Casella, Alessandro
Choi, Jinwook
Commowick, Olivier
Daum, Marie
Depeursinge, Adrien
Dorent, Reuben
Egger, Jan
Eichhorn, Hannah
Engelhardt, Sandy
Ganz, Melanie
Girard, Gabriel
Hansen, Lasse
Heinrich, Mattias
Heller, Nicholas
Hering, Alessa
Huaulmé, Arnaud
Kim, Hyunjeong
Thambawita, Vajira
Zhao, Xin
Lund, Christina B.
Ren, Jintao
Yang, Lin
Source :
Eisenmann, M, Reinke, A, Weru, V, Tizabi, M D, Isensee, F, Adler, T J, Godau, P, Cheplygina, V, Kozubek, M, Ali, S, Gupta, A, Kybic, J, Noble, A, Solórzano, C O D, Pachade, S, Petitjean, C, Sage, D, Wei, D, Wilden, E, Alapatt, D, Andrearczyk, V, Baid, U, Bakas, S, Balu, N, Bano, S, Bawa, V S, Bernal, J, Bodenstedt, S, Casella, A, Choi, J, Commowick, O, Daum, M, Depeursinge, A, Dorent, R, Egger, J, Eichhorn, H, Engelhardt, S, Ganz, M, Girard, G, Hansen, L, Heinrich, M, Heller, N, Hering, A, Huaulmé, A, Kim, H, Thambawita, V, Zhao, X, Lund, C B, Ren, J, Yang, L & MICCAI challenge collaboration 2022, 'Biomedical image analysis competitions: The state of current participation practice', arXiv.org, pp. 1-30.
Publication Year :
2022

Abstract

The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.

Details

Database :
OAIster
Journal :
arXiv.org
Notes :
application/pdf, English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1372662490
Document Type :
Electronic Resource