Comparing artificial intelligence algorithms to 157 German dermatologists: the melanoma classification benchmark.
- Authors: Brinker, Titus J., Hekler, Achim, Hauschild, Axel, Berking, Carola, Schilling, Bastian, Enk, Alexander H., Haferkamp, Sebastian, Karoglan, Ante, von Kalle, Christof, Weichenthal, Michael, Sattler, Elke, Schadendorf, Dirk, Gaiser, Maria R., Klode, Joachim, and Utikal, Jochen S.
- Subjects: ACADEMIC medical centers; ALGORITHMS; ARTIFICIAL intelligence; BENCHMARKING (Management); DERMATOLOGISTS; DIAGNOSTIC imaging; COMPUTERS in medicine; MELANOMA; ARTIFICIAL neural networks; QUESTIONNAIRES; RECEIVER operating characteristic curves
- Abstract
Background: Several recent publications have demonstrated that convolutional neural networks can classify images of melanoma on par with board-certified dermatologists. However, the lack of a public human benchmark restricts the comparability of these algorithms' performance and thereby technical progress in this field.
Methods: An electronic questionnaire was sent to dermatologists at 12 German university hospitals. Each questionnaire comprised 100 dermoscopic and 100 clinical images (80 nevus images and 20 biopsy-verified melanoma images each), all open source. The questionnaire recorded factors such as years of experience in dermatology, skin checks performed, age, sex and rank within the university hospital or status as a resident physician. For each image, the dermatologists were asked to provide a management decision (treat/biopsy the lesion or reassure the patient). The main outcome measures were sensitivity, specificity and the area under the receiver operating characteristic curve (ROC).
Results: In total, 157 dermatologists assessed all 100 dermoscopic images with an overall sensitivity of 74.1%, a specificity of 60.0% and an ROC area of 0.67 (range = 0.538–0.769); 145 dermatologists assessed all 100 clinical images with an overall sensitivity of 89.4%, a specificity of 64.4% and an ROC area of 0.769 (range = 0.613–0.9). Results between the test sets differed significantly (P < 0.05), confirming the need for a standardised benchmark.
Conclusions: We present the first public melanoma classification benchmark for both non-dermoscopic and dermoscopic images, allowing artificial intelligence algorithms to be compared with the diagnostic performance of 145 or 157 dermatologists, respectively. The Melanoma Classification Benchmark should be considered a reference standard for white-skinned Western populations in the field of binary algorithmic melanoma classification.
Highlights:
• This paper provides the first open-access melanoma classification benchmark for both non-dermoscopic and dermoscopic images.
• Algorithms can now be easily compared to the performance of dermatologists in terms of sensitivity, specificity and ROC.
• The melanoma benchmark allows comparability between algorithms of different publications and provides a new reference standard. [ABSTRACT FROM AUTHOR]
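The outcome measures named in the abstract (sensitivity, specificity and the ROC area) can be sketched for a binary melanoma-versus-nevus classifier as follows. This is a minimal illustration only; the labels, management decisions and classifier scores below are invented toy data, not values from the study.

```python
# Sketch of the abstract's outcome measures for a binary classifier.
# Convention (assumed): 1 = melanoma (treat/biopsy), 0 = nevus (reassure).

def sensitivity_specificity(labels, preds):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity:
    the probability that a random melanoma scores above a random nevus."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: 3 melanomas and 5 nevi (invented for illustration).
labels = [1, 1, 1, 0, 0, 0, 0, 0]
preds  = [1, 1, 0, 0, 0, 1, 0, 0]                    # management decisions
scores = [0.9, 0.7, 0.4, 0.3, 0.2, 0.6, 0.1, 0.05]   # classifier outputs

sens, spec = sensitivity_specificity(labels, preds)
auc = auroc(labels, scores)
```

In the paper's setting, each dermatologist's treat/reassure decisions yield one sensitivity/specificity point, while an algorithm's continuous scores trace out a full ROC curve that can be compared against those points.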
- Published: 2019