Ilwoo Park, Seung Seog Han, Sung Eun Chang, Seong Hwan Kim, Gyeong Hun Park, Myoung Shin Kim, Jung Im Na, Ju Hee Lee, Woohyung Lim, Ik Jun Moon, and Keewon Kim
Background
The diagnostic performance of convolutional neural networks (CNNs) for several types of skin neoplasms has been shown to be comparable with that of dermatologists using clinical photography. However, generalizability should be demonstrated on a large-scale external dataset that covers most types of skin neoplasms. In this study, the performance of a neural network algorithm was compared with that of dermatologists in both real-world practice and experimental settings.

Methods and findings
To assess generalizability, the skin cancer detection algorithm (https://rcnn.modelderm.com) developed in our previous study was used without modification. We conducted a retrospective study of all biopsied single-lesion cases (43 disorders; 40,331 clinical images from 10,426 cases: 1,222 malignant and 9,204 benign; mean [SD] age, 52.1 [18.3] years; 4,701 men [45.1%]) obtained from the Department of Dermatology, Severance Hospital, Seoul, Korea, between January 1, 2008, and March 31, 2019. Using this external validation dataset, the predictions of the algorithm were compared with the clinical diagnoses recorded, after thorough examination, by 65 attending physicians in real-world practice. In addition, the algorithm's results on randomly selected batches of 30 patients were compared with those of 44 dermatologists in an experimental setting, in which the dermatologists were given only multiple images of each lesion, without clinical information.

For the determination of malignancy, the algorithm achieved an area under the curve (AUC) of 0.863 (95% confidence interval [CI] 0.852–0.875) with unprocessed clinical photographs. At the predefined high-specificity threshold, its sensitivity and specificity were 62.7% (95% CI 59.9–65.1) and 90.0% (95% CI 89.4–90.6), respectively. The sensitivity and specificity of the attending physicians' first clinical impressions were 70.2% and 95.6%, respectively, which were superior to those of the algorithm (McNemar test; p < 0.0001). The positive and negative predictive values of the algorithm were 45.4% (95% CI 43.7–47.3) and 94.8% (95% CI 94.4–95.2), respectively, versus 68.1% and 96.0% for the first clinical impressions. In the reader test conducted with images from the batches of 30 patients, the sensitivity and specificity of the algorithm at the predefined threshold were 66.9% (95% CI 57.7–76.0) and 87.4% (95% CI 82.5–92.2), respectively, while those derived from the first impressions of the 44 participants were 65.8% (95% CI 55.7–75.9) and 85.7% (95% CI 82.4–88.9), respectively, comparable with the algorithm (Wilcoxon signed-rank test; p = 0.607 and 0.097). Limitations of this study include the exclusive use of high-quality clinical photographs taken in hospitals and the lack of ethnic diversity in the study population.

Conclusions
Our algorithm could diagnose skin tumors with nearly the same accuracy as dermatologists when diagnosis was based solely on photographs. However, because of limited data relevancy, its performance was inferior to that of an actual medical examination. To achieve more accurate predictive diagnoses, clinical information should be integrated with imaging information.
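For readers who wish to reproduce the style of evaluation reported above, the following is a minimal Python sketch of the reported metrics: sensitivity, specificity, PPV, and NPV at a fixed operating point, AUC, and a McNemar test on paired correctness. The variable names, the 0.5 threshold, and the synthetic data are illustrative assumptions, not the study's code or its predefined high-specificity threshold.

```python
# Minimal sketch of the evaluation metrics reported above, assuming per-case
# malignancy scores from a model and binary ground-truth labels. All names,
# thresholds, and data below are illustrative placeholders, not study data.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def threshold_metrics(y_true, scores, threshold):
    """Sensitivity, specificity, PPV, and NPV at a fixed operating point."""
    y_pred = (scores >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def mcnemar_test(correct_a, correct_b):
    """Continuity-corrected McNemar test on paired correctness of two raters."""
    b = np.sum(correct_a & ~correct_b)   # cases only rater A got right
    c = np.sum(~correct_a & correct_b)   # cases only rater B got right
    if b + c == 0:
        return 1.0                       # no discordant pairs
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return chi2.sf(stat, df=1)

# Synthetic data standing in for the validation set.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
scores = np.clip(y_true * 0.4 + rng.normal(0.3, 0.25, size=1000), 0, 1)

print("AUC:", roc_auc_score(y_true, scores))
print(threshold_metrics(y_true, scores, threshold=0.5))

algo_correct = (scores >= 0.5).astype(int) == y_true
md_correct = rng.random(1000) < 0.8      # hypothetical physician accuracy
print("McNemar p:", mcnemar_test(algo_correct, md_correct))
```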
Author summary

Why was this study done?
The diagnostic performance of artificial intelligence based on deep learning algorithms has been shown to be superior to, or at least comparable with, that of dermatologists. However, these comparisons were made in experimental reader tests with limited clinical information about the photographed skin abnormalities. Most studies performed internal validation, meaning that the training and validation images were drawn from the same source, and only a small number of disorders were validated in previous studies. These practical limitations and biases have complicated translation to actual practice.

What did the researchers do and find?
The performance of the neural network algorithm was compared with that of standard dermatologic practice for diagnosing almost all types of skin neoplasms on a large scale. The algorithm successfully screened for malignancy without lesion preselection by a dermatologist. In an experimental setting in which only images were provided for diagnosis, the algorithm's performance was comparable with that of the 44 dermatologists who took the reader test. However, its performance was inferior to that of the attending physicians who actually consulted with the patients. This highlights the value of clinical data, in addition to visual findings, for accurate diagnosis of cutaneous neoplasms.

What do these findings mean?
Given photographs of abnormal skin findings, the algorithm can work ceaselessly to determine the need for dermatologic consultation at a performance level comparable with that of dermatologists. To further improve the algorithm's performance, metadata, such as past medical history, should be integrated with the clinical images.
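As a hedged illustration of the reader-test comparison described above, the sketch below applies a Wilcoxon signed-rank test to paired per-reader and per-algorithm sensitivities on the same batches. The batch structure and all numbers are synthetic stand-ins under the assumption of one 30-patient batch per reader; they are not study data.

```python
# Sketch of the paired comparison in the reader test: each of 44 readers is
# matched with the algorithm's sensitivity on the same batch of 30 patients.
# All values below are synthetic placeholders, not study data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
reader_sens = rng.normal(0.66, 0.10, size=44)   # one value per reader/batch
algo_sens = rng.normal(0.67, 0.09, size=44)     # algorithm on the same batches

stat, p = wilcoxon(reader_sens, algo_sens)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p:.3f}")
# A non-significant p (> 0.05) would be consistent with the reported
# comparable performance between the algorithm and the readers.
```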