Comparing artificial intelligence algorithms to 157 German dermatologists: the melanoma classification benchmark

Bibliographic Details
Main Authors: Brinker, Titus Josef (Author), Hekler, Achim (Author), Hauschild, Axel (Author), Berking, Carola (Author), Schilling, Bastian (Author), Enk, Alexander (Author), Haferkamp, Sebastian (Author), Karoglan, Ante (Author), Kalle, Christof von (Author), Weichenthal, Michael (Author), Sattler, Elke (Author), Schadendorf, Dirk (Author), Gaiser, Maria (Author), Klode, Joachim (Author), Utikal, Jochen (Author)
Format: Article (Journal)
Language: English
Published: 22 February 2019
In: European journal of cancer
Year: 2019, Volume: 111, Pages: 30-37
ISSN:1879-0852
DOI:10.1016/j.ejca.2018.12.016
Online Access: Publisher, free of charge: https://doi.org/10.1016/j.ejca.2018.12.016
Publisher, full text: http://www.sciencedirect.com/science/article/pii/S0959804918315624
Author Notes: Titus J. Brinker, Achim Hekler, Axel Hauschild, Carola Berking, Bastian Schilling, Alexander H. Enk, Sebastian Haferkamp, Ante Karoglan, Christof von Kalle, Michael Weichenthal, Elke Sattler, Dirk Schadendorf, Maria R. Gaiser, Joachim Klode, Jochen S. Utikal
Description
Summary: Background - Several recent publications have demonstrated that convolutional neural networks can classify images of melanoma on par with board-certified dermatologists. However, the lack of a public human benchmark restricts the comparability of these algorithms' performance and thereby technical progress in this field. - Methods - An electronic questionnaire was sent to dermatologists at 12 German university hospitals. Each questionnaire comprised 100 dermoscopic and 100 clinical images (each set containing 80 nevus images and 20 biopsy-verified melanoma images), all open source. The questionnaire recorded factors such as years of experience in dermatology, number of skin checks performed, age, sex and rank within the university hospital or status as a resident physician. For each image, the dermatologists were asked to provide a management decision (treat/biopsy the lesion or reassure the patient). The main outcome measures were sensitivity, specificity and the area under the receiver operating characteristic curve (ROC-AUC). - Results - In total, 157 dermatologists assessed all 100 dermoscopic images with an overall sensitivity of 74.1%, specificity of 60.0% and an ROC-AUC of 0.67 (range = 0.538-0.769); 145 dermatologists assessed all 100 clinical images with an overall sensitivity of 89.4%, specificity of 64.4% and an ROC-AUC of 0.769 (range = 0.613-0.9). Results between the test sets differed significantly (P < 0.05), confirming the need for a standardised benchmark. - Conclusions - We present the first public melanoma classification benchmark of both non-dermoscopic and dermoscopic images for comparing artificial intelligence algorithms with the diagnostic performance of 145 or 157 dermatologists. The Melanoma Classification Benchmark should be considered a reference standard for white-skinned Western populations in the field of binary algorithmic melanoma classification.
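The outcome measures named in the abstract (sensitivity, specificity and ROC-AUC) can be illustrated with a minimal sketch. The labels, scores and the 0.5 decision threshold below are illustrative assumptions, not the study's data; the AUC is computed via the Mann-Whitney rank formulation.

```python
# Sketch of the benchmark's outcome measures for a binary melanoma
# classifier (1 = melanoma, 0 = nevus). All data here is made up.

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(y_true, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    melanoma receives a higher score than a randomly chosen nevus
    (ties count as one half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative example with a hypothetical 0.5 management threshold
# (score >= 0.5 means "treat/biopsy the lesion").
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1, 0.05]
y_pred = [1 if s >= 0.5 else 0 for s in scores]

sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} "
      f"auc={roc_auc(y_true, scores):.3f}")
```

Reporting the full ROC-AUC rather than a single sensitivity/specificity pair is what makes the benchmark threshold-independent: each dermatologist's management decision corresponds to one operating point on that curve.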
Item Description: Viewed on 26.04.2019
Physical Description:Online Resource