
Assessing Inter-Annotator Agreement for Medical Image Segmentation

Authors :
Feng Yang
Ghada Zamzmi
Sandeep Angara
Sivaramakrishnan Rajaraman
Andre Aquilina
Zhiyun Xue
Stefan Jaeger
Emmanouil Papagiannakis
Sameer K. Antani
Source :
IEEE Access, Vol 11, Pp 21300-21312 (2023)
Publication Year :
2023
Publisher :
IEEE, 2023.

Abstract

Artificial Intelligence (AI)-based medical computer vision algorithms depend on annotated and labeled data for training and evaluation. However, variability between expert annotators introduces noise into the training data that can adversely impact the performance of AI algorithms. This study aims to assess, illustrate, and interpret the inter-annotator agreement among multiple expert annotators when segmenting the same lesion(s)/abnormalities on medical images. We propose the use of three metrics for the qualitative and quantitative assessment of inter-annotator agreement: 1) a common agreement heatmap and a ranking agreement heatmap; 2) the extended Cohen’s kappa and Fleiss’ kappa coefficients for a quantitative evaluation and interpretation of inter-annotator reliability; and 3) the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm, as a parallel step, to generate ground truth for training AI models and to compute Intersection over Union (IoU), sensitivity, and specificity for assessing inter-annotator reliability and variability. Experiments are performed on two datasets, namely cervical colposcopy images from 30 patients and chest X-ray images from 336 tuberculosis (TB) patients, to demonstrate the consistency of inter-annotator reliability assessment and the importance of combining different metrics to avoid biased assessment.
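The metrics named in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes binary boolean segmentation masks, treats each pixel as a rated "item" for Fleiss' kappa, and computes IoU, sensitivity, and specificity between a pair of masks. The STAPLE algorithm itself (an expectation-maximization procedure) is not reproduced here; all function names are illustrative.

```python
import numpy as np

def iou(pred, ref):
    """Intersection over Union between two binary boolean masks."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union else 1.0

def sensitivity(pred, ref):
    """True-positive rate of `pred` against reference mask `ref`."""
    tp = np.logical_and(pred, ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    return tp / (tp + fn) if (tp + fn) else 1.0

def specificity(pred, ref):
    """True-negative rate of `pred` against reference mask `ref`."""
    tn = np.logical_and(~pred, ~ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    return tn / (tn + fp) if (tn + fp) else 1.0

def agreement_heatmap(masks):
    """Per-pixel fraction of annotators marking the pixel as foreground."""
    return np.mean([m.astype(float) for m in masks], axis=0)

def fleiss_kappa(masks):
    """Fleiss' kappa over binary masks: each pixel is an item rated
    by all annotators into two categories (background=0, foreground=1)."""
    stack = np.stack([m.astype(int).ravel() for m in masks])  # (n_raters, n_items)
    n, N = stack.shape
    # Per-item category counts: columns = [count of 0s, count of 1s]
    counts = np.stack([(stack == 0).sum(0), (stack == 1).sum(0)], axis=1)
    p_j = counts.sum(0) / (N * n)                      # category proportions
    P_i = (counts * (counts - 1)).sum(1) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - P_e) / (1 - P_e) if P_e < 1 else 1.0
```

For example, two annotators who agree on every foreground pixel except one yield an IoU below 1 while sensitivity of the larger mask against the smaller remains 1.0, which is why the abstract argues for combining several metrics rather than relying on any single one.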

Details

Language :
English
ISSN :
21693536
Volume :
11
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.776814b036a647198c131238f7593824
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2023.3249759