
Annotation Errors and NER: A Study with OntoNotes 5.0

Authors :
Bernier-Colborne, Gabriel
Vajjala, Sowmya
Publication Year :
2024

Abstract

Named Entity Recognition (NER) is a well-studied problem in NLP. However, much less attention is paid to studying NER datasets than to developing new NER models. In this paper, we employed three simple techniques to detect annotation errors in the OntoNotes 5.0 corpus, the largest available English NER corpus. Our techniques led to corrections in ~10% of the sentences across the train/dev/test splits. In terms of entity mentions, we corrected the span and/or type of ~8% of mentions in the dataset, and added, deleted, split, or merged a few more. These are large numbers of changes, considering the size of OntoNotes. We used three NER libraries to train and evaluate models on the original and the re-annotated datasets and compare them, which showed an average improvement of 1.23% in overall F-scores, with large (>10%) improvements for some of the entity types. While our annotation error detection methods are not exhaustive and some manual annotation effort is involved, they are largely language agnostic and can be employed with other NER datasets and other sequence labelling tasks.

Comment: Unpublished report. Originally submitted to LREC 2022
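The abstract does not spell out the three detection techniques. As a purely illustrative sketch (not necessarily one of the methods used in the paper), one simple corpus-level check flags mention strings that receive conflicting entity types across sentences, which are then queued for manual review. The data layout below (token lists plus (start, end, type) spans) is an assumption for illustration only.

```python
from collections import defaultdict

def find_inconsistent_mentions(annotated_sentences):
    """Flag mention strings annotated with more than one entity type
    across the corpus -- candidates for manual review as possible
    annotation errors. `annotated_sentences` is assumed to be a list of
    (tokens, entities) pairs, where `entities` holds (start, end, type)
    spans with token offsets (end exclusive); this format is hypothetical.
    """
    types_by_mention = defaultdict(set)
    for tokens, entities in annotated_sentences:
        for start, end, ent_type in entities:
            mention = " ".join(tokens[start:end]).lower()
            types_by_mention[mention].add(ent_type)
    # Keep only mentions that appear with two or more distinct types.
    return {m: sorted(t) for m, t in types_by_mention.items() if len(t) > 1}

if __name__ == "__main__":
    corpus = [
        (["Chicago", "won", "the", "game"], [(0, 1, "ORG")]),
        (["She", "lives", "in", "Chicago"], [(3, 4, "GPE")]),
    ]
    print(find_inconsistent_mentions(corpus))
    # {'chicago': ['GPE', 'ORG']}
```

Note that a flagged mention is not automatically an error (Chicago can legitimately be a team or a city), which is why such heuristics still require the manual review step the abstract mentions.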

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.19172
Document Type :
Working Paper