
Low-Resource Named Entity Recognition via the Pre-Training Model.

Authors :
Chen, Siqi
Pei, Yijie
Ke, Zunwang
Silamu, Wushour
Awrejcewicz, Jan
Source :
Symmetry (20738994); May 2021, Vol. 13 Issue 5, p786, 1p
Publication Year :
2021

Abstract

Named entity recognition (NER) is an important task in natural language processing, which requires determining entity boundaries and classifying entities into pre-defined categories. For low-resource languages, most state-of-the-art systems require tens of thousands of annotated sentences to obtain high performance. However, minimal annotated data are available for Uyghur and Hungarian (UH languages) NER tasks. Each task also has its own specificities: differences in vocabulary and word order across languages make this a challenging problem. In this paper, we present an effective solution that provides a meaningful and easy-to-use feature extractor for named entity recognition tasks: fine-tuning the pre-trained language model. We propose a fine-tuning method for a low-resource language model, which constructs a fine-tuning dataset through data augmentation, adds the dataset of a high-resource language, and finally fine-tunes the cross-lingual pre-trained model on this combined dataset. In addition, we propose an attention-based fine-tuning strategy that uses symmetry to better select relevant semantic and syntactic information from pre-trained language models and applies these symmetry features to named entity recognition tasks. We evaluated our approach on Uyghur and Hungarian datasets, where it showed strong performance compared to several strong baselines. We close with an overview of the available resources for named entity recognition and some of the open research questions. [ABSTRACT FROM AUTHOR]
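The sketch below illustrates the general shape of the approach the abstract describes, not the authors' released code: a cross-lingual pre-trained encoder is fine-tuned for token classification, and a learnable attention weight over the encoder's layers lets the classifier emphasize whichever layers carry the most useful semantic and syntactic signal. The choice of `xlm-roberta-base` as the backbone, the `LayerAttentionNER` name, the tag count, and the scalar-mix formulation of the layer attention are all illustrative assumptions; the paper's fine-tuning set would additionally mix augmented low-resource data with high-resource data before training.

```python
# Minimal sketch of attention-weighted layer mixing for NER fine-tuning.
# Assumptions (not from the abstract): xlm-roberta-base backbone, 9 tags,
# and a scalar-mix-style attention over encoder layers.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class LayerAttentionNER(nn.Module):
    def __init__(self, model_name="xlm-roberta-base", num_tags=9):
        super().__init__()
        # output_hidden_states=True exposes every encoder layer, not just the last
        self.encoder = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        num_layers = self.encoder.config.num_hidden_layers + 1  # +1 for embeddings
        # one learnable attention score per layer
        self.layer_scores = nn.Parameter(torch.zeros(num_layers))
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_tags)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # hidden_states: tuple of num_layers tensors, each (batch, seq, hidden)
        stacked = torch.stack(outputs.hidden_states, dim=0)
        weights = torch.softmax(self.layer_scores, dim=0).view(-1, 1, 1, 1)
        mixed = (weights * stacked).sum(dim=0)  # attention-weighted layer mix
        return self.classifier(mixed)           # per-token tag logits

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = LayerAttentionNER()
batch = tokenizer(["Budapest is the capital of Hungary ."], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])  # (1, seq_len, num_tags)
```

During fine-tuning, the softmax over `layer_scores` is learned jointly with the classifier, so lower (more syntactic) or higher (more semantic) layers can be up-weighted as the task demands; with frozen or lightly tuned encoder weights this also serves as an easy-to-use feature extractor.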

Details

Language :
English
ISSN :
20738994
Volume :
13
Issue :
5
Database :
Complementary Index
Journal :
Symmetry (20738994)
Publication Type :
Academic Journal
Accession number :
150499296
Full Text :
https://doi.org/10.3390/sym13050786