BERT Models for Arabic Text Classification: A Systematic Review

Authors :
Ali Saleh Alammary
Source :
Applied Sciences, Vol 12, Iss 11, p 5720 (2022)
Publication Year :
2022
Publisher :
MDPI AG, 2022.

Abstract

Bidirectional Encoder Representations from Transformers (BERT) has gained increasing attention from researchers and practitioners as it has proven to be an invaluable technique in natural language processing. This is mainly due to its unique features, including its ability to predict words conditioned on both left and right context, and its ability to be pretrained on the enormous amounts of plain text freely available on the web. As BERT gained interest, more BERT models were introduced to support different languages, including Arabic. The current state of knowledge and practice in applying BERT models to Arabic text classification is limited. To begin remedying this gap, this review synthesizes the different Arabic BERT models that have been applied to text classification. It investigates the differences between them and compares their performance. It also examines how effective they are compared to the original English BERT models. It concludes by offering insight into aspects that need further improvement and into directions for future work.
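For readers unfamiliar with how the models surveyed here are applied in practice, the following is a minimal sketch (not taken from the review itself) of loading a pretrained Arabic BERT checkpoint for text classification with the Hugging Face Transformers library. The model name "aubmindlab/bert-base-arabertv02" and the two-label task are illustrative assumptions; the review compares several such Arabic BERT variants.

# Minimal sketch: Arabic BERT for sequence classification.
# Assumptions: the checkpoint name and the binary (num_labels=2) task
# are hypothetical examples, not details taken from the review.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "aubmindlab/bert-base-arabertv02"  # assumed Arabic BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Adds a fresh classification head on top of the pretrained encoder;
# the head remains untrained until the model is fine-tuned on labeled data.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

text = "هذا المنتج رائع"  # Arabic example sentence: "This product is great"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits
pred = torch.argmax(logits, dim=-1).item()
print(pred)  # predicted class index (meaningful only after fine-tuning)

In practice, as the review discusses, such a model is fine-tuned end to end on a labeled Arabic text classification dataset before its predictions are useful.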

Details

Language :
English
ISSN :
2076-3417
Volume :
12
Issue :
11
Database :
Directory of Open Access Journals
Journal :
Applied Sciences
Publication Type :
Academic Journal
Accession number :
edsdoj.69d6e10be28843f2b5484bf21660b6da
Document Type :
article
Full Text :
https://doi.org/10.3390/app12115720