
Deep Transformer Language Models for Arabic Text Summarization: A Comparison Study.

Authors :
Chouikhi, Hasna
Alsuhaibani, Mohammed
Source :
Applied Sciences (2076-3417); Dec2022, Vol. 12 Issue 23, p11944, 14p
Publication Year :
2022

Abstract

Large text documents are sometimes challenging to understand and time-consuming to extract vital information from. These issues are addressed by automatic text summarization techniques, which condense lengthy texts while preserving their key information. Thus, the development of automatic summarization systems capable of fulfilling the ever-increasing demands of textual data becomes of utmost importance. This is even more vital for complex natural languages. This study explores five State-Of-The-Art (SOTA) Arabic deep Transformer-based Language Models (TLMs) in the task of text summarization by adapting various text summarization datasets dedicated to Arabic. A comparison against deep learning and machine learning-based baseline models has also been conducted. Experimental results reveal the superiority of TLMs, specifically the PEGASUS family, over the baseline approaches, with an average F1-score of 90% on several benchmark datasets. [ABSTRACT FROM AUTHOR]
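The abstract reports summarization quality as an F1-score. The record does not specify the exact metric implementation, but F1 for summarization is typically a ROUGE-style overlap score; the sketch below shows a minimal unigram-overlap (ROUGE-1-style) F1 between a candidate and a reference summary, assuming simple whitespace tokenization (the paper's actual tokenization for Arabic is not given here).

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate and reference summary.

    Whitespace tokenization is an assumption; the paper's exact
    preprocessing for Arabic text is not specified in this record.
    """
    cand_counts = Counter(candidate.split())
    ref_counts = Counter(reference.split())
    # Clipped overlap: each unigram counts at most as often as it
    # appears in the reference (multiset intersection).
    overlap = sum((cand_counts & ref_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge1_f1("the cat sat", "the cat sat on the mat")` yields precision 1.0 and recall 0.5, so an F1 of about 0.667; averaging such scores over a test set gives the kind of aggregate figure the abstract reports.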

Details

Language :
English
ISSN :
20763417
Volume :
12
Issue :
23
Database :
Complementary Index
Journal :
Applied Sciences (2076-3417)
Publication Type :
Academic Journal
Accession number :
160713264
Full Text :
https://doi.org/10.3390/app122311944