
Attention-based Transformer for Assamese Abstractive Text Summarization.

Authors :
Goutom, Pritom Jyoti
Baruah, Nomi
Sonowal, Paramananda
Source :
Procedia Computer Science; 2024, Vol. 235, p1097-1104, 8p
Publication Year :
2024

Abstract

The difficulty of accurately summarizing Assamese text is a significant barrier in natural language processing (NLP). Manually summarizing lengthy Assamese texts is time-consuming and labor-intensive, so automatic text summarization has developed into a critical NLP research topic. In this study, we combine the Transformer architecture with self-attention to develop an abstractive text summarization model. The self-attention mechanism allows the Transformer-based model to resolve co-reference issues in Assamese text, improving the system's overall understanding, and the proposed approach greatly improves the efficiency of text summarization. We evaluated the model exhaustively on the Assamese dataset (AD-50), which contains human-produced summaries. Compared with current state-of-the-art baseline models, our model performed better: on the AD-50 dataset, for example, the proposed model reached a training loss of 0.0022 over 20 training epochs and a model accuracy of 47.15%. This research marks a substantial advancement in Assamese abstractive text summarization, with promising implications for practical NLP applications. [ABSTRACT FROM AUTHOR]
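
The core operation the abstract refers to is scaled dot-product self-attention, which lets every token in an Assamese sentence attend to every other token and is what helps a Transformer link co-referring mentions. The sketch below is an illustrative single-head example only, not the authors' implementation; the function name, dimensions, and toy input are hypothetical.

```python
# Minimal single-head scaled dot-product self-attention (illustrative sketch,
# not the paper's code). Assumes toy dimensions: 5 tokens, d_model = 16.
import numpy as np

def scaled_dot_product_self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_q/w_k/w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # project tokens into query/key/value spaces
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise token affinities, scaled by sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax over all tokens
    return weights @ v                               # each output token is a weighted mix of all tokens

# Toy usage: 5 (sub)word embeddings of dimension 16, one attention head.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))
w_q = rng.normal(size=(16, 16))
w_k = rng.normal(size=(16, 16))
w_v = rng.normal(size=(16, 16))
print(scaled_dot_product_self_attention(x, w_q, w_k, w_v).shape)  # (5, 16)
```

In a full encoder-decoder summarizer of the kind described above, stacks of such attention layers (plus cross-attention from the decoder to the encoder) are trained end to end to generate the abstractive summary token by token.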

Details

Language :
English
ISSN :
18770509
Volume :
235
Database :
Supplemental Index
Journal :
Procedia Computer Science
Publication Type :
Academic Journal
Accession number :
177603684
Full Text :
https://doi.org/10.1016/j.procs.2024.04.104