
Who Needs External References?—Text Summarization Evaluation Using Original Documents.

Authors: Foysal, Abdullah Al; Böck, Ronald
Source: AI. Dec 2023, Vol. 4, Issue 4, p970-995. 26p.
Publication Year: 2023

Abstract

Nowadays, individuals can be overwhelmed by the huge number of documents present in daily life, and capturing the necessary details is often a challenge. It is therefore important to summarize documents in order to obtain the main information quickly. Automatic approaches to this task exist, but their quality is often not properly assessed: state-of-the-art metrics rely on human-generated summaries as references, and without a reference, evaluation becomes challenging. We therefore investigated an alternative approach for evaluating machine-generated summaries in the absence of human-generated reference summaries. We focus on the original text or document to derive a metric that allows a direct evaluation of automatically generated summaries, which is particularly helpful when reference summaries are difficult or costly to obtain. In this paper, we present a novel metric called Summary Score without Reference (SUSWIR), based on four factors already known in the text summarization community: Semantic Similarity, Redundancy, Relevance, and Bias Avoidance Analysis. It overcomes drawbacks of common metrics and aims to close a gap in the current evaluation landscape for machine-generated text summaries. The metric is introduced theoretically and tested on five datasets from different domains; the experiments yielded noteworthy results with SUSWIR. [ABSTRACT FROM AUTHOR]
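The abstract names the four factors but not their formulas, which the paper defines. The Python sketch below is only an illustration of how a reference-free score could combine such factors: TF-IDF cosine similarity stands in for the semantic components, the equal weighting and the overlap threshold are arbitrary choices, and all function names are hypothetical rather than taken from the paper.

```python
# Illustrative sketch, not the published SUSWIR: a reference-free summary
# score built from the four factors named in the abstract. All formulas
# and weights below are assumptions for demonstration purposes.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def _cosine(a: str, b: str) -> float:
    """Cosine similarity of two texts in a shared TF-IDF space."""
    tfidf = TfidfVectorizer().fit_transform([a, b])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])


def reference_free_score(document: str, summary: str) -> float:
    """Combine similarity, redundancy, relevance, and bias avoidance."""
    sentences = [s.strip() for s in summary.split(".") if s.strip()]

    # Semantic similarity: the summary should stay close to the source.
    similarity = _cosine(document, summary)

    # Redundancy: average pairwise similarity between summary sentences;
    # high values mean the summary repeats itself.
    redundancy = 0.0
    if len(sentences) > 1:
        pairs = [(i, j) for i in range(len(sentences))
                 for j in range(i + 1, len(sentences))]
        redundancy = sum(_cosine(sentences[i], sentences[j])
                         for i, j in pairs) / len(pairs)

    # Relevance: every summary sentence should be supported by the source.
    relevance = sum(_cosine(document, s) for s in sentences) / max(len(sentences), 1)

    # Bias avoidance, approximated here as a penalty on verbatim copying:
    # heavy lexical overlap with the source lowers this component.
    doc_tokens = set(document.lower().split())
    sum_tokens = set(summary.lower().split())
    overlap = len(doc_tokens & sum_tokens) / max(len(sum_tokens), 1)
    bias_avoidance = 1.0 - max(0.0, overlap - 0.8) / 0.2

    # Equal weights are an arbitrary choice for this sketch.
    return 0.25 * (similarity + (1.0 - redundancy) + relevance + bias_avoidance)


if __name__ == "__main__":
    doc = ("Automatic summarization condenses a document into its key points. "
           "Evaluating such summaries usually requires human-written references.")
    summ = "Summarization condenses documents; evaluating them usually needs references."
    print(f"reference-free score: {reference_free_score(doc, summ):.3f}")
```

Note that only the original document and the candidate summary enter the computation, which is the defining property of a reference-free metric; the published SUSWIR factor definitions should be taken from the paper itself.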

Details

Language: English
ISSN: 2673-2688
Volume: 4
Issue: 4
Database: Academic Search Index
Journal: AI
Publication Type: Academic Journal
Accession Number: 174400844
Full Text: https://doi.org/10.3390/ai4040049