Multitask Summary Scoring with Longformers

Authors :
Botarleanu, Robert-Mihai
Dascalu, Mihai
Allen, Laura K.
Crossley, Scott Andrew
McNamara, Danielle S.
Source :
Grantee Submission, 2022. Paper presented at the International Conference on Artificial Intelligence in Education (AIED), 2022.
Publication Year :
2022

Abstract

Automated scoring of student language is a complex task that requires systems to emulate multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves comparing two texts of differing lengths. In this study, we present our approach to automating summary scoring by evaluating a corpus of approximately 5,000 summaries based on 103 source texts, with each summary scored on a 4-point Likert scale for seven different evaluation criteria. We train and evaluate a series of Machine Learning models that combine independent textual complexity indices from the ReaderBench framework with Deep Learning models based on the Transformer architecture in a multitask setup to concurrently predict all criteria. Our models achieve significantly lower errors than previous work using a similar dataset, with MAEs ranging from 0.10 to 0.16 and corresponding R² values of up to 0.64. Our findings indicate that Longformer-based models are adequate for contextualizing longer text sequences and can effectively score summaries according to a variety of human-defined evaluation criteria using a single Neural Network. [This paper was published in: "AIED 2022, LNCS 13355," edited by M. M. Rodrigo et al., Springer Nature Switzerland, 2022, pp. 756-761.]
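For readers who want a concrete picture of the setup the abstract describes, the following is a minimal sketch of a multitask Longformer regression model, assuming the Hugging Face transformers library and the public allenai/longformer-base-4096 checkpoint. All class and variable names are illustrative, the ReaderBench complexity indices the paper combines with the Transformer features are omitted, and the authors' actual architecture and training details may differ.

# Hedged sketch: one shared Longformer encoder, one linear head emitting all
# seven rubric scores at once (names and hyperparameters are assumptions).
import torch
import torch.nn as nn
from transformers import LongformerModel, LongformerTokenizerFast

class MultitaskSummaryScorer(nn.Module):
    """Encode a (source, summary) pair and regress seven rubric scores jointly."""

    def __init__(self, num_criteria: int = 7,
                 checkpoint: str = "allenai/longformer-base-4096"):
        super().__init__()
        self.encoder = LongformerModel.from_pretrained(checkpoint)
        hidden = self.encoder.config.hidden_size
        # A single linear layer with one output per evaluation criterion.
        self.heads = nn.Linear(hidden, num_criteria)

    def forward(self, input_ids, attention_mask, global_attention_mask):
        out = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask,
                           global_attention_mask=global_attention_mask)
        pooled = out.last_hidden_state[:, 0]  # <s> token representation
        return self.heads(pooled)             # shape: (batch, num_criteria)

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
model = MultitaskSummaryScorer()

# Source text and student summary are packed into one long sequence so the
# sliding-window attention can relate them; up to 4,096 tokens fit.
enc = tokenizer("source text ...", "student summary ...",
                truncation=True, max_length=4096, return_tensors="pt")
global_mask = torch.zeros_like(enc["input_ids"])
global_mask[:, 0] = 1  # global attention on the <s> token

scores = model(enc["input_ids"], enc["attention_mask"], global_mask)
targets = torch.tensor([[3., 2., 4., 3., 2., 3., 4.]])  # dummy 4-point ratings
loss = nn.MSELoss()(scores, targets)

Training with a mean-squared-error loss over all seven outputs lets a single network learn every criterion concurrently, which is the multitask setup the abstract refers to.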

Details

Language :
English
Database :
ERIC
Journal :
Grantee Submission
Publication Type :
Report
Accession Number :
ED629735
Document Type :
Reports - Research; Speeches/Meeting Papers
Full Text :
https://doi.org/10.1007/978-3-031-11644-5_79