Harnessing the potential of trace data and linguistic analysis to predict learner performance in a multi‐text writing task.
- Source :
- Journal of Computer Assisted Learning; Jun2023, Vol. 39 Issue 3, p703-718, 16p
- Publication Year :
- 2023
-
Abstract
- Background: Assignments that involve writing based on several texts are challenging to many learners. Formative feedback supporting learners in these tasks should be informed by the characteristics of evolving written product and by the characteristics of learning processes learners enacted while developing the product. However, formative feedback in writing tasks based on multiple texts has almost exclusively focused on essay product and rarely included SRL processes. Objectives: We explored the viability of using product and process features to develop machine learning classifiers that identify low‐ and high‐performing essays in a multi‐text writing task. Methods: We examined learning processes and essay submissions of 163 graduate students working on an authentic multi‐text writing assignment. We utilised learners' trace data to obtain process features and state‐of‐the‐art natural language processing methods to obtain product features for our classifiers. Results and Conclusions: Of four popular classifiers examined in this study, Random Forest achieved the best performance (accuracy = 0.80 and recall = 0.77). The analysis of important features identified in the Random Forest classification model revealed one product (coverage of reading topics) and three process (elaboration/organisation, re‐reading and planning) features as important predictors of writing quality. Major Takeaways: The classifier can be used as a part of a future automated writing evaluation system that will support at scale formative assessment in writing tasks based on multiple texts in different courses. Based on important predictors of essay performance, a guidance can be tailored to learners at the outset of a multi‐text writing task to help them do well in the task. 
Lay Description:
- What is already known about this topic?
  - Both product and process features should be used to inform formative feedback on writing.
  - Providing product‐ and process‐oriented feedback to learners is challenging.
  - Automatic writing evaluation systems have mainly relied upon product features.
  - Automated analysis of learners' trace data and their essay drafts is a promising avenue.
- What this paper adds?
  - An accurate machine learning classifier that identifies low‐ and high‐scoring essays.
  - The classifier utilised both product and process features.
  - We obtained process features from learners' trace data in a digital learning environment.
  - We computed product features using state‐of‐the‐art text analytical methods.
- Implications for practice and/or policy:
  - The classifier can be used as part of a future automated writing evaluation system.
  - We revealed learning processes and essay characteristics that influence performance.
  - Based on important predictors of performance, formative feedback can be given to learners. [ABSTRACT FROM AUTHOR]
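The setup the abstract describes, training a Random Forest classifier on combined product and process features and inspecting feature importances, can be sketched as follows. This is not the authors' code: the feature names mirror the four important predictors reported in the abstract, but the data here is synthetic and the split, hyperparameters, and label rule are illustrative assumptions only.

```python
# Illustrative sketch (synthetic data, not the study's dataset or code):
# a Random Forest classifying low- vs high-performing essays from one
# product feature (topic coverage) and three process features
# (elaboration/organisation, re-reading, planning).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(42)
n = 163  # sample size reported in the abstract

feature_names = [
    "topic_coverage",            # product feature
    "elaboration_organisation",  # process features (from trace data)
    "re_reading",
    "planning",
]
X = rng.random((n, 4))
# Hypothetical label rule: essays with broad topic coverage and more
# planning tend to be high-performing (label 1).
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.2, n) > 0.9).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# Accuracy and recall are the metrics the abstract reports (0.80 / 0.77
# there; values here depend on the synthetic data).
print("accuracy:", round(accuracy_score(y_test, pred), 2))
print("recall:", round(recall_score(y_test, pred), 2))
# Feature importances are how such a model surfaces which product and
# process features drive the low/high classification.
for name, imp in zip(feature_names, clf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

In practice the product features would come from NLP analysis of essay drafts and the process features from event-level trace data, but the classifier interface is the same.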
Details
- Language :
- English
- ISSN :
- 0266-4909
- Volume :
- 39
- Issue :
- 3
- Database :
- Complementary Index
- Journal :
- Journal of Computer Assisted Learning
- Publication Type :
- Academic Journal
- Accession number :
- 163886539
- Full Text :
- https://doi.org/10.1111/jcal.12769