LLMs as Educational Analysts: Transforming Multimodal Data Traces into Actionable Reading Assessment Reports
- Authors
Davalos, Eduardo; Zhang, Yike; Srivastava, Namrata; Salas, Jorge Alberto; McFadden, Sara; Cho, Sun-Joo; Biswas, Gautam; and Goodwin, Amanda
- Subjects
Computer Science - Computers and Society, Computer Science - Artificial Intelligence, Computer Science - Human-Computer Interaction, I.2.1, I.2.7, K.3.1
- Abstract
Reading assessments are essential for enhancing students' comprehension, yet many EdTech applications focus mainly on outcome-based metrics, providing limited insights into student behavior and cognition. This study investigates the use of multimodal data sources -- including eye-tracking data, learning outcomes, assessment content, and teaching standards -- to derive meaningful reading insights. We employ unsupervised learning techniques to identify distinct reading behavior patterns, and then a large language model (LLM) synthesizes the derived information into actionable reports for educators, streamlining the interpretation process. LLM experts and human educators evaluate these reports for clarity, accuracy, relevance, and pedagogical usefulness. Our findings indicate that LLMs can effectively function as educational analysts, turning diverse data into teacher-friendly insights that are well-received by educators. While promising for automating insight generation, human oversight remains crucial to ensure reliability and fairness. This research advances human-centered AI in education, connecting data-driven analytics with practical classroom applications.
- Comment
15 pages, 5 figures, 3 tables
- Published
2025
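
The abstract outlines a two-stage pipeline: unsupervised clustering of multimodal reading-behavior features, followed by an LLM that turns the cluster summaries into teacher-facing reports. Below is a minimal, illustrative sketch of that idea, not the authors' released code: the feature names, cluster count, and prompt wording are assumptions for demonstration, and the actual LLM call is left out.

```python
# Illustrative sketch (not the paper's implementation): cluster hypothetical
# per-student eye-tracking features with k-means, then compose an LLM prompt
# asking for a teacher-friendly reading report.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical per-student features: mean fixation duration (ms),
# regressions per 100 words, and a comprehension score in [0, 1].
features = rng.normal(loc=[250.0, 8.0, 0.7],
                      scale=[60.0, 3.0, 0.15],
                      size=(30, 3))

# Standardize features and identify distinct reading-behavior patterns.
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Summarize each behavior cluster for inclusion in the LLM prompt.
cluster_summaries = []
for k in sorted(set(labels)):
    mean_fix, mean_reg, mean_score = features[labels == k].mean(axis=0)
    cluster_summaries.append(
        f"Cluster {k}: avg fixation {mean_fix:.0f} ms, "
        f"{mean_reg:.1f} regressions/100 words, comprehension {mean_score:.2f}"
    )

prompt = (
    "You are an educational analyst. Using the reading-behavior clusters "
    "below, write a short, teacher-friendly report with one actionable "
    "suggestion per group.\n" + "\n".join(cluster_summaries)
)
print(prompt)  # The LLM call itself (e.g., via a chat-completion API) is omitted here.
```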