1. Automated Speaking Assessment of Conversation Tests with Novel Graph-based Modeling on Spoken Response Coherence
- Authors
Jiun-Ting Li, Bi-Cheng Yan, Tien-Hong Lo, Yi-Cheng Wang, Yung-Chang Hsu, and Berlin Chen
- Subjects
Computer Science - Computation and Language
- Abstract
Automated speaking assessment in conversation tests (ASAC) aims to evaluate the overall speaking proficiency of an L2 (second-language) speaker in a setting where an interlocutor interacts with one or more candidates. Although prior ASAC approaches have shown promising performance on their respective datasets, there is still a dearth of research specifically focused on incorporating the coherence of the logical flow within a conversation into the grading model. To address this critical challenge, we propose a hierarchical graph model that aptly incorporates both broad inter-response interactions (e.g., discourse relations) and nuanced semantic information (e.g., semantic words and speaker intents), which is subsequently fused with contextual information for the final prediction. Extensive experimental results on the NICT-JLE benchmark dataset suggest that our proposed modeling approach can yield considerable improvements in prediction accuracy with respect to various assessment metrics, as compared to some strong baselines. This also sheds light on the importance of investigating coherence-related facets of spoken responses in ASAC.
- Comment
Accepted by IEEE SLT 2024
- Published
2024
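
The abstract above describes a hierarchical graph model that combines broad inter-response interactions with finer-grained semantic information and fuses the pooled graph representation with contextual features for the final score prediction. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea, not the authors' implementation: the simple graph-convolution layer, the pooling scheme, and all module names and dimensions are assumptions made for clarity.

```python
# A minimal, illustrative sketch (NOT the paper's implementation): a two-level
# graph model over a conversation's responses, fused with a contextual
# embedding to predict a proficiency score. All names, dimensions, and the
# simple GCN layer are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: mean-aggregate neighbor features via a
    row-normalized adjacency matrix, then apply a linear projection."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (num_nodes, dim), adj: (num_nodes, num_nodes) with self-loops
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        agg = (adj / deg) @ x          # mean aggregation over neighbors
        return F.relu(self.proj(agg))


class HierarchicalGraphScorer(nn.Module):
    """Hypothetical two-level scorer:
    1) word-level nodes are pooled into per-response vectors,
    2) a response-level graph (e.g., discourse links between turns) is
       convolved to capture inter-response coherence,
    3) the pooled graph representation is fused with a conversation-level
       contextual vector to predict a single proficiency score."""
    def __init__(self, dim=128):
        super().__init__()
        self.word_gcn = SimpleGCNLayer(dim)
        self.resp_gcn = SimpleGCNLayer(dim)
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, 1))

    def forward(self, word_feats, word_adj, word_to_resp, resp_adj, ctx_vec):
        # word_feats: (num_words, dim); word_to_resp: (num_resps, num_words)
        # membership matrix; ctx_vec: (dim,) conversation-level context.
        words = self.word_gcn(word_feats, word_adj)
        counts = word_to_resp.sum(-1, keepdim=True).clamp(min=1.0)
        resp_feats = (word_to_resp @ words) / counts   # pool words per response
        resps = self.resp_gcn(resp_feats, resp_adj)
        graph_vec = resps.mean(dim=0)                  # pool the response graph
        return self.fuse(torch.cat([graph_vec, ctx_vec], dim=-1)).squeeze(-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    W, R, D = 20, 4, 128                               # words, responses, dim
    model = HierarchicalGraphScorer(dim=D)
    score = model(torch.randn(W, D), torch.eye(W),
                  (torch.rand(R, W) > 0.5).float(), torch.eye(R),
                  torch.randn(D))
    print("predicted proficiency score:", score.item())
```

In the paper's setting, the response-level adjacency would presumably encode discourse relations between turns and the contextual vector would come from an encoder over the full conversation; in this sketch both are random placeholders.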