
Automated Scoring of Nonnative Speech Using the SpeechRater℠ v. 5.0 Engine. Research Report. ETS RR-18-10

Authors:
Chen, Lei
Zechner, Klaus
Yoon, Su-Youn
Evanini, Keelan
Wang, Xinhao
Loukina, Anastassia
Tao, Jidong
Davis, Lawrence
Lee, Chong Min
Ma, Min
Mundkowsky, Robert
Lu, Chi
Leong, Chee Wee
Gyawali, Binod
Source:
ETS Research Report Series. Dec 2018.
Publication Year:
2018

Abstract

This research report provides an overview of the R&D efforts at Educational Testing Service related to its capability for automated scoring of nonnative spontaneous speech with the SpeechRater℠ automated scoring service since its initial version was deployed in 2006. While most aspects of this R&D work have been published in various venues in recent years, no comprehensive account of the current state of SpeechRater has been provided since the initial publications following its first operational use in 2006. After a brief review of recent related work by other institutions, we summarize the main features and feature classes that have been developed and introduced into SpeechRater in the past 10 years, including features measuring aspects of pronunciation, prosody, vocabulary, grammar, content, and discourse. Furthermore, new types of filtering models for flagging nonscorable spoken responses are described, as is our new hybrid way of building linear regression scoring models with improved feature selection. Finally, empirical results for SpeechRater 5.0 (operationally deployed in 2016) are provided.
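The abstract describes a two-stage pipeline: a filtering model first flags nonscorable responses, and a linear regression scoring model with feature selection then produces scores for the rest. The sketch below is only an illustration of that general structure; the feature names, the rule-based filter, the thresholds, and the use of scikit-learn are assumptions made for this example and are not drawn from the report or the SpeechRater implementation.

```python
# Minimal sketch of a filter-then-score pipeline like the one the abstract
# outlines. All feature names, thresholds, and library choices here are
# illustrative assumptions, not SpeechRater's actual code or models.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Hypothetical response-level features (e.g., pronunciation, prosody,
# vocabulary measures) and human holistic scores for training.
n_responses, n_features = 500, 20
X = rng.normal(size=(n_responses, n_features))
human_scores = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_responses)

# Stage 1: a simple rule standing in for the nonscorability filtering model,
# e.g., flag responses whose (hypothetical) audio-quality feature is too low.
audio_quality = rng.uniform(size=n_responses)
scorable = audio_quality > 0.05  # flagged responses would be routed elsewhere

# Stage 2: feature selection followed by linear regression on scorable responses.
scoring_model = Pipeline([
    ("select", SelectKBest(score_func=f_regression, k=10)),
    ("regress", LinearRegression()),
])
scoring_model.fit(X[scorable], human_scores[scorable])

predicted = scoring_model.predict(X[scorable])
print("Correlation with human scores:",
      round(np.corrcoef(predicted, human_scores[scorable])[0, 1], 3))
```

In the report itself, the filtering models are trained classifiers rather than a fixed rule, and the regression model is built with the hybrid feature-selection procedure the abstract mentions; this sketch mirrors only the overall two-stage shape of that design.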

Details

Language:
English
ISSN:
2330-8516
Database:
ERIC
Journal:
ETS Research Report Series
Publication Type:
Academic Journal
Accession Number:
EJ1202795
Document Type:
Journal Articles; Reports - Research