Measuring Human Readability of Machine Generated Text: Three Case Studies in Speech Recognition and Machine Translation
- Source :
- ICASSP (5)
- Publication Year :
- 2005
- Publisher :
- IEEE, 2005.
-
Abstract
- We present highlights from three experiments that test the readability of current state-of-the-art system output from: (1) an automated English speech-to-text (STT) system; (2) a text-based Arabic-to-English machine translation (MT) system; and (3) an audio-based Arabic-to-English MT process. We measure readability in terms of reaction time and passage comprehension in each case, applying standard psycholinguistic testing procedures and a modified version of the standard Defense Language Proficiency Test for Arabic, called the DLPT*. We learned that: (1) subjects are slowed down by about 25% when reading system STT output; (2) text-based MT systems enable an English speaker to pass Arabic Level 2 on the DLPT*; and (3) audio-based MT systems do not enable English speakers to pass Arabic Level 2. We intend for these generic measures of readability to predict performance on more application-specific tasks.
- Subjects :
- Machine translation
Arabic
Computer science
Speech recognition
Psycholinguistics
Readability
Reading (process)
Language proficiency
Artificial intelligence
Natural language
Natural language processing
Details
- Database :
- OpenAIRE
- Journal :
- Proceedings. (ICASSP '05). IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005.
- Accession number :
- edsair.doi...........29233c590c049c79852099d1aa1f456c
- Full Text :
- https://doi.org/10.1109/icassp.2005.1416477