What you see is what you hear: How visual prosody affects artificial language learning in adults and children
- Source :
- The Journal of the Acoustical Society of America. 133:3337-3337
- Publication Year :
- 2013
- Publisher :
- Acoustical Society of America (ASA), 2013.
Abstract
- Speech perception is a multimodal phenomenon, with what we see impacting what we hear. In this study, we examine how visual information impacts English listeners' segmentation of words from an artificial language containing no cues to word boundaries other than the transitional probabilities (TPs) between syllables. Participants (N=60) were assigned to one of three conditions: Still (still image), Trochaic (image loomed toward the listener at syllable onsets), or Iambic (image loomed toward the listener at syllable offsets). Participants also heard either an easy or difficult variant of the language. Importantly, both languages lacked auditory prosody. Overall performance in a two-alternative forced-choice (2AFC) test was better in the easy language (67%) than in the difficult language (57%). In addition, across languages, listeners performed best in the Trochaic Condition (67%) and worst in the Iambic Condition (56%). Performance in the Still Condition fell in between (61%). We know English listeners perceive strong syllables as word onsets. Thus, participants likely found the Trochaic Condition easiest because the moving image led them to perceive temporally co-occurring syllables as strong. We are currently testing 6-year-olds (N=25) with these materials. Thus far, children's performance collapsed across conditions is similar to that of adults (60%). However, visual information may impact children's performance less.
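- The segmentation cue the abstract describes can be made concrete with a short sketch. This is illustrative only and uses invented syllables, not the study's actual stimuli: the transitional probability TP(x → y) is the count of the pair xy divided by the count of x as a pair onset, so TPs within a word stay high while TPs spanning a word boundary drop.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Map each adjacent syllable pair (x, y) to TP(x -> y) = count(xy) / count(x as onset)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    onset_counts = Counter(syllables[:-1])  # every syllable except the last begins a pair
    return {(x, y): n / onset_counts[x] for (x, y), n in pair_counts.items()}

# Three hypothetical "words" concatenated with no pauses; the order is chosen
# so each word is followed by more than one different word, as in a random stream.
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
order = [0, 1, 2, 1, 0, 2, 0]
stream = [syl for i in order for syl in words[i]]

tps = transitional_probabilities(stream)
# Within-word TPs (e.g. tu -> pi) come out at 1.0, while boundary-spanning
# TPs (e.g. ro -> go) are lower -- the only segmentation cue the
# languages in the study provide.
```

In the study's difficult language variant, the contrast between within-word and boundary TPs would simply be less extreme than in this toy stream.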
Details
- ISSN :
- 0001-4966
- Volume :
- 133
- Database :
- OpenAIRE
- Journal :
- The Journal of the Acoustical Society of America
- Accession number :
- edsair.doi.dedup.....6e52bc104db0f937dbfff3b3ae03c264
- Full Text :
- https://doi.org/10.1121/1.4805618