86,522 results for "AUDITORY perception"
Search Results
2. Harmonies on the String: Exploring the Synergy of Music and STEM
- Author
- Christopher Dignam
- Abstract
The process of perceiving music involves the transference of environmental physics in the air to anatomical and physiological interpretations of resonance in the body and psychological perceptions in the brain. These processes and musical interpretations are the basis of physical and cognitive science, neurophysiology, psychoacoustics, and cultural psychology. The intersection of interdisciplinary and transdisciplinary curricular offerings forms the basis of STEM (Science, Technology, Engineering, and Mathematics). In this study, the researcher explores the synergy of music in STEM for formulating and affording authentic STEAM programmatic offerings for learners. The blending of the art of music within STEM provides opportunities for teachers and students to address and connect content through creative, innovative approaches for deeper, meaningful learning. Threading the art of music within STEM affords discovery-learning opportunities that facilitate both critical thinking and social-emotional learning skills development in students. This study provides perspective on developing curricular offerings for students that blend physical and cognitive science with the art of sound. The researcher provides authentic curricular exemplars regarding the synergy of music in STEM and concludes by offering recommendations for designing and implementing expressive curricular programmatic offerings for students from early childhood settings through higher education.
- Published
- 2024
3. Listener Perception of Appropriateness of L1 and L2 Refusals in English
- Author
- Maria Kostromitina and Yongzhi Miao
- Abstract
English has become an international language (EIL) as speakers around the world use it as a universal means of communication. Accordingly, scholars have investigated different aspects of EIL affecting communicative success. Speech scholars have been interested in speech constructs like accentedness, comprehensibility, and acceptability (e.g., Kang et al., 2023). On the other hand, pragmatic researchers have examined lexico-grammatical features of EIL that contribute to first language (L1) English listeners' perceptions of appropriateness in speech acts (e.g., Taguchi, 2006). However, little is known about: a) how appropriateness is perceived by users of EIL of diverse L1s and b) how those appropriateness perceptions are related to lexico-grammatical and phonological features. Therefore, the present study had 184 listeners (L1 = English, Spanish, Chinese, and Indian languages) evaluate 40 speech acts performed by 20 speakers (L1 English and Chinese, 50% each) in terms of appropriateness on a 9-point numerical scale. Results from linear mixed-effects regressions suggested that: a) listener L1 did not contribute to listener ratings and b) speakers' rhythm and lexico-grammatical features (i.e., use of different pragmatic strategies) significantly contributed to listener appropriateness ratings. The findings provide empirical evidence to support the phonology-pragmatics link in appropriateness perceptions and offer implications regarding the operationalization of English interactional appropriateness.
- Published
- 2024
4. Investigating EFL Students' Perspectives of the Influence of Podcasts on Enhancing Listening Proficiency
- Author
- Fatimah Ghazi Mohamm and Hanadi Abdulrahman Khadawardi
- Abstract
Listening is widely regarded as the predominant language proficiency utilized in virtually all forms of communication. However, its intricacies often engender feelings of complexity and, at times, provoke anxiety and frustration among both foreign and second-language learners. The enhancement of successful communication fundamentally hinges upon the precise comprehension of spoken messages. This quantitative study delves into the perceptions of English as a Foreign Language (EFL) students concerning the utilization of podcasts as a tool to cultivate and bolster their listening proficiency. The study cohort comprised female university students enrolled in a preparatory year program. The examination of attitudes toward podcasts was conducted via a survey questionnaire. The findings unveiled that most participants derived enjoyment from utilizing podcasts, which in turn catalyzed their enthusiasm for English language acquisition. Additionally, they conceded that podcasts held promise in augmenting their linguistic abilities, with a primary focus on listening comprehension. These outcomes posit that podcasting serves as a medium with significant implications for students' learning trajectories, particularly regarding the acquisition of listening proficiencies.
- Published
- 2024
5. Intact Utilization of Contextual Information in Speech Categorization in Autism
- Author
- Yafit Gabay, Eva Reinisch, Dana Even, Nahal Binur, and Bat-Sheva Hadad
- Abstract
Current theories of Autism Spectrum Disorder (ASD) suggest atypical use of context in ASD, but little is known about how these atypicalities influence speech perception. We examined the influence of contextual information (lexical, spectral, and temporal) on phoneme categorization in people with ASD and in typically developed (TD) people. Across three experiments, we found that people with ASD used all types of contextual information for disambiguating speech sounds to the same extent as TD people; yet they exhibited a shallower identification curve when phoneme categorization required temporal processing. Overall, the results suggest that the observed atypicalities in speech perception in ASD, including the reduced sensitivity observed here, cannot be attributed merely to a limited ability to utilize context during speech perception.
- Published
- 2024
- Full Text
6. The Effect of Attention on Auditory Processing in Adults on the Autism Spectrum
- Author
- Jewel E. Crasta and Erica C. Jacoby
- Abstract
This study examined the effect of attention on auditory processing in autistic individuals. Electroencephalography data were recorded during two attention conditions (passive and active) from 24 autistic adults and 24 neurotypical controls, ages 17-30 years. The passive condition involved only listening to the clicks and the active condition involved a button press following single clicks in a modified paired-click paradigm. Participants completed the Adolescent/Adult Sensory Profile and the Social Responsiveness Scale 2. The autistic group showed delayed N1 latencies and reduced evoked and phase-locked gamma power compared to neurotypical peers across both clicks and conditions. Longer N1 latencies and reduced gamma synchronization predicted greater social and sensory symptoms. Directing attention to auditory stimuli may be associated with more typical neural auditory processing in autism.
- Published
- 2024
- Full Text
7. The Relationship between Autism and Pitch Perception Is Modulated by Cognitive Abilities
- Author
- Jia Hoong Ong, Chen Zhao, Alex Bacon, Florence Yik Nam Leung, Anamarija Veic, Li Wang, Cunmei Jiang, and Fang Liu
- Abstract
Previous studies reported mixed findings on autistic individuals' pitch perception relative to neurotypical (NT) individuals. We investigated whether this may be partly due to individual differences in cognitive abilities by comparing their performance on various pitch perception tasks on a large sample (n = 164) of autistic and NT children and adults. Our findings revealed that: (i) autistic individuals either showed similar or worse performance than NT individuals on the pitch tasks; (ii) cognitive abilities were associated with some pitch task performance; and (iii) cognitive abilities modulated the relationship between autism diagnosis and pitch perception on some tasks. Our findings highlight the importance of taking an individual differences approach to understand the strengths and weaknesses of pitch processing in autism.
- Published
- 2024
- Full Text
8. Do Early Musical Impairments Predict Later Reading Difficulties? A Longitudinal Study of Pre-Readers with and without Familial Risk for Dyslexia
- Author
- Manon Couvignou, Hugo Peyre, Franck Ramus, and Régine Kolinsky
- Abstract
The present longitudinal study investigated the hypothesis that early musical skills (as measured by melodic and rhythmic perception and memory) predict later literacy development via a mediating effect of phonology. We examined 130 French-speaking children, 31 of whom had a familial risk for developmental dyslexia (DD). Their abilities in the three domains were assessed longitudinally with a comprehensive battery of behavioral tests in kindergarten, first grade, and second grade. Using a structural equation modeling approach, we examined potential longitudinal effects from music to literacy via phonology. We then investigated how familial risk for DD may influence these relationships by testing whether atypical music processing is a risk factor for DD. Results showed that children with a familial risk for DD consistently underperformed children without familial risk in music, phonology, and literacy. A small effect of musical ability on literacy via phonology was observed, but may have been induced by differences in stability across domains over time. Furthermore, early musical skills did not add significant predictive power to later literacy difficulties beyond phonological skills and family risk status. These findings are consistent with the idea that certain key auditory skills are shared between music and speech processing, and between DD and congenital amusia. However, they do not support the notion that music perception and memory skills can serve as a reliable early marker of DD, nor as a valuable target for reading remediation.
- Published
- 2024
- Full Text
9. Comparison of Speech and Music Input in North American Infants' Home Environment over the First 2 Years of Life
- Author
- Lindsay Hippe, Victoria Hennessy, Naja Ferjan Ramirez, and T. Christina Zhao
- Abstract
Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brain processes sounds in their environment. Across cultures, speech and music are two dominant auditory signals in infants' daily lives. Decades of research have repeatedly shown that both quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment. This study is the first to compare music input to speech input across infancy by analyzing a longitudinal dataset of daylong audio recordings collected in English-learning infants' home environments, at 6, 10, 14, 18, and 24 months of age. Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short snippets (10 s) randomly sampled from the recordings using Zooniverse, an online citizen-science platform. Results show that infants overall receive significantly more speech input than music input and the gap widens as the infants get older. At every age point, infants were exposed to more music from an electronic device than an in-person source; this pattern was reversed for speech. The percentage of input intended for infants remained the same over time for music while that percentage significantly increased for speech. We propose possible explanations for the limited music input compared to speech input observed in the present (North American) dataset and discuss future directions. We also discuss the opportunities and caveats in using a crowdsourcing approach to analyze large audio datasets. A video abstract of this article can be viewed at https://youtu.be/lFj_sEaBMN4
- Published
- 2024
- Full Text
10. The Sound of Silence: Children's Own Perspectives on Their Hearing and Listening in Classrooms with Different Acoustic Conditions
- Author
- Giulia Vettori, Laura Di Leonardo, Simone Secchi, and Lucia Bigozzi
- Abstract
In this study, we investigated primary school children's perspectives on their hearing and listening in classrooms with different acoustic quality levels. The sample included 213 children. The children completed a self-report questionnaire rating how well they could hear and listen in various situations in classrooms with two different acoustic conditions: "Poor acoustic quality" (long reverberation time [Long RT]) versus "Adequate acoustic quality" (short reverberation time [Short RT]) equipped with a sound-absorbing system. The results showed that auditory perception in the two conditions depends on the child's age, with only fourth- and fifth-grade children reporting benefits from classroom acoustic correction. Our study provides preliminary results on children's perspectives regarding their hearing and listening experiences during school learning, drawing out the implications for the design and implementation of school metacognitive interventions aimed at improving children's and teachers' awareness of motivational-affective, regulative, and environmental aspects favoring listening at school.
- Published
- 2024
- Full Text
11. Recognitions of Image and Speech to Improve Learning Diagnosis on STEM Collaborative Activity for Precision Education
- Author
- Chia-Ju Lin, Wei-Sheng Wang, Hsin-Yu Lee, Yueh-Min Huang, and Ting-Ting Wu
- Abstract
The rise of precision education has encouraged teachers to use intelligent diagnostic systems to understand students' learning processes and provide immediate guidance to prevent students from giving up when facing learning difficulties. However, current research on precision education rarely employs multimodal learning analytics approaches to understand students' learning behaviors. Therefore, this study aims to investigate the impact of teacher interventions based on different modalities of learning analytics diagnostic systems on students' learning behaviors, learning performance, and motivation in STEM collaborative learning activities. We conducted a quasi-experiment with three groups: a control group without any learning analytics system assistance, experimental group 1 with a unimodal learning analytics approach based on image data, and experimental group 2 with a multimodal learning analytics approach based on both image and voice data. We collected students' image or voice data according to the experimental design and employed artificial intelligence techniques for facial expression recognition, eye gaze tracking, and speech recognition to identify students' learning behaviors. The results of this research indicate that teacher interventions, augmented by learning analytics systems, have a significant positive impact on student learning outcomes and motivation. In experimental group 2, the acquisition of multimodal data facilitated a more precise identification and addressing of student learning challenges. Relative to the control group, students in the experimental groups exhibited heightened self-efficacy and were more motivated learners. Moreover, students in experimental group 2 demonstrated a deeper level of engagement in collaborative processes and in behaviors associated with constructing knowledge.
- Published
- 2024
- Full Text
12. Brief Report: Characterization of Sensory Over-Responsivity in a Broad Neurodevelopmental Concern Cohort Using the Sensory Processing Three Dimensions (SP3D) Assessment
- Author
- Maia C. Lazerwitz, Mikaela A. Rowe, Kaitlyn J. Trimarchi, Rafael D. Garcia, Robyn Chu, Mary C. Steele, Shalin Parekh, Jamie Wren-Jarvis, Ioanna Bourla, Ian Mark, Elysa J. Marco, and Pratik Mukherjee
- Abstract
Sensory Over-Responsivity (SOR) is an increasingly recognized challenge among children with neurodevelopmental concerns (NDC). To investigate, we characterized the incidence of auditory and tactile over-responsivity (AOR, TOR) among 82 children with NDC. We found that 70% of caregivers reported concern for their child's sensory reactions. Direct assessment further revealed that 54% of the NDC population exhibited AOR, TOR, or both -- which persisted regardless of autism spectrum disorder (ASD) diagnosis. These findings support the high prevalence of SOR as well as its lack of specificity to ASD. Additionally, AOR was over twice as prevalent as TOR. These conclusions present several avenues for further exploration, including deeper analysis of the neural mechanisms and genetic contributors to sensory processing challenges.
- Published
- 2024
- Full Text
13. Event Boundary Perception in Audio Described Films by People without Sight
- Author
- Roger Johansson, Tina Rastegar, Viveka Lyberg-Åhlander, and Jana Holsanova
- Abstract
Audio description (AD) plays a crucial role in making audiovisual media accessible to people with a visual impairment, enhancing their experience and understanding. This study employs an event segmentation task to examine how people without sight perceive and segment narrative events in films with AD, compared to sighted viewers without AD. Two AD versions were utilized, differing in the explicitness of conveyed event boundaries. Results reveal that the participants without sight generally perceived event boundaries similarly to their sighted peers, affirming AD's effectiveness in conveying event structures. However, when key event boundaries were more implicitly expressed, event boundary recognition diminished. Collectively, these findings offer valuable insights into event segmentation processes across sensory modalities. Additionally, they underscore the significance of how AD presents event boundaries, influencing the perception and interpretation of audiovisual media for people with a visual impairment and providing applied insights into event segmentation, multimodal processing, and audiovisual accessibility.
- Published
- 2024
- Full Text
14. Auditory Challenges and Listening Effort in School-Age Children with Autism: Insights from Pupillary Dynamics during Speech-in-Noise Perception
- Author
- Suyun Xu, Hua Zhang, Juan Fan, Xiaoming Jiang, Minyue Zhang, Jingjing Guan, Hongwei Ding, and Yang Zhang
- Abstract
Purpose: This study aimed to investigate challenges in speech-in-noise (SiN) processing faced by school-age children with autism spectrum conditions (ASCs) and their impact on listening effort. Method: Participants, including 23 Mandarin-speaking children with ASCs and 19 age-matched neurotypical (NT) peers, underwent sentence recognition tests in both quiet and noisy conditions, with a speech-shaped steady-state noise masker presented at 0-dB signal-to-noise ratio in the noisy condition. Recognition accuracy rates and task-evoked pupil responses were compared to assess behavioral performance and listening effort during auditory tasks. Results: No main effect of group was found on accuracy rates. Instead, significant effects emerged for autistic trait scores, listening conditions, and their interaction, indicating that higher trait scores were associated with poorer performance in noise. Pupillometric data revealed significantly larger and earlier peak dilations, along with more varied pupillary dynamics in the ASC group relative to the NT group, especially under noisy conditions. Importantly, the ASC group's peak dilation in quiet mirrored that of the NT group in noise. However, the ASC group consistently exhibited smaller mean dilations than the NT group. Conclusions: Pupillary responses suggest a different resource allocation pattern in ASCs: an initial sharper and larger dilation may signal an intense, narrowed resource allocation, likely linked to heightened arousal, engagement, and cognitive load, whereas a subsequent faster tail-off may indicate a greater decrease in resource availability and engagement, or a quicker release of arousal and cognitive load. The presence of noise further accentuates this pattern. This highlights the unique SiN processing challenges children with ASCs may face, underscoring the importance of a nuanced, individual-centric approach for interventions and support.
- Published
- 2024
- Full Text
15. Phonolexical Processing of Mandarin Segments and Tones by English Speakers at Different Mandarin Proficiency Levels
- Author
- Yen-Chen Hao
- Abstract
The current study examined the phonolexical processing of Mandarin segments and tones by English speakers at different Mandarin proficiency levels. Eleven English speakers naive to Mandarin, 15 intermediate and 9 advanced second language (L2) learners participated in a word-learning experiment. After learning the sound and meaning of 16 Mandarin disyllabic words, they judged the matching between sound and meaning pairs, with half of the pairs being complete matches while the other half contained segmental or tonal mismatches. The results showed that all three groups were more sensitive to segmental than tonal mismatches. The two learner groups outperformed the Naive group on segmental mismatches but not on tonal mismatches. However, their reaction times revealed that the learners but not the Naive group attended to tonal variations. The current findings suggest that increasing L2 experience has limited benefit on learners' phonolexical processing of L2 tones, probably due to their non-tonal native language background. Experience in a tonal L2 may enhance learners' attention to the tonal dimension but may not necessarily improve their accuracy.
- Published
- 2024
- Full Text
16. The Not-so-Slight Perceptual Consequences of Slight Hearing Loss in School-Age Children: A Scoping Review
- Author
- Chhayakanta Patro and Srikanta Kumar Mishra
- Abstract
Purpose: This study aimed to conduct a scoping review of research exploring the effects of slight hearing loss on auditory and speech perception in children. Method: A comprehensive search conducted in August 2023 identified a total of 402 potential articles sourced from eight prominent bibliographic databases. These articles were subjected to rigorous evaluation for inclusion criteria, specifically focusing on their reporting of speech or auditory perception using psychoacoustic tasks. The selected studies exclusively examined school-age children, encompassing those between 5 and 18 years of age. Following rigorous evaluation, 10 articles meeting these criteria were selected for inclusion in the review. Results: The analysis of included articles consistently shows that even slight hearing loss in school-age children significantly affects their speech and auditory perception. Notably, most of the included articles highlighted a common trend, demonstrating that perceptual deficits arising from slight hearing loss in children are particularly observable under challenging experimental conditions and/or in cognitively demanding listening tasks. Recent evidence further underscores that the negative impacts of slight hearing loss in school-age children cannot be predicted by pure-tone thresholds alone. However, there is limited evidence concerning the effect of slight hearing loss on the segregation of competing speech, which may be a better representation of listening in the classroom. Conclusion: This scoping review discusses the perceptual consequences of slight hearing loss in school-age children and provides insights into an array of methodological issues associated with studying perceptual skills in school-age children with slight hearing losses, offering guidance for future research endeavors.
- Published
- 2024
- Full Text
17. Acoustic and Semantic Processing of Auditory Scenes in Children with Autism Spectrum Disorders
- Author
- Breanne D. Yerkes, Christina M. Vanden Bosch der Nederlanden, Julie F. Beasley, Erin E. Hannon, and Joel S. Snyder
- Abstract
Purpose: Processing real-world sounds requires acoustic and higher-order semantic information. We tested the theory that individuals with autism spectrum disorder (ASD) show enhanced processing of acoustic features and impaired processing of semantic information. Methods: We used a change deafness task that required detection of speech and non-speech auditory objects being replaced and a speech-in-noise task using spoken sentences that must be comprehended in the presence of background speech to examine the extent to which 7- to 15-year-old children with ASD (n = 27) rely on acoustic and semantic information, compared to age-matched (n = 27) and IQ-matched (n = 27) groups of typically developing (TD) children. Within a larger group of 7- to 15-year-old TD children (n = 105), we correlated IQ, ASD symptoms, and the use of acoustic and semantic information. Results: Children with ASD performed worse overall at the change deafness task relative to the age-matched TD controls, but they did not differ from IQ-matched controls. All groups utilized acoustic and semantic information similarly and displayed an attentional bias towards changes that involved the human voice. Similarly, for the speech-in-noise task, age-matched--but not IQ-matched--TD controls performed better overall than the ASD group. However, all groups used semantic context to a similar degree. Among TD children, neither IQ nor the presence of ASD symptoms predicted the use of acoustic or semantic information. Conclusion: Children with and without ASD used acoustic and semantic information similarly during auditory change deafness and speech-in-noise tasks.
- Published
- 2024
- Full Text
18. Using Chatbots to Support EFL Listening Decoding Skills in a Fully Online Environment
- Author
- Weijiao Huang, Chengyuan Jia, Khe Foon Hew, and Jia Guo
- Abstract
Aural decoding skill is an important contributor to successful EFL listening comprehension. This paper first described a preliminary study involving a 12-week undergraduate flipped decoding course, based on the flipped SEF-ARCS decoding model. Although the decoding model (N = 44) was significantly more effective in supporting students' decoding performance than a conventional decoding course (N = 36), two main challenges were reported: the teacher's excessive workload and the high requirement for the individual teacher's decoding skills. To address these challenges, we developed a chatbot based on self-determination theory and social presence theory to serve as a 24/7 conversational agent, and adapted the flipped decoding course to a fully online chatbot-supported learning course to reduce the dependence on the teacher. Although results revealed that the chatbot-supported fully online group (N = 46) and the flipped group (N = 43) performed equally well on the decoding test, the chatbot-supported fully online approach was more effective in supporting students' behavioral and emotional engagement than the flipped learning approach. Students' perceptions of the chatbot-supported decoding activities were also explored. This study provides a useful pedagogical model involving the innovative use of a chatbot to develop undergraduate EFL aural decoding skills in a fully online environment.
- Published
- 2024
19. Effect of Age and Unaided Acoustic Hearing on Pediatric Cochlear Implant Users' Ability to Distinguish Yes/No Statements and Questions
- Author
- Emily Buss, Margaret E. Richter, Victoria N. Sweeney, Amanda G. Davis, Margaret T. Dillon, and Lisa R. Park
- Abstract
Purpose: The purpose of this study was to evaluate the ability to discriminate yes/no questions from statements in three groups of children--bilateral cochlear implant (CI) users, nontraditional CI users with aidable hearing preoperatively in the ear to be implanted, and controls with normal hearing. Half of the nontraditional CI users had sufficient postoperative acoustic hearing in the implanted ear to use electric-acoustic stimulation, and half used a CI alone. Method: Participants heard recorded sentences that were produced either as yes/no questions or as statements by three male and three female talkers. Three raters scored each participant response as either a question or a statement. Bilateral CI users (n = 40, 4-12 years old) and normal-hearing controls (n = 10, 4-12 years old) were tested binaurally in the free field. Nontraditional CI recipients (n = 22, 6-17 years old) were tested with direct audio input to the study ear. Results: For the bilateral CI users, performance was predicted by age but not by 125-Hz acoustic thresholds; just under half (n = 17) of the participants in this group had measurable 125-Hz thresholds in their better ear. For nontraditional CI recipients, better performance was predicted by lower 125-Hz acoustic thresholds in the test ear, and there was no association with participant age. Performance approached that of the normal-hearing controls for some participants in each group. Conclusions: Results suggest that 125-Hz acoustic hearing supports discrimination of yes/no questions and statements in pediatric CI users. Bilateral CI users with little or no acoustic hearing at 125 Hz develop the ability to perform this task, but that ability emerges later than for children with better acoustic hearing. These results underscore the importance of preserving acoustic hearing for pediatric CI users when possible.
- Published
- 2024
- Full Text
20. How Hearing Loss and Cochlear Implantation Affect Verbal Working Memory: Evidence from Adolescents
- Author
- Susan Nittrouer
- Abstract
Purpose: Verbal working memory is poorer for children with hearing loss than for peers with normal hearing (NH), even with cochlear implantation and early intervention. Poor verbal working memory can affect academic performance, especially in higher grades, making this deficit a significant problem. This study examined the stability of verbal working memory across middle childhood, tested working memory in adolescents with NH or cochlear implants (CIs), explored whether signal enhancement can improve verbal working memory, and tested two hypotheses proposed to explain the poor verbal working memory of children with hearing loss: (a) Diminished auditory experience directly affects executive functions, including working memory; (b) degraded auditory inputs inhibit children's abilities to recover the phonological structure needed for encoding verbal material into storage. Design: Fourteen-year-olds served as subjects: 55 with NH; 52 with CIs. Immediate serial recall tasks were used to assess working memory. Stimuli consisted of nonverbal, spatial stimuli and four kinds of verbal, acoustic stimuli: nonrhyming and rhyming words, and nonrhyming words with two kinds of signal enhancement: audiovisual and indexical. Analyses examined (a) stability of verbal working memory across middle childhood, (b) differences in verbal and nonverbal working memory, (c) effects of signal enhancement on recall, (d) phonological processing abilities, and (e) source of the diminished verbal working memory in adolescents with cochlear implants. Results: Verbal working memory remained stable across middle childhood. Adolescents across groups performed similarly for nonverbal stimuli, but those with CIs displayed poorer recall accuracy for verbal stimuli; signal enhancement did not improve recall. Poor phonological sensitivity largely accounted for the group effect. Conclusions: The central executive for working memory is not affected by hearing loss or cochlear implantation. Instead, the phonological deficit faced by adolescents with CIs degrades the representation in storage, and augmenting the signal does not help.
- Published
- 2024
- Full Text
21. No Differences in Auditory Steady-State Responses in Children with Autism Spectrum Disorder and Typically Developing Children
- Author
- Seppo P. Ahlfors, Steven Graham, Hari Bharadwaj, Fahimeh Mamashli, Sheraz Khan, Robert M. Joseph, Ainsley Losh, Stephanie Pawlyszyn, Nicole M. McGuiggan, Mark Vangel, Matti S. Hämäläinen, and Tal Kenet
- Abstract
Auditory steady-state response (ASSR) has been studied as a potential biomarker for abnormal auditory sensory processing in autism spectrum disorder (ASD), with mixed results. Motivated by prior somatosensory findings of group differences in inter-trial coherence (ITC) between ASD and typically developing (TD) individuals at twice the steady-state stimulation frequency, we examined ASSR at 25 and 50 Hz as well as 43 and 86 Hz in response to 25-Hz and 43-Hz auditory stimuli, respectively, using magnetoencephalography. Data were recorded from 22 ASD and 31 TD children, ages 6-17 years. ITC measures showed prominent ASSRs at the stimulation and double frequencies, without significant group differences. These results do not support ASSR as a robust biomarker of abnormal auditory processing in ASD. Furthermore, the previously observed atypical double-frequency somatosensory response in ASD did not generalize to the auditory modality. Thus, the hypothesis of modality-independent abnormal local connectivity in ASD was not supported.
- Published
- 2024
22. Developmental Effects in the 'Vocale Rapide dans le Bruit' Speech-in-Noise Identification Test: Reference Performances of Normal-Hearing Children and Adolescents
- Author
-
Lionel Fontan and Jeanne Desreumaux
- Abstract
Purpose: The main objective of this study was to assess the existence of developmental effects on performance on the Vocale Rapide dans le Bruit (VRB) speech-in-noise (SIN) identification test, recently developed for the French language, and to collect reference scores for children and adolescents. Method: Seventy-two native French speakers, aged 10-20 years, participated in the study. Each participant listened to and repeated four lists of eight sentences, each containing three key words to be scored. The sentences were presented in free field at different signal-to-noise ratios (SNRs) using a four-talker babble noise. The SNR yielding 50% correct repetition of key words (SNR[subscript 50]) was recorded for each list. Results: A strong relationship between age and SNR[subscript 50] was found, with better performance occurring at older ages (average drop in SNR[subscript 50] per year: 0.34 dB). Large differences (Cohen's d [greater than or equal to] 1.2) were observed between the SNR[subscript 50] achieved by 10- to 13-year-old participants and those of adults. For participants aged 14-15 years, the difference fell just above the 5% level of significance. No effects of hearing thresholds or level of education were observed. Conclusions: The study confirms the existence of developmental effects on SIN identification performance as measured using the VRB test and provides reference data for taking these effects into account in clinical practice. Explanations as to why age effects persist during adolescence are discussed.
- Published
- 2024
23. Evaluating Speaker-Listener Cognitive Effort in Speech Communication through Brain-to-Brain Synchrony: A Pilot Functional Near-Infrared Spectroscopy Investigation
- Author
-
Geoff D. Green II, Ewa Jacewicz, Hendrik Santosa, Lian J. Arzbecker, and Robert A. Fox
- Abstract
Purpose: We explore a new approach to the study of cognitive effort involved in listening to speech by measuring the brain activity of a listener in relation to the brain activity of a speaker. We hypothesize that the strength of this brain-to-brain synchrony (coupling) reflects the magnitude of cognitive effort involved in verbal communication, encompassing both listening effort and speaking effort. We investigate whether interbrain synchrony is greater in native-to-native versus native-to-nonnative communication using functional near-infrared spectroscopy (fNIRS). Method: Two speakers participated: a native speaker of American English and a native speaker of Korean who spoke English as a second language. Each speaker was fitted with an fNIRS cap and told short stories. The native English speaker provided the English narratives, and the Korean speaker provided both the nonnative (accented) English and Korean narratives. In separate sessions, fNIRS data were obtained from seven English monolingual participants, ages 20-24 years, who listened to each speaker's stories. After listening to each story in native and nonnative English, they retold the content, and their transcripts and audio recordings were analyzed for comprehension and discourse fluency, measured as the number of hesitations and the articulation rate. No story retellings were obtained for the narratives in Korean (an incomprehensible language for the English listeners). Utilizing an fNIRS technique termed sequential scanning, we quantified the brain-to-brain synchronization in each speaker-listener dyad. Results: For native-to-native dyads, multiple brain regions associated with various linguistic and executive functions were activated. Coupling was weaker for native-to-nonnative dyads, and only the brain regions associated with higher order cognitive processes and functions were synchronized.
All listeners understood the content of all stories, but they hesitated significantly more when retelling stories told in accented English. The nonnative speaker hesitated significantly more often than the native speaker and had a significantly slower articulation rate. There was no brain-to-brain coupling during listening to Korean, indicating a break in communication when listeners failed to comprehend the speaker. Conclusions: We found that effortful speech processing decreased interbrain synchrony and delayed comprehension processes. The obtained brain-based and behavioral patterns are consistent with our proposal that cognitive effort in verbal communication pertains to both the listener and the speaker and that brain-to-brain synchrony can be an indicator of differences in their cumulative communicative effort.
- Published
- 2024
24. The Effect of Collective Sight-Singing before Melodic Dictation: A Pilot Study
- Author
-
Caroline Caregnato, Ronaldo da Silva, Cristiane Hatsue Vital Otutumi, and Luciano Jeyson Santos da Rocha
- Abstract
Sight-singing and musical dictation are considered complementary activities by different Ear Training pedagogues, but, surprisingly, studies conducted with participants working individually have not found benefits of singing associated with dictation taking. This pilot study aims to observe the effect of sight-singing performed collectively before melodic dictation on dictation results. We carried out an experimental study involving 54 students from three universities, who were tested in situations emulating Ear Training classes. The experimental group performed a collective sight-singing before the dictation, and the control group remained silent during the activity. Statistical analyses demonstrated that the experimental group performed significantly better on dictation than the control group, in contrast to previous research, which did not observe contributions of sight-singing to dictation taking. We believe that collective sight-singing promotes cooperation between students, leading to better reading performance than individual activities and thus improving dictation results. Although our pilot study included a small number of participants, and future research is needed to expand upon it, it points to the potential benefits that collective activities could bring to the often-individualized instruction in Ear Training classes.
- Published
- 2024
25. Techniques and Resources for Teaching and Learning Bird Sounds
- Author
-
Caitlin Beebe and W. Douglas Robinson
- Abstract
The sounds of birds form the outdoor playlist of our lives. Birds appeal to the public in part because of the wide variety of interesting sounds they make. This popularity has led to a long history of amateur participation in ornithology, which has recently produced rapid increases in freely available online databases with hundreds of thousands of bird sounds recorded by birdwatchers. These databases provide unique opportunities for teachers to guide students through processes to learn to identify bird species by their sounds. The techniques we summarize here combine the auditory components of recognizing the different types of sounds birds make with the visual components of reading sonograms, which are widely available visual representations of sounds.
- Published
- 2024
26. Nasal/Oral Vowel Perception in French-Speaking Children with Cochlear Implants and Children with Typical Hearing
- Author
-
Sophie Fagniart, Véronique Delvaux, Bernard Harmegnies, Anne Huberlant, Kathy Huet, Myriam Piccaluga, Isabelle Watterman, and Brigitte Charlier
- Abstract
Purpose: The present study investigates the perception of vowel nasality in French-speaking children with cochlear implants (CIs; CI group) and children with typical hearing (TH; TH group) aged 4-12 years. By investigating the vocalic nasality feature in French, the study aims to document more broadly the effects of the acoustic limitations of CIs in processing segments characterized by acoustic cues that require optimal spectral resolution. The impact on performance of various factors related to the children's characteristics, such as chronological/auditory age, age of implantation, and exposure to cued speech, was studied, and the acoustic characteristics of the stimuli in the perceptual tasks were also investigated. Method: Identification and discrimination tasks involving French nasal and oral vowels were administered to two groups of children: 13 children with CIs (CI group) and 25 children with TH (TH group) divided into three age groups (4-6 years, 7-9 years, and 10-12 years). French nasal vowels were paired with their oral phonological counterparts (phonological pairing) as well as with the closest oral vowels in terms of phonetic proximity (phonetic pairing). Post hoc acoustic analyses of the stimuli were linked to performance in perception. Results: The results indicate an effect of auditory status on performance in the two tasks, with the CI group performing at a lower level than the TH group. However, the scores of the children in the CI group were well above chance level, exceeding 80%. The most common errors in identification were substitutions between nasal vowels and phonetically close oral vowels as well as confusions between the phoneme /u/ and other oral vowels. Phonetic pairs showed lower discrimination performance in the CI group, with great variability in the results.
Age effects were observed only in TH children for nasal vowel identification, whereas in children with CIs, a positive impact of cued speech practice and early implantation was found. Differential links between performance and acoustic characteristics were found within our groups, suggesting that in children with CIs, selective use of certain acoustic features, presumed to be better transmitted by the implant, leads to better perceptual performance. Conclusions: The study's results reveal specific challenges in children with CIs when processing segments characterized by fine spectral resolution cues. However, the CI children in our study appear to effectively compensate for these difficulties by utilizing various acoustic cues assumed to be well transmitted by the implant, such as cues related to the temporal resolution of stimuli.
- Published
- 2024
27. Investigating Perception to Production Transfer in Children with Cochlear Implants: A High Variability Phonetic Training Study
- Author
-
Hao Zhang, Xuequn Dai, Wen Ma, Hongwei Ding, and Yang Zhang
- Abstract
Purpose: This study builds upon an established effective training method to investigate the advantages of high variability phonetic identification training for enhancing lexical tone perception and production in Mandarin-speaking pediatric cochlear implant (CI) recipients, who typically face ongoing challenges in these areas. Method: Thirty-two Mandarin-speaking children with CIs were quasirandomly assigned into the training group (TG) and the control group (CG). The 16 TG participants received five sessions of high variability phonetic training (HVPT) within a period of 3 weeks. The CG participants did not receive the training. Perception and production of Mandarin tones were administered before (pretest) and immediately after (posttest) the completion of HVPT via lexical tone recognition task and picture naming task. Both groups participated in the identical pretest and posttest with the same time frame between the two test sessions. Results: TG showed significant improvement from pretest to posttest in identifying Mandarin tones for both trained and untrained speech stimuli. Moreover, perceptual learning of HVPT significantly facilitated trainees' production of T1 and T2 as rated by a cohort of 10 Mandarin-speaking adults with normal hearing, which was corroborated by acoustic analyses revealing improved fundamental frequency (F0) median for T1 and T2 production and enlarged F0 movement for T2 production. In contrast, TG children's production of T3 and T4 showed nonsignificant changes across two test sessions. Meanwhile, CG did not exhibit significant changes in either perception or production. Conclusions: The results suggest a limited and inconsistent transfer of perceptual learning to lexical tone production in children with CIs, which challenges the notion of a robust transfer and highlights the complexity of the interaction between perceptual training and production outcomes. 
Further research on individual differences with a longitudinal design is needed to optimize the training protocol or tailor interventions to better meet the diverse needs of learners.
- Published
- 2024
28. Mandarin-Speaking Amusics' Online Recognition of Tone and Intonation
- Author
-
Lirong Tang, Yangxiaoxue Xu, Shiting Yang, Xiangyun Meng, Boqi Du, Chen Sun, Li Liu, Qi Dong, and Yun Nan
- Abstract
Purpose: Congenital amusia is a neurogenetic disorder of musical pitch processing. Its linguistic consequences have been examined separately for speech intonation and lexical tones. However, in a tonal language such as Chinese, the processing of intonation and lexical tones interacts during online speech perception. Whether and how the musical pitch disorder might affect linguistic pitch processing during online speech perception remains unknown. Method: We investigated this question with intonation (question vs. statement) and lexical tone (rising Tone 2 vs. falling Tone 4) identification tasks using the same set of sentences, comparing behavioral and event-related potential measurements between Mandarin-speaking amusics and matched controls. We specifically focused on amusics without behavioral lexical tone deficits (the majority, i.e., pure amusics). Results: Results showed that, despite performing normally when tested on lexical tones in isolated words, pure amusics demonstrated inferior recognition relative to controls during sentence-level tone and intonation identification. Compared to controls, pure amusics had larger N400 amplitudes for question stimuli during the tone task and smaller P600 amplitudes in the intonation task. Conclusion: These data indicate that the musical pitch disorder affects both tone and intonation processing during sentence processing, even for pure amusics, whose lexical tone processing was intact when tested with words.
- Published
- 2024
29. Effects of Deep-Brain Stimulation on Speech: Perceptual and Acoustic Data
- Author
-
Yunjung Kim, Austin Thompson, and Ignatius S. B. Nip
- Abstract
Purpose: This study examined speech changes induced by deep-brain stimulation (DBS) in speakers with Parkinson's disease (PD) using a set of auditory-perceptual and acoustic measures. Method: Speech recordings from nine speakers with PD and DBS were compared between DBS-On and DBS-Off conditions using auditory-perceptual and acoustic analyses. Auditory-perceptual ratings included voice quality, articulation precision, prosody, speech intelligibility, and listening effort, obtained from 44 listeners. Acoustic measures were made of voicing proportion, second formant frequency slope, vowel dispersion, articulation rate, and range of fundamental frequency and intensity. Results: No significant changes were found between DBS-On and DBS-Off for the five perceptual ratings. Four of the six acoustic measures revealed significant differences between the two conditions. While articulation rate and acoustic vowel dispersion increased, voicing proportion and intensity range decreased from the DBS-Off to the DBS-On condition. However, a visual examination of the data indicated that the statistical significance was mostly driven by a small number of participants, while the majority did not show a consistent pattern of such changes. Conclusions: Our data, in general, indicate no-to-minimal changes in speech production ensuing from DBS. The findings are discussed with a focus on the large interspeaker variability in speech characteristics in PD and the potential effects of DBS on speech.
- Published
- 2024
30. Speech Sound Categories Affect Lexical Competition: Implications for Analytic Auditory Training
- Author
-
Kristi Hendrickson, Katlyn Bay, Philip Combiths, Meaghan Foody, and Elizabeth Walker
- Abstract
Objectives: We provide a novel application of psycholinguistic theories and methods to the field of auditory training to provide preliminary data regarding which minimal pair contrasts are more difficult for listeners with typical hearing to distinguish in real-time. Design: Using eye-tracking, participants heard a word and selected the corresponding image from a display of four: the target word, two unrelated words, and a word from one of four contrast categories (i.e., voiced-initial [e.g., "peach-beach"], voiced-final [e.g., "back-bag"], manner-initial [e.g., "talk-sock"], and manner-final [e.g., "bat-bass"]). Results: Fixations were monitored to measure how strongly words compete for recognition depending on the contrast type (voicing, manner) and location (word-initial or final). Manner contrasts competed more for recognition than did voicing contrasts, and contrasts that occurred in word-final position were harder to distinguish than word-initial position. Conclusion: These results are an important initial step toward creating an evidence-based hierarchy for auditory training for individuals who use cochlear implants.
- Published
- 2024
31. Syllable Position Effects in the Perception of L2 Portuguese /l/ and /[voiced alveolar tap or flap]/ by L1-Mandarin Learners
- Author
-
Chao Zhou and Anabela Rato
- Abstract
This study reports syllable position effects on second language (L2) Portuguese speech perception, revealing that L2 segmental learning may be prone to an influence from the suprasegmental level. The results show that first language (L1) Mandarin learners had diminished performance on the discrimination between the target Portuguese liquids (/l/ and /[voiced alveolar tap or flap]/) and their position-dependent deviant productions, suggesting that the cause of their perceptual confusability differs across syllable positions. Another syllabic position effect was attested in the acquisition order (/l/[subscript onset] > /l/[subscript coda], /[voiced alveolar tap or flap]/[subscript coda] > /[voiced alveolar tap or flap]/[subscript onset]), demonstrating that an L2 sound is not mastered equally in all positions. Furthermore, we also observed that an increase in L2 experience affected only the perceptual identification accuracy of [l], but not of [[voiced alveolar tap or flap]]. This seems to suggest that L2 experience may exert different degrees of impact, depending on the L2 segments. Both theoretical and methodological implications of these results are discussed.
- Published
- 2024
32. The Relationship between Perception and Production of Illusory Vowels in a Second Language
- Author
-
Song Yi Kim and Jeong-Im Han
- Abstract
Korean learners of English are known to repair consonant clusters, which are not allowed in their native language, with an epenthetic vowel [close central unrounded vowel]. The purpose of the present study is to examine whether the perception-production link of such an illusory vowel in a second language (L2) is only within and not across processing levels, as proposed in a previous study regarding L2 segments. We assessed the perception and production of English onset clusters by Korean learners and native English speakers at the prelexical (AX discrimination and pseudoword read-aloud tasks) and lexical (lexical decision and picture-naming tasks) levels, using the same participants and stimuli across the tasks. Results showed that accuracy in not producing an epenthetic vowel between the two consonants of onset cluster was not significantly associated with accurate perception of the cluster either within or across processing levels. The results suggest that production and perception accuracy in L2 phonotactics are independent to a certain extent.
- Published
- 2024
33. On the Effects of Task Focus and Processing Level on the Perception-Production Link in Second-Language Speech Learning
- Author
-
Miquel Llompart
- Abstract
This study presents a reanalysis of existing data to investigate whether a relationship between perception and production abilities regarding a challenging second-language (L2) phonological contrast is observable (a) when both modalities must rely on accessing stored lexical representations and (b) when there is an asymmetry in task focus between perception and production. In the original studies, German learners of English were tested on their mastery of the English /[open-mid front unrounded vowel]/-/ae/ contrast in an auditory lexical decision task with phonological substitutions, a word-reading task, and a segmentally focused imitation task. Results showed that accurate nonword rejection in the lexical decision task was predicted by the Euclidean distance between the two vowels in word reading but not in imitation. These results extend previous findings to lexical perception and production, highlight the influence of task focus on the degree of coupling between the two modalities, and may have important implications for pronunciation training methods.
- Published
- 2024
34. Auditory Category Learning in Children with Dyslexia
- Author
-
Casey L. Roark, Vishal Thakkar, Bharath Chandrasekaran, and Tracy M. Centanni
- Abstract
Purpose: Developmental dyslexia is proposed to involve selective procedural memory deficits with intact declarative memory. Recent research in the domain of category learning has demonstrated that adults with dyslexia have selective deficits in Information-Integration (II) category learning, which is proposed to rely on procedural learning mechanisms, and unaffected Rule-Based (RB) category learning, which is proposed to rely on declarative, hypothesis-testing mechanisms. Importantly, learning mechanisms also change across development, with distinct developmental trajectories in both procedural and declarative learning mechanisms. It is unclear how dyslexia in childhood should influence auditory category learning, a critical skill for speech perception and reading development. Method: We examined auditory category learning performance and strategies in 7- to 12-year-old children with dyslexia (n = 25; nine females, 16 males) and typically developing controls (n = 25; 13 females, 12 males). Participants learned nonspeech auditory categories of spectrotemporal ripples that could be optimally learned with either RB selective attention to the temporal modulation dimension or procedural integration of information across spectral and temporal dimensions. We statistically compared performance using mixed-model analyses of variance and identified strategies using decision-bound computational models. Results: We found that children with dyslexia have an apparent selective RB category learning deficit, rather than the selective II learning deficit observed in prior work in adults with dyslexia. Conclusion: These results suggest that the important skill of auditory category learning is impacted in children with dyslexia and that, throughout development, individuals with dyslexia may develop compensatory strategies that preserve declarative learning while developing difficulties in procedural learning.
- Published
- 2024
35. Dalcroze Method and Rhythm in Music Education in Turkey
- Author
-
Apaydin, Özkan
- Abstract
The Swiss composer, academician, and music educator Émile Jaques-Dalcroze brought a new perspective to education with different methods, especially for children's acquisition of a sense of rhythm and improvisation skills, known in the related literature as the Dalcroze method. In this study, the role and functional dimensions of the Dalcroze method and of rhythm, which is envisaged in music lesson curricula and accepted as the skeleton of music, were investigated in Turkey. For this purpose, the survey method was used, and both national and international sources were examined. In addition, the study discusses the basic principles of the Dalcroze method and the processes by which the method was formed and disseminated. The results reveal that the philosophy and main principles of the Dalcroze method, implemented since the 1920s, constitute an approach that puts the student at the center. The method especially gives children the chance to learn by experience, rather than through an oppressive, compelling, or purely talent-based approach to music education. It focuses on an approach that supports social development, self-confidence, and creativity alongside musical development. In addition, in Turkey, with the transition to constructivist education since 2005, there has been an increase in research and applications of student-centered educational approaches. However, the method is not yet widespread enough.
- Published
- 2023
36. Musical Hearing and the Acquisition of Foreign-Language Intonation
- Author
-
Jekiel, Mateusz and Malarski, Kamil
- Abstract
The present study seeks to determine whether superior musical hearing is correlated with successful production of second language (L2) intonation patterns. Fifty Polish speakers of English at the university level were recorded before and after an extensive two-semester accent training course in English. Participants were asked to read aloud a series of short dialogues containing different intonation patterns, complete two musical hearing tests measuring tone deafness and melody discrimination, and fill out a survey regarding musical experience. We visually analyzed and assessed participants' intonation by comparing their F[subscript 0] contours with the model provided by their accent training teachers following ToBI (Tones and Break Indices) guidelines, and compared the results with the musical hearing test scores and the survey responses. The results suggest that more accurate pitch perception can be related to more correct production of L2 intonation patterns, as participants with a superior musical ear produced more native-like speech contours after training, similar to those of their teachers. After dividing participants into four categories based on their musical hearing test scores and musical experience, we also observed that some students with better musical hearing test scores were able to produce more correct L2 intonation patterns. However, students with poor musical hearing test scores and no musical background also improved, suggesting that the acquisition of L2 intonation in a formal classroom setting can be successful regardless of one's musical hearing skills.
- Published
- 2023
37. Opportunity to Provide Augmented Reality Media for the Intervention of Communication, Perception, Sound, and Rhythm for Deaf Learners Based on Cultural Context
- Author
-
Subagya, Arsy Anggrellanggi, Erma Kumala Sari, and Priyono
- Abstract
The development of communication, perception, sound, and rhythm (DCPSR) is a learning subject that provides stimulation and intervention for the appreciation of sound, whether intentional or unintentional, so that the residual hearing and the sensation of vibration possessed by students with hearing impairment can be used as well as possible. This study aims to describe the current conditions for the implementation of DCPSR in special schools and the need for augmented reality (AR)-based DCPSR media. Data were collected by distributing questionnaires through a Google Form to an accidental sample of 131 special education teachers in Indonesia and through focus group discussions (FGDs). Instrument validity was established through content validity, and reliability through interrater reliability. The data were analyzed using the descriptive-qualitative method. The results showed that 18.32% of teachers did not even understand the concept of DCPSR, while 72.51% of teachers thought that DCPSR needed to be taught. Schools still use conventional media (54.2%), and 99.24% of teachers feel the need for innovative DCPSR media in the form of AR-based media, especially for communication (oral motor) material (34.45%) and sound and rhythm perception (detection) material (46.56%). The analysis shows that teachers of the deaf need AR-based DCPSR media that are easy to operate and access, such as smartphones with additional facilities for audio, images, captions, and cues. The FGD results showed that development of this AR-based media should prioritize sound discrimination material on the essential sounds around students.
- Published
- 2023
38. Iranian EFL Teachers' Oral/Aural Skills Language Assessment Literacy: Instrument Development and Validation
- Author
-
Kobra Tavassoli and Zahra Sorat
- Abstract
Despite widespread studies on language assessment literacy (LAL), there are still many unexplored areas of LAL (Gan & Lam, 2022). One of these areas is identifying various aspects of LAL regarding different language skills and scrutinizing English as a foreign language (EFL) teachers' involvement with these aspects. Accordingly, this study attempted to (a) explore Iranian EFL teachers' perceptions, preferences, and difficulties regarding oral/aural skills LAL and (b) develop a scale to measure these teachers' oral/aural skills LAL. The study was carried out in two phases. First, semi-structured interviews were conducted with 10 Iranian EFL teachers to identify their perceptions, preferences, and difficulties regarding oral/aural skills LAL. Second, the researchers developed a questionnaire based on a review of the literature on assessing oral/aural skills and the results of the interviews. The questionnaire was reviewed by experts, revised accordingly, and administered to 150 Iranian EFL teachers selected through convenience sampling. The reliability of the questionnaire and its construct validity were then checked. The results of the two phases of the study were compatible. The outcomes showed that almost all teachers expressed dissatisfaction with their oral/aural skills LAL and were enthusiastic about participating in assessment training courses. Furthermore, it was found that, due to their lack of knowledge about oral/aural skills assessment, traditional assessment techniques were widely used by Iranian EFL teachers.
- Published
- 2023
39. The Power of the Voice in Facilitating and Maintaining Online Presence in the Era of Zoom and Teams
- Author
-
Cribb, Michael
- Abstract
With the lockdowns of the COVID-19 pandemic and the increasing popularity of videoconferencing software such as Zoom, the move to online and/or hybrid teaching has never been more rapid. With this change, however, maintaining presence in the classroom has become a great challenge simply because of the nature of online teaching. Presence is a teaching quality that enables the teacher to "own the room" and create an atmosphere of focus and inspiration. With the loss of face-to-face contact and the diminution of body language that online teaching entails, the teacher has to rely more and more on their own voice to hold presence in the class. While voice has always been an important tool in the teacher's expressive armoury, it takes on a more central role in online teaching and can be the only element that connects teachers to students. Yet many teachers still front classes where voice audio quality is severely restricted, due in part to poor choices of microphone and setup on their part. In this article I discuss the notion of presence in online classrooms with regard to voice, and show how teachers can maintain and manipulate this feature in order to retain appeal for students.
- Published
- 2023
40. Learning L2 Pronunciation with Google Translate
- Author
-
Khademi, Hamidreza and Cardoso, Walcir
- Abstract
This article, based on Khademi's (2021) Master's thesis, examines the use of Google Translate (GT) and its speech capabilities, Text-to-Speech Synthesis (TTS) and Automatic Speech Recognition (ASR), in helping L2 learners acquire the pronunciation of English past -ed allomorphy (/t/, /d/, /id/) in a semi-autonomous context, considering three levels of pronunciation development: phonological awareness, perception, and production. Our pre/posttest results indicate significant improvements in the participants' awareness and perception of the English past -ed, but no improvements in production (except for /id/). These findings corroborate our hypothesis that GT's speech capabilities can be used as pedagogical tools to help learners acquire the target pronunciation feature. [For the complete volume, "Intelligent CALL, Granular Systems and Learner Data: Short Papers from EUROCALL 2022 (30th, Reykjavik, Iceland, August 17-19, 2022)," see ED624779.]
- Published
- 2022
41. Using an Online High-Variability Phonetic Training Program to Develop L2 Learners' Perception of English Fricatives
- Author
-
Iino, Atsushi and Wistner, Brian
- Abstract
This study investigated the degree to which Japanese learners of English accurately perceive English fricatives over time and the extent to which fricatives were misidentified. To train and measure perception skills, an online high-variability phonetic training program was used in an English as a Foreign Language (EFL) class in Japan for eight weeks. The results indicated that learners' perception of some of the fricatives improved over time, while others remained difficult to distinguish from other fricatives. Implications for EFL pronunciation instruction are considered. [For the complete volume, "Intelligent CALL, Granular Systems and Learner Data: Short Papers from EUROCALL 2022 (30th, Reykjavik, Iceland, August 17-19, 2022)," see ED624779.]
- Published
- 2022
42. Accent Difference Makes No Difference to Phoneme Acquisition
- Author
-
Jones, Marc and Blume, Carolyn
- Abstract
ELT materials tend to use prestige variety speakers as models, an underlying assumption being that this is needed in order to acquire the phonology necessary to parse English speech (Rose & Galloway, 2019). Global Englishes Language Teaching (GELT) (Galloway & Rose, 2018) provides the potential for movement away from such 'native speaker' ideologies, but lacks empirical evidence. In this study, the use of GELT input in comparison with prestige varieties of English was investigated. Sixteen first-year L1 Japanese university students in an English Medium Instruction programme participated in a self-paced listening study via a learning management system (LMS). All participants were tested on their perception of the English vowels /æ/, /ʌ/, /ɜ/, and /ɔ/. After this pretest, they were separated into two groups: using edited TED talks, the experimental group (G) (N=8) watched videos of Global English varieties, and the control group (P) (N=8) watched videos of prestige English varieties. Both groups showed losses, i.e., immediate posttest scores were mainly lower than pretest scores on vowel identification. Scores were predicted by the variation in the interval between lessons and posttest, but not by the varieties of English used. This provides support for the view that GELT is as valid a language teaching approach as using prestige varieties.
- Published
- 2022
43. Auto-Scoring of Student Speech: Proprietary vs. Open-Source Solutions
- Author
-
Daniels, Paul
- Abstract
This paper compares the speaking scores generated by two online systems that are designed to automatically grade student speech and provide personalized speaking feedback in an EFL context. The first system, "Speech Assessment for Moodle" ("SAM"), is an open-source solution developed by the author that makes use of Google's speech recognition engine to transcribe speech into text which is then automatically scored using a phoneme-based algorithm. "SAM" is designed as a custom quiz type for "Moodle," a widely adopted open-source course management system. The second auto-scoring system, "EnglishCentral," is a popular proprietary language learning solution which utilizes a trained intelligibility model to automatically score speech. Results of this study indicated a positive correlation between the speaking scores generated by both systems, meaning students who scored higher on the "SAM" speaking tasks also tended to score higher on the "EnglishCentral" speaking tasks and vice versa. In addition to comparing the scores generated from these two systems against each other, students' computer-scored speaking scores were compared to human-generated scores from small-group face-to-face speaking tasks. The results indicated that students who received higher scores with the online computer-graded speaking tasks tended to score higher on the human-graded small-group speaking tasks and vice versa.
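The phoneme-based scoring and score-correlation analysis described above can be sketched in miniature. This is an illustrative assumption of how such a pipeline might work, not the actual "SAM" or "EnglishCentral" algorithm; the phoneme sequences and per-student scores below are hypothetical:

```python
from difflib import SequenceMatcher

def phoneme_score(target, recognized):
    """Percent similarity between target and recognized phoneme sequences,
    via longest-matching-subsequence ratio."""
    return 100 * SequenceMatcher(None, target, recognized).ratio()

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical ARPAbet target vs. ASR output for one utterance ("the cat")
target = ["DH", "AH", "K", "AE", "T"]
recognized = ["DH", "AH", "K", "AH", "T"]
score = phoneme_score(target, recognized)  # one substitution out of five

# Hypothetical per-student scores from two auto-scoring systems
sam_scores = [62, 75, 81, 90, 58, 70]
ec_scores = [60, 78, 79, 94, 55, 73]
r = pearson(sam_scores, ec_scores)  # positive r mirrors the reported finding
```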
- Published
- 2022
44. Amplitude Modulation Perception and Cortical Evoked Potentials in Children with Listening Difficulties and Their Typically Developing Peers
- Author
-
Lauren Petley, Chelsea Blankenship, Lisa L. Hunter, Hannah J. Stewart, Li Lin, and David R. Moore
- Abstract
Purpose: Amplitude modulations (AMs) are important for speech intelligibility, and deficits in speech intelligibility are a leading source of impairment in childhood listening difficulties (LiD). The present study aimed to explore the relationships between AM perception and speech-in-noise (SiN) comprehension in children and to determine whether deficits in AM processing contribute to childhood LiD. Evoked responses were used to parse the neural origins of AM processing. Method: Forty-one children with LiD and 44 typically developing children, ages 8-16 years, participated in the study. Behavioral AM depth thresholds were measured at 4 and 40 Hz. SiN tasks included the Listening in Spatialized Noise-Sentences Test (LiSN-S) and a coordinate response measure (CRM)-based task. Evoked responses were obtained during an AM change detection task using alternations between 4 and 40 Hz, including the N1 of the acoustic change complex, auditory steady-state response (ASSR), P300, and a late positive response (late potential [LP]). Maturational effects were explored via age correlations. Results: Age correlated with 4-Hz AM thresholds, CRM separated talker scores, and N1 amplitude. Age-normed LiSN-S scores obtained without spatial or talker cues correlated with age-corrected 4-Hz AM thresholds and area under the LP curve. CRM separated talker scores correlated with AM thresholds and area under the LP curve. Most behavioral measures of AM perception correlated with the signal-to-noise ratio and phase coherence of the 40-Hz ASSR. AM change response time also correlated with area under the LP curve. Children with LiD exhibited deficits with respect to 4-Hz thresholds, AM change accuracy, and area under the LP curve. Conclusions: The observed relationships between AM perception and SiN performance extend the evidence that modulation perception is important for understanding SiN in childhood.
In line with this finding, children with LiD demonstrated poorer performance on some measures of AM perception, but their evoked responses implicated a primarily cognitive deficit.
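The 4- and 40-Hz AM stimuli used in depth-threshold tasks like those above are conventionally sinusoidally amplitude-modulated tones, with depth expressed in dB relative to full (100%) modulation. A minimal sketch, with hypothetical parameter values (the study's actual carrier, level, and duration are not given here):

```python
import math

def am_tone(carrier_hz, mod_hz, depth, dur_s, fs):
    """Sinusoidally amplitude-modulated tone:
    (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t), with modulation depth m."""
    n = int(dur_s * fs)
    return [
        (1 + depth * math.sin(2 * math.pi * mod_hz * i / fs))
        * math.sin(2 * math.pi * carrier_hz * i / fs)
        for i in range(n)
    ]

def depth_db(depth):
    """AM depth in dB relative to 100% modulation (m = 1 -> 0 dB)."""
    return 20 * math.log10(depth)

# Hypothetical 40-Hz AM stimulus: 1 kHz carrier, 50% depth, 100 ms at 16 kHz
sig = am_tone(carrier_hz=1000, mod_hz=40, depth=0.5, dur_s=0.1, fs=16000)
```

A 50% depth corresponds to about -6 dB; thresholds are typically reported as the smallest depth (most negative dB value) at which the modulation is still detected.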
- Published
- 2024
45. Prediction in Language Processing and Learning
- Author
-
Yi-Lun Weng
- Abstract
Understanding how a child's language system develops into an adult-like system is a central question in language development research. An increasingly influential account proposes that the brain constantly generates top-down predictions and matches them against incoming input, with higher-level cognitive models serving to minimize prediction errors at lower levels of the processing hierarchy. However, the role of prediction in facilitating language development remains unclear. This dissertation aims to understand the contribution of prediction to language development by addressing the following research questions: First, we examined developmental differences in the ability to detect prediction errors in auditory input by assessing whether adults and children differ in perceiving and detecting temporal statistics in speech using an artificial language learning task (Chapter 2). To overcome limitations of traditional prediction measures, we developed a new metric that captures individuals' real-time prediction during incremental sentence processing by combining simultaneous EEG and eye-tracking with the visual world paradigm (Chapter 3). Applying these real-time prediction metrics, we investigated the relationship between linguistic prediction and learning performance in adults (Chapter 4) and children (Chapter 5). In Chapter 4, we found that the efficacy of updating existing verb biases in skilled language users is associated with 1) real-time predictive processes during learning, 2) predictive ability in a different context, and 3) prediction errors experienced during learning. In Chapter 5, we examined the relationship between prediction and verb bias learning in school-age children, who have developed verb biases but show greater sensitivity to distributional information than adults.
We found that children's ability to learn novel combinatorial information about known verbs is closely tied to their prediction skills, potentially even more so than in adults. Collectively, these studies further knowledge about the role of prediction in supporting real-time language processing and development, providing novel insights into whether and how prediction occurs over the course of language development and during real-time language processing. Ultimately, these findings shed light on prediction-based learning frameworks and suggest directions for future research in language development. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by Telephone 1-800-521-0600. Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
- Published
- 2024
46. Exploring Audiovisual Speech Perception and Literacy Skills in Monolingual and Bilingual Children in Uzbekistan
- Author
-
Shakhlo Nematova
- Abstract
Prior research has extensively explored audiovisual speech perception and literacy skills in various linguistic contexts. Studies have shown that children's ability to integrate auditory and visual speech cues plays a crucial role in language acquisition and development (Erdener & Burnham, 2013; Lalonde & Werner, 2021). Furthermore, research highlights the importance of visual speech cues for phonological awareness and reading skills (Erdener & Burnham, 2013; Burnham, 2003). However, much of this research has been conducted within English and Indo-European language contexts (Kidd & Garcia, 2022; Share, 2008), potentially limiting the generalizability of current language and literacy theories to a broader range of languages and linguistic contexts. Hence, this study aims to advance our comprehension of language and literacy development by investigating the connections between bilingualism, multimodal speech perception, and reading proficiency in the context of Uzbekistan. This research focuses on the Uzbek and Russian languages, which have received comparatively less scholarly attention, thus offering a unique opportunity to enrich existing knowledge on language acquisition and literacy in diverse linguistic settings. The first study, focusing on audiovisual speech perception skills in monolingual and bilingual children, found that bilingualism did not significantly impact audiovisual speech perception and that the degree of cross-linguistic similarity between languages might influence the extent of reliance on visual speech cues in bilingual children, highlighting the importance of considering language-specific factors in audiovisual speech perception. The second study delved into how reading skills predict sensitivity to audiovisual and auditory speech cues, showing that reading skills predict heightened sensitivity to both audiovisual and auditory speech cues in the context of languages with transparent orthographies.
The third study revealed that bilingual individuals demonstrate a reading advantage when tested in instructional languages but not in noninstructional languages; this study also showed that bidirectional cross-linguistic transfer of literacy skills across languages is possible, regardless of writing script differences. Overall, these studies collectively broaden our understanding of language acquisition and literacy development, emphasizing the significance of linguistic diversity, cross-linguistic similarities, and contextual factors in shaping children's language and literacy development.
- Published
- 2024
47. The Relationship between Speech Accuracy and Linguistic Measures in Narrative Retells of Children with Speech Sound Disorders
- Author
-
Julie Case and Anna Eva Hallin
- Abstract
Background: Speech and language are interconnected systems, and language disorder often co-occurs with childhood apraxia of speech (CAS) and non-CAS speech sound disorders (SSDs). Potential trade-off effects between speech and language in connected speech in children without overt language disorder have been less explored. Method: Story retell narratives from 24 children (aged 5;0-6;11 [years;months]) with CAS, non-CAS SSD, and typical development were analyzed in Systematic Analysis of Language Transcripts (SALT) regarding morphosyntactic complexity (mean length of C-unit in words [MLCU]), lexical diversity (moving-average type-token ratio [MATTR]), and linguistic accuracy (any linguistic error/bound morpheme omissions) and compared to 128 age-matched children from the SALT database. Linear and mixed-effects logistic regressions were performed with speech accuracy (percent phonemes correct [PPC]) and diagnostic group as predictors of the narrative variables. Results: PPC predicted all narrative variables. Poorer PPC was associated with lower MLCU and MATTR as well as a higher likelihood of linguistic errors. Group differences were only observed for the error variables. Comparison to the SALT database indicated that 13 of 16 children with CAS and SSD showed a higher-than-expected proportion of linguistic errors, with a small proportion explained by individual speech errors only. Conclusions: The high occurrence of linguistic errors, combined with the relationship between PPC and linguistic errors in children with CAS/SSD, suggests a trade-off between speech accuracy and language output. Longitudinal studies are needed to investigate whether children with SSDs without language disorder show more language difficulties over time as linguistic demands increase.
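Two of the measures named above are simple enough to sketch directly: the moving-average type-token ratio (MATTR) averages the type-token ratio over a sliding window of fixed length, and percent phonemes correct (PPC) is a straightforward proportion. The transcript tokens and phoneme counts below are hypothetical, not study data:

```python
def mattr(tokens, window=50):
    """Moving-average type-token ratio: mean TTR over a sliding window,
    which reduces TTR's sensitivity to sample length."""
    if len(tokens) < window:
        return len(set(tokens)) / len(tokens)
    ttrs = [
        len(set(tokens[i:i + window])) / window
        for i in range(len(tokens) - window + 1)
    ]
    return sum(ttrs) / len(ttrs)

def ppc(correct_phonemes, total_phonemes):
    """Percent phonemes correct."""
    return 100 * correct_phonemes / total_phonemes

# Hypothetical tokens from a story retell transcript
tokens = ("the dog ran to the park and the dog saw a ball "
          "then the dog took the ball home").split()
diversity = mattr(tokens, window=10)
accuracy = ppc(correct_phonemes=85, total_phonemes=100)  # 85.0
```

SALT computes these (and MLCU) from coded transcripts; the sketch only shows what the two indices measure.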
- Published
- 2024
48. Dynamic Temporal and Tactile Cueing: Quantifying Speech Motor Changes and Individual Factors That Contribute to Treatment Gains in Childhood Apraxia of Speech
- Author
-
Maria I. Grigos, Julie Case, Ying Lu, and Zhuojun Lyu
- Abstract
Purpose: Speech motor skill is refined over the course of practice, which is commonly reflected by increased accuracy and consistency. This research examined the relationship between auditory-perceptual ratings of word accuracy and measures of speech motor timing and variability at pre- and posttreatment in children with childhood apraxia of speech (CAS). Furthermore, the degree to which individual patterns of baseline probe word accuracy, receptive language, and cognition predicted response to treatment was explored. Method: Probe data were collected from seven children with CAS (aged 2;5-5;0 [years;months]) who received 6 weeks of Dynamic Temporal and Tactile Cueing (DTTC) treatment. Using a multidimensional approach to measuring speech performance, auditory-perceptual (whole-word accuracy), acoustic (whole-word duration), and kinematic (jaw movement variability) analyses were conducted on probe words produced pre- and posttreatment. Standardized tests of receptive language and cognition were administered pretreatment. Results: There was a negative relationship between auditory-perceptual measures of word accuracy and movement variability. Higher word accuracy was associated with lower jaw movement variability following intervention. There was a strong relationship between word accuracy and word duration at baseline, which became less robust posttreatment. Furthermore, baseline word accuracy was the only child-specific factor to predict response to DTTC treatment. Conclusions: Following a period of motor-based intervention, children with CAS appeared to refine speech motor control in conjunction with improvements in word accuracy. Those who demonstrated the poorest performance at treatment onset displayed the greatest degree of gains. Taken together, these results reflect a system-wide change following motor-based intervention.
- Published
- 2024
49. Naive Listener Ratings of Speech Intelligibility over the Course of Motor-Based Intervention in Children with Childhood Apraxia of Speech
- Author
-
Emily W. Wang and Maria I. Grigos
- Abstract
Purpose: The aim of this study was to describe changes in speech intelligibility and interrater and intrarater reliability of naive listeners' ratings of words produced by young children diagnosed with childhood apraxia of speech (CAS) over a period of motor-based intervention (dynamic temporal and tactile cueing [DTTC]). Method: A total of 120 naive listeners (i.e., listeners without experience listening to children with speech and/or language impairments; age range: 18-45 years) orthographically transcribed single-word productions by five children (age range: 2;6-3;11 [years;months]) across three time points over an intervention period (baseline, post-treatment, maintenance). Changes in intelligibility and interrater and intrarater reliability were examined within and across time points. Results: Speech intelligibility significantly increased in children with CAS over the course of treatment, and these gains were also maintained at 6 weeks posttreatment. There was poor-to-fair consistency "between" listeners (interrater reliability) and excellent consistency within listeners (intrarater reliability) in ratings of speech intelligibility within and across time points. Conclusions: Motor-based intervention increases speech intelligibility following a period of DTTC treatment. Variability among naive listeners of speech intelligibility was also present, with intrarater reliability (within listeners) yielding greater consistency than interrater reliability (between listeners). The implications for including naive listeners as raters of speech intelligibility for research and clinical purposes are discussed.
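Orthographic-transcription intelligibility of the kind described above is typically scored as the percentage of target words a listener writes down correctly. A minimal sketch, with hypothetical target words and listener transcriptions (not the study's stimuli):

```python
def intelligibility(target_words, transcribed_words):
    """Percent of target words a listener transcribed correctly
    (position-matched, exact orthographic match)."""
    hits = sum(t == r for t, r in zip(target_words, transcribed_words))
    return 100 * hits / len(target_words)

# Hypothetical single-word targets and two listeners' transcriptions
targets = ["ball", "puppy", "water", "cookie", "shoe"]
listener_a = ["ball", "puppy", "walker", "cookie", "two"]
listener_b = ["ball", "baby", "water", "cookie", "shoe"]

score_a = intelligibility(targets, listener_a)  # 3/5 words correct
score_b = intelligibility(targets, listener_b)  # 4/5 words correct
```

Interrater reliability then asks how consistent such scores are across listeners (here, listeners A and B disagree on several items), while intrarater reliability compares the same listener's scores on repeated presentations.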
- Published
- 2024
50. Dual-Task Interference in the Assessment of Listening Effort: Results of Normal-Hearing Adults, Cochlear Implant Users, and Hearing Aid Users
- Author
-
Dorien Ceuleers, Sofie Degeest, Freya Swinnen, Nele Baudonck, Katrien Kestens, Ingeborg Dhooge, and Hannah Keppler
- Abstract
Purpose: The purpose of the current study was to assess dual-task interference (i.e., changes between the dual-task and baseline condition) in a listening effort dual-task paradigm in normal-hearing (NH) adults, hearing aid (HA) users, and cochlear implant (CI) users. Method: Three groups of 31 participants were included: (a) NH adults, (b) HA users, and (c) CI users. The dual-task paradigm consisted of a primary speech understanding task in a quiet condition, a favorable noise condition, and an unfavorable noise condition, and a secondary visual memory task. Dual-task interference was calculated for both tasks, and participants were classified based on their patterns of interference. Descriptive analyses were established and differences between the three groups were examined. Results: The descriptive results showed varying patterns of dual-task interference between the three listening conditions. Most participants showed the pattern of visual memory interference (i.e., worse results for the secondary task in the dual-task condition and no difference for the primary task) in the quiet condition, whereas the pattern of speech understanding priority trade-off (i.e., worse results for the secondary task in the dual-task condition and better results for the primary task) was most prominent in the unfavorable noise condition. This shift was seen particularly in HA and CI users. However, the patterns of dual-task interference were not statistically different between the three groups. Conclusions: Results of this study may provide additional insight into the interpretation of dual-task paradigms for measuring listening effort in diverse participant groups. It highlights the importance of considering both the primary and secondary tasks for accurate interpretation of results.
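Dual-task interference of the kind assessed above is commonly quantified as a proportional dual-task cost: the change in performance from the single-task baseline to the dual-task condition, relative to baseline. A minimal sketch under that assumption (the study's exact formula and the scores below are hypothetical):

```python
def dual_task_cost(single, dual):
    """Proportional dual-task cost (%), for measures where higher is better.
    Positive values indicate a performance decrement under dual-task load;
    negative values indicate improvement relative to baseline."""
    return 100 * (single - dual) / single

# Hypothetical scores: primary speech task (% correct), secondary memory task
speech_cost = dual_task_cost(single=92.0, dual=90.0)  # small primary-task cost
memory_cost = dual_task_cost(single=80.0, dual=60.0)  # larger secondary-task cost
```

Classifying participants by the sign and size of the cost on each task yields the interference patterns described in the abstract, e.g., a near-zero primary-task cost with a large secondary-task cost corresponds to "visual memory interference."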
- Published
- 2024