12,618 results for "Speech Production"
Search Results
2. TEleRehabilitation foR Aphasia (TERRA) phase II trial design
- Author
- Cassarly, Christy, Basilakos, Alexandra, Johnson, Lisa, Wilmskoetter, Janina, Elm, Jordan, Hillis, Argye E., Bonilha, Leonardo, Rorden, Chris, Hickok, Gregory, den Ouden, Dirk-Bart, and Fridriksson, Julius
- Published
- 2024
- Full Text
- View/download PDF
3. Effects of testosterone on speech production and perception: Linking hormone levels in males to vocal cues and female voice attractiveness ratings
- Author
- Weirich, Melanie, Simpson, Adrian P., and Knutti, Nadine
- Published
- 2024
- Full Text
- View/download PDF
4. Common Coding of Speech Imitation
- Author
- Adank, Patti, Wilt, Hannah, Genschow, Oliver, editor, and Cracco, Emiel, editor
- Published
- 2025
- Full Text
- View/download PDF
5. Mapping subcortical brain lesions, behavioral and acoustic analysis for early assessment of subacute stroke patients with dysarthria.
- Author
- Liu, Juan, Ruzi, Rukiye, Jian, Chuyao, Wang, Qiuyu, Zhao, Shuzhi, Ng, Manwa L., Zhao, Shaofeng, Wang, Lan, and Yan, Nan
- Subjects
- GLOBUS pallidus, SPEECH disorders, BASAL ganglia, CAUDATE nucleus, VOXEL-based morphometry
- Abstract
Introduction: Dysarthria is a motor speech disorder frequently associated with subcortical damage. However, the precise roles of the subcortical nuclei, particularly the basal ganglia and thalamus, in the speech production process remain poorly understood. Methods: The present study aimed to better understand their roles by mapping neuroimaging, behavioral, and speech data obtained from subacute stroke patients with subcortical lesions. Multivariate lesion-symptom mapping and voxel-based morphometry methods were employed to correlate lesions in the basal ganglia and thalamus with speech production, with emphases on linguistic processing and articulation. Results: The present findings revealed that the left thalamus and putamen are significantly correlated with concept preparation (r = 0.64, p < 0.01) and word retrieval (r = 0.56, p < 0.01). As the difficulty of the behavioral tasks increased, the influence of cognitive factors on early linguistic processing gradually intensified. The globus pallidus and caudate nucleus were found to significantly impact the movements of the larynx (r = 0.63, p < 0.01) and tongue (r = 0.59, p = 0.01). These insights underscore the complex and interconnected roles of the basal ganglia and thalamus in the intricate processes of speech production. The lateralization and hierarchical organization of each nucleus are crucial to their contributions to these speech functions. Discussion: The present study provides a nuanced understanding of how lesions in the basal ganglia and thalamus impact various stages of speech production, thereby enhancing our understanding of the subcortical neuromechanisms underlying dysarthria. The findings could also contribute to the identification of multimodal assessment indicators, which could aid in the precise evaluation and personalized treatment of speech impairments. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
6. Dissecting the Causal Role of Early Inferior Frontal Activation in Reading.
- Author
- Tomoki Uno, Kouji Takano, and Kimihiro Nakamura
- Subjects
- TRANSCRANIAL magnetic stimulation, PARIETAL lobe, FRONTAL lobe, NEURAL pathways, ORAL reading
- Abstract
Cognitive models of reading assume that speech production occurs after visual and phonological processing of written words. This traditional view is at odds with more recent magnetoencephalography studies showing that the left posterior inferior frontal cortex (pIFC) classically associated with spoken production responds to print at 100–150 ms after word-onset, almost simultaneously with posterior brain regions for visual and phonological processing. Yet the theoretical significance of this fast neural response remains open to date. We used transcranial magnetic stimulation (TMS) to investigate how the left pIFC contributes to the early stage of reading. In Experiment 1, 23 adult participants (14 females) performed three different tasks about written words (oral reading, semantic judgment, and perceptual judgment) while single-pulse TMS was delivered to the left pIFC, fusiform gyrus or supramarginal gyrus at different time points (50–200 ms after word-onset). A robust double dissociation was found between tasks and stimulation sites—oral reading, but not other control tasks, was disrupted only when TMS was delivered to pIFC at 100 ms. This task-specific impact of pIFC stimulation was further corroborated in Experiment 2, which revealed another double dissociation between oral reading and picture naming. These results demonstrate that the left pIFC specifically and causally mediates rapid computation of speech motor codes at the earliest stage of reading and suggest that this fast sublexical neural pathway for pronunciation, although seemingly dormant, is fully functioning in literate adults. Our results further suggest that these left-hemisphere systems for reading overall act faster than known previously. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
7. Syllable as a Synchronization Mechanism That Makes Human Speech Possible.
- Author
- Xu, Yi
- Subjects
- SPEECH disorders, SPEECH, MOTOR ability, DEGREES of freedom, MOTOR learning
- Abstract
Speech is a highly skilled motor activity that shares a core problem with other motor skills: how to reduce the massive degrees of freedom (DOF) to the extent that the central nervous control and learning of complex motor movements become possible. It is hypothesized in this paper that a key solution to the DOF problem is to eliminate most of the temporal degrees of freedom by synchronizing concurrent movements, and that this is performed in speech through the syllable—a mechanism that synchronizes consonantal, vocalic, and laryngeal gestures. Under this hypothesis, syllable articulation is enabled by three basic mechanisms: target approximation, edge-synchronization, and tactile anchoring. This synchronization theory of the syllable also offers a coherent account of coarticulation, as it explicates how various coarticulation-related phenomena, including coarticulation resistance, locus, locus equation, diphone, etc., are byproducts of syllable formation. It also provides a theoretical basis for understanding how suprasegmental events such as tone, intonation, phonation, etc., are aligned to segmental events in speech. It may also have implications for understanding vocal learning, speech disorders, and motor control in general. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
8. Consonant articulation accuracy in paediatric cochlear implant recipients.
- Author
- Lighterink, Mackenzie, Bunta, Ferenc, Gifford, René H., and Camarata, Stephen
- Subjects
- CHILDREN'S language, COCHLEAR implants, ARTICULATION (Speech), ORAL communication, SPEECH
- Abstract
Hearing loss is a significant risk factor for delays in the spoken language development of children. The purpose of this study was to examine the distribution of articulation errors for English consonants among children with cochlear implants (CIs) who utilise auditory-oral communication. Speech samples from 45 prelingually deafened paediatric CI users were obtained using a single-word picture elicitation task. Samples were audio recorded and transcribed in PRAAT; overall percentage consonant correct (PCC) and individual phoneme error patterns were examined. Results showed an average PCC of 76.49% and participants exhibited lower accuracy in producing several consonants that are late acquired by typically hearing and developing children. In comparison to previous studies of children with severe-to-profound hearing loss who did not use CIs, participants in this study were more accurate in their production of most fricatives and affricates. Surprisingly, some phonemes that are acquired early in typically developing populations and that have high visual salience, specifically the nasal bilabial stop, /m/, were produced with relatively lower accuracy than would be expected based on published research. In comparison to voiced and voiceless oral bilabial stops, /b/ and /p/, /m/ was subject to substantially more place of articulation errors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
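The percentage consonants correct (PCC) measure reported above is a straightforward ratio of correctly produced target consonants to total target consonants. A minimal sketch in Python, using hypothetical per-word counts rather than the study's data:

```python
def pcc(word_scores):
    """Percentage consonants correct: correctly produced target
    consonants as a percentage of all target consonants."""
    correct = sum(c for c, _ in word_scores)
    total = sum(t for _, t in word_scores)
    return 100.0 * correct / total

# hypothetical (correct, target) consonant counts per elicited word
sample = [(3, 4), (2, 2), (1, 3)]
print(round(pcc(sample), 2))  # 6 of 9 consonants correct -> 66.67
```

A transcription-based pipeline would derive the per-word counts by aligning each child's transcribed production against the adult target; the aggregation step is the same.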
9. Unveiling the neuroplastic capacity of the bilingual brain: insights from healthy and pathological individuals.
- Author
- Quiñones, Ileana, Gisbert-Muñoz, Sandra, Amoruso, Lucía, Manso-Ortega, Lucia, Mori, Usue, Bermudez, Garazi, Robles, Santiago Gil, Pomposo, Iñigo, and Carreiras, Manuel
- Subjects
- DOMINANT language, DEFAULT mode network, LINGUISTICS, SPEECH, BRAIN tumors, BILINGUALISM
- Abstract
Research on the neural imprint of dual-language experience, crucial for understanding how the brain processes dominant and non-dominant languages, remains inconclusive. Conflicting evidence suggests either similarity or distinction in neural processing, with implications for bilingual patients with brain tumors. Preserving dual-language functions after surgery requires considering pre-diagnosis neuroplastic changes. Here, we combine univariate and multivariate fMRI methodologies to test a group of healthy Spanish-Basque bilinguals and a group of bilingual patients with gliomas affecting the language-dominant hemisphere while they overtly produced sentences in either their dominant or non-dominant language. Findings from healthy participants revealed the presence of a shared neural system for both languages, while also identifying regions with distinct language-dependent activation and lateralization patterns. Specifically, while the dominant language engaged a more left-lateralized network, speech production in the non-dominant language relied on the recruitment of a bilateral basal ganglia-thalamo-cortical circuit. Notably, based on language lateralization patterns, we were able to robustly decode (AUC: 0.80 ± 0.18) the language being used. Conversely, bilingual patients exhibited bilateral activation patterns for both languages. For the dominant language, regions such as the cerebellum, thalamus, and caudate acted in concert with the sparsely activated language-specific nodes. In the case of the non-dominant language, the recruitment of the default mode network was notably prominent. These results demonstrate the compensatory engagement of non-language-specific networks in the preservation of bilingual speech production, even in the face of pathological conditions. Overall, our findings underscore the pervasive impact of dual-language experience on brain functional (re)organization, both in health and disease. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Contrastive Alveolar/Retroflex Phonemes in Singapore Mandarin Bilinguals: Comprehension Rates for Articulations in Different Accents, and Acoustic Analysis of Productions.
- Author
- Goh, Hannah L., Woon, Fei Ting, Moisik, Scott R., and Styles, Suzy J.
- Subjects
- DIALECTS, SOUND spectrography, PROMPTS (Psychology), RESEARCH funding, PHONOLOGICAL awareness, INTELLIGIBILITY of speech, DESCRIPTIVE statistics, MULTILINGUALISM, SPEECH evaluation, PHONETICS, SPEECH perception, HUMAN voice, PHYSIOLOGICAL aspects of speech
- Abstract
The standard Beijing variety of Mandarin has a clear alveolar–retroflex contrast for phonemes featuring voiceless sibilant frication (i.e., /s/, /ʂ/, /ts/, /ʈʂ/, /tsʰ/, /ʈʂʰ/). However, some studies show that varieties in the 'outer circle', such as in Taiwan, have a reduced contrast for these speech sounds via a process known as 'deretroflexion'. The variety of Mandarin spoken in Singapore is also considered 'outer circle', as it exhibits influences from Min Nan varieties. We investigated how bilinguals of Singapore Mandarin and English perceive and produce speech tokens in minimal pairs differing only in the alveolar/retroflex place of articulation. In all, 50 participants took part in two tasks. In Task 1, participants performed a lexical identification task for minimal pairs differing only in the alveolar/retroflex place of articulation, as spoken by native speakers of two varieties: Beijing Mandarin and Singapore Mandarin. No difference in comprehension of the words was observed between the two varieties, indicating that both varieties contain sufficient acoustic information for discrimination. In Task 2, participants read aloud from the list of minimal pairs while their voices were recorded. Acoustic analysis revealed that the phonemes do indeed differ acoustically in terms of center of gravity of the frication and in an alternative measure: long-term averaged spectra. The magnitude of this difference appears to be smaller than previously reported differences for the Beijing variety. These findings show that although some deretroflexion is evident in the speech of bilinguals of the Singaporean variety of Mandarin, it does not translate to ambiguity in the speech signal. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
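The "center of gravity of the frication" used in the acoustic analysis above is the amplitude-weighted mean frequency of the spectrum. A minimal sketch of the generic measure (not the authors' exact analysis settings):

```python
import numpy as np

def spectral_cog(signal, sr):
    """Spectral center of gravity: magnitude-weighted mean
    frequency of the signal's spectrum, in Hz."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

# sanity check on a synthetic 4 kHz tone: its CoG should sit near 4 kHz
sr = 16000
t = np.arange(sr // 10) / sr          # 100 ms of samples
tone = np.sin(2 * np.pi * 4000.0 * t)
print(round(spectral_cog(tone, sr)))  # 4000
```

In practice the measure is computed over a windowed frication portion, typically after band-limiting or pre-emphasis; tools such as Praat expose it directly as a spectral moment.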
11. Effects of irrelevant unintelligible and intelligible background speech on spoken language production.
- Author
- He, Jieying, Frances, Candice, Creemers, Ava, and Brehm, Laurel
- Subjects
- Biological Psychology, Cognitive and Computational Psychology, Psychology, Clinical Research, Behavioral and Social Science, Basic Behavioral and Social Science, Irrelevant speech effect, name agreement, speech production, Cognitive Sciences, Experimental Psychology, Biological psychology, Cognitive and computational psychology
- Abstract
Earlier work has explored spoken word production during irrelevant background speech such as intelligible and unintelligible word lists. The present study compared how different types of irrelevant background speech (word lists vs. sentences) influenced spoken word production relative to a quiet control condition, and whether the influence depended on the intelligibility of the background speech. Experiment 1 presented native Dutch speakers with Chinese word lists and sentences. Experiment 2 presented a similar group with Dutch word lists and sentences. In both experiments, the lexical selection demands in speech production were manipulated by varying name agreement (high vs. low) of the to-be-named pictures. Results showed that background speech, regardless of its intelligibility, disrupted spoken word production relative to a quiet condition, but no effects of word lists versus sentences in either language were found. Moreover, the disruption by intelligible background speech compared with the quiet condition was eliminated when planning low name agreement pictures. These findings suggest that any speech, even unintelligible speech, interferes with production, which implies that the disruption of spoken word production is mainly phonological in nature. The disruption by intelligible background speech can be reduced or eliminated via top-down attentional engagement.
- Published
- 2024
12. Pitch corrections occur in natural speech and are abnormal in patients with Alzheimer's disease
- Author
- Subrahmanya, Anantajit, Ranasinghe, Kamalini G, Kothare, Hardik, Raharjo, Inez, Kim, Kwang S, Houde, John F, and Nagarajan, Srikantan S
- Subjects
- Biological Psychology, Cognitive and Computational Psychology, Psychology, Alzheimer's Disease including Alzheimer's Disease Related Dementias (AD/ADRD), Dementia, Neurodegenerative, Brain Disorders, Acquired Cognitive Impairment, Alzheimer's Disease, Neurosciences, Clinical Research, Behavioral and Social Science, Aging, Neurological, speech production, speech perception, speech control, auditory feedback, Alzheimer's disease, Cognitive Sciences, Experimental Psychology, Biological psychology, Cognitive and computational psychology
- Abstract
Past studies have explored formant centering, a corrective behavior in which formants converge over the course of an utterance toward those of a putative target vowel. In this study, we establish the existence of a similar centering phenomenon for pitch in healthy elderly controls and examine how this corrective behavior is altered in Alzheimer's disease (AD). We found that the pitch centering response in healthy elderly speakers was similar when correcting pitch errors below and above the target (median) pitch. In contrast, patients with AD showed an asymmetry, with a larger correction for pitch errors below the target phonation than above it. These findings indicate that pitch centering is a robust compensation behavior in human speech. They also point to potential impacts on pitch centering from the neurodegenerative processes that affect speech in AD.
- Published
- 2024
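Pitch centering, as described above, can be quantified as the reduction in deviation from a target (median) pitch between the start and middle of an utterance. A minimal sketch under that assumption (the study's exact analysis windows and normalization are not given here), with hypothetical F0 values:

```python
import statistics

def centering(onset_f0, mid_f0, target_f0):
    """Centering: how much the deviation from the target pitch shrinks
    from utterance onset to mid-utterance (positive = corrective
    movement toward the target)."""
    return abs(onset_f0 - target_f0) - abs(mid_f0 - target_f0)

# hypothetical utterances as (onset F0, mid-utterance F0) pairs in Hz
trials = [(190.0, 196.0), (210.0, 204.0), (200.0, 200.0)]
target = statistics.median(p for p, _ in trials)  # 200.0 Hz
corrections = [centering(p0, p1, target) for p0, p1 in trials]
print(corrections)  # [6.0, 6.0, 0.0]
```

Splitting trials into those starting below versus above the target and comparing the two groups' mean corrections would expose the kind of asymmetry the study reports in AD.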
13. Association Experiment and Its Role in Cognitive Studies
- Author
- Irina A. Fedortseva and Inna V. Tubalova
- Subjects
- association experiment, consciousness, speech production, thinking, association experiment methods, frame, cognitive speech disorders, Education (General), L7-991
- Abstract
The method of the association experiment makes it possible to study human cognition and consciousness. The article reviews the particularities of the association experiment and its significance for the cognitive sciences. It analyses the methods and principles of the association experiment as a method of studying the bonds between mental processes, cognitive functions, and speech. The authors combined standard research methods with those of psycholinguistics to describe various types of association experiments. Association experiments yield important empirical results for the analysis and interpretation of human cognitive structures, thus boosting the development of the cognitive sciences. Being interdisciplinary in nature, they provide data about the psychological, cultural, and linguistic personality, i.e., speech and mental operations. Association experiments can be of four types: free, chain, directed, and with continuous reaction. Each type has its own advantages and disadvantages. Depending on whether the stimulus is auditory or visual, they can be written-to-written, oral-to-oral, written-to-oral, or oral-to-written. The stimulus can be verbal, nonverbal, or heterosemiotic.
- Published
- 2024
- Full Text
- View/download PDF
14. Spatiotemporal Mapping of Auditory Onsets during Speech Production.
- Author
- Kurteff, Garret Lynn, Field, Alyssa M., Asghar, Saman, Tyler-Kabara, Elizabeth C., Clarke, Dave, Weiner, Howard L., Anderson, Anne E., Watrous, Andrew J., Buchanan, Robert J., Modur, Pradeep N., and Hamilton, Liberty S.
- Subjects
- SPEECH, SPEECH perception, TEMPORAL lobe, AUDITORY pathways, AUDITORY cortex
- Abstract
The human auditory cortex is organized according to the timing and spectral characteristics of speech sounds during speech perception. During listening, the posterior superior temporal gyrus is organized according to onset responses, which segment acoustic boundaries in speech, and sustained responses, which further process phonological content. When we speak, the auditory system is actively processing the sound of our own voice to detect and correct speech errors in real time. This manifests in neural recordings as suppression of auditory responses during speech production compared with perception, but whether this differentially affects the onset and sustained temporal profiles is not known. Here, we investigated this question using intracranial EEG recorded from seventeen pediatric, adolescent, and adult patients with medication-resistant epilepsy while they performed a reading/listening task. We identified onset and sustained responses to speech in the bilateral auditory cortex and observed a selective suppression of onset responses during speech production. We conclude that onset responses provide a temporal landmark during speech perception that is redundant with forward prediction during speech production and are therefore suppressed. Phonological feature tuning in these “onset suppression” electrodes remained stable between perception and production. Notably, auditory onset responses and phonological feature tuning were present in the posterior insula during both speech perception and production, suggesting an anatomically and functionally separate auditory processing zone that we believe to be involved in multisensory integration during speech perception and feedback control. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Sensorimotor adaptation to a nonuniform formant perturbation generalizes to untrained vowels.
- Author
- Parrell, Benjamin, Niziolek, Caroline A., and Chen, Taijing
- Subjects
- SPEECH, VOWELS, TRANSFER of training, GENERALIZATION, INSTRUCTIONAL systems
- Abstract
When speakers learn to change the way they produce a speech sound, how much does that learning generalize to other speech sounds? Past studies of speech sensorimotor learning have typically tested the generalization of a single transformation learned in a single context. Here, we investigate the ability of the speech motor system to generalize learning when multiple opposing sensorimotor transformations are learned in separate regions of the vowel space. We find that speakers adapt to a nonuniform "centralization" perturbation, learning to produce vowels with greater acoustic contrast, and that this adaptation generalizes to untrained vowels, which pattern like neighboring trained vowels and show increased contrast of a similar magnitude. NEW & NOTEWORTHY: We show that sensorimotor adaptation of vowels at the edges of the articulatory working space generalizes to intermediate vowels through local transfer of learning from adjacent vowels. These results extend findings on the locality of sensorimotor learning from upper limb control to speech, a complex task with an opaque and nonlinear transformation between motor actions and sensory consequences. Our results also suggest that our paradigm has potential to drive behaviorally relevant changes that improve communication effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
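"Acoustic contrast" between vowels, as used above, can be operationalized as the average distance of each vowel from the center of the F1–F2 vowel space; a centralization perturbation pulls vowels toward that center, and adaptation pushes them back out. A minimal sketch with hypothetical formant values (not the authors' exact metric):

```python
import math

def vowel_contrast(vowels):
    """Mean Euclidean distance of each vowel's (F1, F2) point from the
    vowel-space centroid -- one way to quantify acoustic contrast."""
    n = len(vowels)
    c1 = sum(f1 for f1, _ in vowels) / n
    c2 = sum(f2 for _, f2 in vowels) / n
    return sum(math.hypot(f1 - c1, f2 - c2) for f1, f2 in vowels) / n

# hypothetical corner vowels (F1, F2) in Hz, and a 20%-centralized copy
corners = [(300, 2300), (300, 900), (700, 1200), (650, 1900)]
centroid = (487.5, 1575.0)
centralized = [(0.8 * f1 + 0.2 * centroid[0], 0.8 * f2 + 0.2 * centroid[1])
               for f1, f2 in corners]
print(vowel_contrast(corners) > vowel_contrast(centralized))  # True
```

Pulling every vowel 20% of the way toward the centroid scales each distance by 0.8, so the centralized set's contrast is exactly 80% of the original; adaptation that increases contrast would move the points in the opposite direction.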
16. Inner speech as a cognitive tool—or what is the point of talking to oneself?
- Author
- Kompa, Nikola A. and Mueller, Jutta L.
- Subjects
- COGNITIVE load, SPEECH, ONTOGENY, DELIBERATION, PRAGMATICS
- Abstract
Many higher cognitive tasks, such as problem-solving and deliberation, are accompanied by the experience of an inner voice in our heads. In this paper, we develop the idea that our own inner speech supports these tasks. We approach the phenomenon of inner speech through a comparison with overt speech – a comparison suggested by a Vygotskian approach that focuses on how inner speech develops from overt speech during ontogeny. We argue for the cognitive potency of inner speech by hypothesizing that condensed inner speech may help reduce cognitive load; yet by being pragmatically expanded upon, inner speech may scaffold further cognitive accomplishments. More specifically, adopting a Vygotskian perspective, we begin by introducing his notion of internalization and examine in what way inner speech might be condensed. We then pose the question of whether inner speech is governed by the same pragmatic mechanisms that govern overt speech, and answer it in the affirmative. We advocate the idea that inner speech aids deliberation due, to some degree, to the manner in which the pragmatic principles governing overt speech are re-purposed in inner speech. We close by addressing two objections. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Representation of the ICF in research of speech intelligibility: A systematic review of literature describing deaf and hard-of-hearing children.
- Author
- Magnússon, Egill, Crowe, Kathryn, Stefánsdóttir, Harpa, Guiberson, Mark, Másdóttir, Thora, Ágústsdóttir, Inga, and Baldursdóttir, Ösp Vilberg
- Subjects
- INTELLIGIBILITY of speech, SPEECH perception, DEAF children, HEARING impaired, HEARING aids
- Abstract
Purpose: The purpose of this review was to map speech intelligibility measures used for assessing d/Deaf and hard-of-hearing children onto the International Classification of Functioning, Disability and Health. Method: This review considered perceptual speech intelligibility measures (Articulation functions b320) used to assess deaf and hard-of-hearing children aged 12 years and younger. The following electronic databases were searched: CINAHL; ERIC (ProQuest); Linguistic, Language, and Behaviour Abstracts; Scopus; Medline via PubMed; CENTRAL via Ovid; Cochrane via Ovid; and Joanna Briggs via Ovid. Data describing the article, participant, listener, study, speech intelligibility, and psychometric characteristics were extracted from the 245 included studies. Result: Speech intelligibility was measured as articulation functions (b320) through speaking (d330) in all studies. Other Body Functions frequently measured were speech discrimination (b2304; 28%) and mental functions of language (b167; 27%). Activities and Participation factors other than speaking (d330) were generally not considered. Speech intelligibility was most often measured in the context of health services (e5800; 66%). Conclusion: Previous research on the speech intelligibility of deaf and hard-of-hearing children has largely lacked a broader perspective of functioning. Clinicians and educators of deaf and hard-of-hearing children should consider Activities and Participation, Environmental, and Personal Factors when assessing speech intelligibility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Prosodic Features in Production Reflect Reading Comprehension Skill in High School Students.
- Author
- Breen, Mara, Van Dyke, Julie, Krivokapić, Jelena, and Landi, Nicole
- Abstract
Young children's prosodic fluency correlates with their reading ability, as children who are better early readers also produce more adult-like prosodic cues to syntactic and semantic structure. But less work has explored this question for high school readers, who are more proficient readers, but still exhibit wide variability in reading comprehension skill and prosodic fluency. In the current study, we investigated acoustic indices of prosodic production in high school students (N = 40; ages 13–19) exhibiting a range of reading comprehension skill. Participants read aloud a series of 12 short stories which included simple statements, wh-questions, yes–no questions, quotatives, and ambiguous and unambiguous multiclausal sentences. In addition, to assess the contribution of discourse coherence, sentences were read in either canonical or randomized order. Acoustic cues known to index prosodic phenomena—duration, fundamental frequency, and intensity—were extracted and compared across structures and participants. Results demonstrated that high school readers as a group consistently signal syntactic and semantic structure with prosody, and that reading comprehension skill, above and beyond lower-level skills, correlates with prosodic fluency, as better comprehenders produced stronger prosodic cues. However, discourse coherence did not produce consistent effects. These results strengthen the finding that prosodic fluency and reading comprehension are linked, even for older, proficient readers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Speech production and perception data collection in R: A tutorial for web-based methods using speechcollectr.
- Author
- Thomas, Abbey L. and Assmann, Peter F.
- Subjects
- SPEECH, PROGRAMMING languages, SOUND recordings, ACQUISITION of data, EMOTIONS
- Abstract
This tutorial is designed for speech scientists familiar with the R programming language who wish to construct experiment interfaces in R. We begin by discussing some of the benefits of building experiment interfaces in R—including R's existing tools for speech data analysis, platform independence, suitability for web-based testing, and the fact that R is open source. We explain basic concepts of reactive programming in R, and we apply these principles by detailing the development of two sample experiments. The first of these experiments comprises a speech production task in which participants are asked to read words with different emotions. The second sample experiment involves a speech perception task, in which participants listen to recorded speech and identify the emotion the talker expressed with forced-choice questions and confidence ratings. Throughout this tutorial, we introduce the new R package speechcollectr, which provides functions uniquely suited to web-based speech data collection. The package streamlines the code required for speech experiments by providing functions for common tasks like documenting participant consent, collecting participant demographic information, recording audio, checking the adequacy of a participant's microphone or headphones, and presenting audio stimuli. Finally, we describe some of the difficulties of remote speech data collection, along with the solutions we have incorporated into speechcollectr to meet these challenges. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. ¿Soy de Ribera o Rivera?: Sociolinguistic /b/-/v/ Variation in Rivera Spanish.
- Author
- Araujo, Vanina Machado and Ward, Owen
- Subjects
- SPANISH language, SPEECH, LANGUAGE contact, SPANIARDS, BILINGUALISM, SOCIOLINGUISTICS
- Abstract
This study investigates the impact of language contact on three generations of bilingual Spanish and Uruguayan Portuguese speakers in Rivera City, Uruguay, located on the Uruguayan–Brazilian border. Focusing on the confirmed presence of the Portuguese-like /b/ and /v/ phonemic distinction, and the lower frequency of the Montevideo Spanish-like approximantized stops in Riverense Spanish (RS), the research examines the production of v and b in 29 female Rivera Spanish bilinguals belonging to different age groups. More specifically, the aim was to see if the previously observed differential use of language-specific phonological variants could be accounted for by using precise measurements of relative intensity, duration, and voicing coupled with a distributional analysis of realizations derived from auditory coding. At the same time, their production is compared to that of 30 monolingual Montevideo Spanish (MS) speakers, who served as the control group, offering a first description of the production of v and b within this distinct Rioplatense Spanish variety. Riverense's higher overall relative intensity, duration, and voicing values support the auditory coding results, providing evidence of the expected phonological differences between the two Uruguayan Spanish varieties. In particular, an exclusive presence of fricative /v/ and less approximantization of /b/ in RS speech exposed the influence of Portuguese in Rivera bilinguals and their divergence from MS. In addition, as predicted, the findings reveal a higher presence of Portuguese-like productions of [v] and [b] in older bilinguals when compared to younger generations. This illustrates a continuum from Portuguese-like forms to Spanish-like forms, which is confirmed by both acoustic and distributional analyses. Finally, evidence of the existence of innovative forms resulting from mixing the Portuguese and Spanish phonological systems in RS is presented. This study's findings contribute to sociolinguistics and bilingualism by exposing cross-linguistic influence in a border setting with rigorous analytical methods that offer reliable results and go beyond a basic analysis based on auditory identification. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. First-language interference without bilingualism? Evidence from second language vowel production in international adoptees.
- Subjects
- *
VOWELS , *COMPARATIVE grammar , *RESEARCH funding , *PHONOLOGICAL awareness , *PSYCHOLOGICAL adaptation , *MULTILINGUALISM , *SPEECH evaluation , *MEMORY ,PHYSIOLOGICAL aspects of speech - Abstract
The ability to acquire the speech sounds of a second language has consistently been found to be constrained with increasing age of acquisition. Such constraints have been explained either through cross-linguistic influence in bilingual speakers or as the result of maturational declines in neural plasticity with age. Here, we disentangle these two explanations by investigating speech production in adults who were adopted from China to Sweden as toddlers, lost their first language, and became monolingual speakers of the second language. Although we find support for predictions based on models of bilingual language acquisition, these results cannot be explained by the bilingual status of the learners, indicating instead a long-term influence of early specialization for speech that is independent of bilingual language use. These findings are discussed in light of first-language interference and the theory of maturational constraints for language acquisition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Visual Upward/Downward Motion Elicits Fast and Fluent High-/Low-Pitched Speech Production.
- Author
-
Suzuki, Yusuke and Nagai, Masayoshi
- Subjects
- *
VISUAL perception , *SPEECH , *SEMANTICS , *MANUFACTURING processes , *VOWELS - Abstract
Participants tend to produce a higher or lower vocal pitch in response to upward or downward visual motion, suggesting a pitch–motion correspondence between the visual and speech production processes. However, previous studies were contaminated by factors such as the meaning of vocalized words and the intrinsic pitch or tongue movements associated with the vowels. To address these issues, we examined the pitch–motion correspondence between simple visual motion and pitched speech production. Participants were required to produce a high- or low-pitched meaningless single vowel [a] in response to the upward or downward direction of a visual motion stimulus. Using a single vowel, we eliminated the artifacts related to the meaning, intrinsic pitch, and tongue movements of multiple vocalized vowels. The results revealed that vocal responses were faster when the pitch corresponded to the visual motion (consistent condition) than when it did not (inconsistent condition). This result indicates that the pitch–motion correspondence in speech production does not depend on the stimulus meaning, intrinsic pitch, or tongue movement of the vocalized words. In other words, the present study suggests that the pitch–motion correspondence can be explained more parsimoniously as an association between simple sensory (visual motion) and motoric (vocal pitch) features. Additionally, acoustic analysis revealed that speech production aligned with visual motion exhibited lower stress, greater confidence, and higher vocal fluency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
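The congruency effect this abstract reports (faster vocal responses when vocal pitch corresponds to the visual motion) reduces to a difference of mean reaction times between conditions. A minimal sketch follows; the reaction times and the `congruency_effect` helper are invented for illustration, not taken from the study.

```python
from statistics import mean

# Invented vocal reaction times (ms) for one participant; the study's
# actual data are not reproduced here.
consistent_rts = [412, 398, 405, 420, 401, 415]    # pitch matches motion
inconsistent_rts = [445, 450, 433, 462, 441, 455]  # pitch mismatches motion

def congruency_effect(consistent, inconsistent):
    """Mean reaction-time advantage (ms) for consistent trials."""
    return mean(inconsistent) - mean(consistent)

effect = congruency_effect(consistent_rts, inconsistent_rts)
# effect > 0 means faster vocal responses when pitch corresponded to motion.
```

In practice such per-participant effects would then be tested against zero across the sample.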
23. Engagement of the speech motor system in challenging speech perception: Activation likelihood estimation meta‐analyses.
- Author
-
Perron, Maxime, Vuong, Veronica, Grassi, Madison W., Imran, Ashna, and Alain, Claude
- Subjects
- *
SPEECH perception , *EXECUTIVE function , *PREFRONTAL cortex , *PERCEIVED control (Psychology) , *COGNITIVE ability - Abstract
The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions influencing its engagement. This study used activation likelihood estimation in coordinate‐based meta‐analyses to investigate the neural overlap between speech production and three speech perception conditions: speech‐in‐noise, spectrally degraded speech and linguistically complex speech. Neural overlap was observed in the left frontal, insular and temporal regions. Key nodes included the left frontal operculum (FOC), left posterior lateral part of the inferior frontal gyrus (IFG), left planum temporale (PT), and left pre‐supplementary motor area (pre‐SMA). The left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, the left pre‐SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. Activation of the left PT and FOC was noted in all conditions, with the posterior FOC area overlapping in all conditions. Our meta‐analysis reveals context‐independent (FOC, PT) and context‐dependent (pre‐SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Production of English vowel duration by multilingual speakers of Namibian English: Namibian English vowel durations.
- Author
-
Haapanen, Katja, Saloranta, Antti, Peltola, Kimmo U., Tamminen, Henna, Uwu-khaeb, Lannie, and Peltola, Maija S.
- Subjects
- *
VOWELS , *MULTILINGUALISM , *NAMIBIAN literature - Abstract
The aim of this study was to examine spoken Namibian English by investigating how multilingual Namibian speakers produce vowel durations in pre-lenis and pre-fortis positions, and how those vowel durations compare to British English vowel durations in the same words. In British English and most other English varieties, vowel duration is affected by the voicing of the following consonant, so that vowels preceding phonologically voiced consonants are longer (pre-lenis lengthening) and vowels preceding phonologically voiceless consonants are shorter (pre-fortis clipping). The production data were collected using orthographic stimuli that were monosyllabic English words with voiced and voiceless final consonants after the target vowels. The data were collected from 14 multilingual Namibian English speakers. The vowel durations produced by the speakers in pre-lenis and pre-fortis position were first compared to each other and then to those produced by nine British English speakers in an earlier study. The results showed that the pre-lenis vowels were clearly longer than the pre-fortis vowels, and there were no differences between Namibian and British English vowel durations in most of the tested words. The results offer new insights into the realization of vowel duration in pre-lenis and pre-fortis positions in Namibian English. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
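The pre-lenis/pre-fortis comparison described above amounts to pooling vowel durations by the voicing of the following consonant and comparing the context means. A minimal sketch, with invented words and durations (not the study's stimuli):

```python
from statistics import mean

# Illustrative vowel durations (ms) from monosyllabic target words;
# the words and values are invented, not the study's stimuli.
durations = {
    ("bead", "pre-lenis"): [178, 182, 190],
    ("beat", "pre-fortis"): [121, 115, 126],
    ("bag", "pre-lenis"): [195, 201, 188],
    ("back", "pre-fortis"): [132, 128, 135],
}

def context_means(data):
    """Pool durations by voicing context and average them."""
    pooled = {}
    for (_word, context), values in data.items():
        pooled.setdefault(context, []).extend(values)
    return {context: mean(values) for context, values in pooled.items()}

means = context_means(durations)
# Pre-lenis lengthening predicts a ratio clearly above 1.
lengthening_ratio = means["pre-lenis"] / means["pre-fortis"]
```

A per-word comparison (as in the study) would keep the word key instead of pooling across words.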
25. Validation of Serbian version of the LittlEARS® Early Speech Production Questionnaire for the assessment of early language development in typically developing children.
- Author
-
Nikolić, Mina, Zeljković, Sanja Ostojić, and Ivanović, Maja
- Subjects
- *
RESEARCH funding , *CRONBACH'S alpha , *DATA analysis , *RESEARCH methodology evaluation , *QUESTIONNAIRES , *CULTURE , *AUDIOMETRY , *DESCRIPTIVE statistics , *MANN Whitney U Test , *PSYCHOMETRICS , *RESEARCH methodology , *STATISTICS , *SPEECH perception , *CONFIDENCE intervals , *LANGUAGE acquisition , *CHILDREN ,RESEARCH evaluation - Abstract
Objective: The LittlEARS® Early Speech Production Questionnaire (LEESPQ) was developed to provide professionals with valuable information about children's earliest language development and has been successfully validated in several languages. This study aimed to validate the Serbian version of the LEESPQ in typically developing children and compare the results with validation studies in other languages. Methods: The English version of the LEESPQ was back‐translated into Serbian. Parents completed the questionnaire in paper or electronic form either during the visit to the paediatric clinic or through personal contact. A total of 206 completed questionnaires were collected. Standardized expected values were calculated using a second‐order polynomial model for children up to 18 months of age to create a norm curve for the Serbian language. The results were then used to determine confidence intervals, with the lower limit being the critical limit for typical speech‐language development. Finally, the results were compared with German and Canadian English developmental norms. Results: The Serbian LEESPQ version showed high homogeneity (r = .622) and internal consistency (α = .882), indicating that it almost exclusively measures speech production ability. No significant difference in total score was found between male and female infants (U = 4429.500, p = .090), so it can be considered a gender‐independent questionnaire. The results of the comparison between Serbian and German (U = 645.500, p = .673) and Serbian and English norm curves (U = 652.000, p = .725) show that the LEESPQ can be applied to different population groups, regardless of linguistic, cultural or sociological differences. Conclusion: The LEESPQ is a valid, age‐dependent and gender‐independent questionnaire suitable for assessing early speech development in children aged from birth to 18 months. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
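The norm-curve construction described in this abstract (a second-order polynomial fitted to total scores as a function of age) can be done with ordinary least squares. Below is a self-contained sketch; the `fit_quadratic` helper and all score data are invented for illustration and are not the Serbian norms.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a0 + a1*x + a2*x**2 via the normal
    equations, solved with Gaussian elimination (no libraries needed)."""
    S = [sum(x ** k for x in xs) for k in range(5)]              # moment sums
    T = [sum((x ** k) * y for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2]],
         [S[1], S[2], S[3]],
         [S[2], S[3], S[4]]]
    b = list(T)
    for i in range(3):                                 # forward elimination
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [arc - f * aic for arc, aic in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                # back substitution
        coef[i] = (b[i] - sum(A[i][c] * coef[c] for c in range(i + 1, 3))) / A[i][i]
    return coef

# Demo: ages 0-18 months, scores following an invented quadratic trend.
ages = list(range(19))
scores = [2 + 1.5 * a + 0.1 * a ** 2 for a in ages]
a0, a1, a2 = fit_quadratic(ages, scores)
```

The fitted curve would then be combined with residual spread to derive the confidence interval whose lower bound serves as the critical limit.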
26. Decoding Single and Paired Phonemes Using 7T Functional MRI.
- Author
-
Vitória, Maria Araújo, Fernandes, Francisco Guerreiro, van den Boom, Max, Ramsey, Nick, and Raemaekers, Mathijs
- Abstract
Several studies have shown that mouth movements related to the pronunciation of individual phonemes are represented in the sensorimotor cortex. This would theoretically allow for brain computer interfaces that are capable of decoding continuous speech by training classifiers based on the activity in the sensorimotor cortex related to the production of individual phonemes. To address this, we investigated the decodability of trials with individual and paired phonemes (pronounced consecutively with one second interval) using activity in the sensorimotor cortex. Fifteen participants pronounced 3 different phonemes and 3 combinations of two of the same phonemes in a 7T functional MRI experiment. We confirmed that support vector machine (SVM) classification of single and paired phonemes was possible. Importantly, by combining classifiers trained on single phonemes, we were able to classify paired phonemes with an accuracy of 53% (33% chance level), demonstrating that activity of isolated phonemes is present and distinguishable in combined phonemes. An SVM searchlight analysis showed that the phoneme representations are widely distributed in the ventral sensorimotor cortex. These findings provide insights into the neural representations of single and paired phonemes. Furthermore, it supports the notion that speech BCI may be feasible based on machine learning algorithms trained on individual phonemes using intracranial electrode grids. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
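The key move in this abstract (decoding paired phonemes by applying classifiers trained on single phonemes to each element in turn) can be sketched with a toy classifier. All activity patterns below are invented, and a nearest-centroid rule stands in for the study's SVM:

```python
from statistics import mean

# Toy "sensorimotor activity patterns" (3 features) for three phonemes;
# all numbers are invented, and nearest-centroid classification is a
# stand-in for the study's support vector machine.
train = {
    "p": [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0]],
    "t": [[0.1, 1.0, 0.2], [0.2, 0.9, 0.1]],
    "k": [[0.2, 0.1, 1.0], [0.1, 0.2, 0.9]],
}

centroids = {ph: [mean(col) for col in zip(*rows)] for ph, rows in train.items()}

def classify(pattern):
    """Assign a pattern to the phoneme with the nearest centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda ph: sq_dist(centroids[ph], pattern))

# A paired-phoneme trial decoded by applying the single-phoneme
# classifier to each phoneme's pattern in turn.
pair_patterns = [[0.95, 0.25, 0.05], [0.15, 0.15, 0.95]]
decoded_pair = tuple(classify(p) for p in pair_patterns)
```

With three phonemes, guessing each element gives the 33% per-phoneme chance level the abstract cites.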
27. Mapping subcortical brain lesions, behavioral and acoustic analysis for early assessment of subacute stroke patients with dysarthria
- Author
-
Juan Liu, Rukiye Ruzi, Chuyao Jian, Qiuyu Wang, Shuzhi Zhao, Manwa L. Ng, Shaofeng Zhao, Lan Wang, and Nan Yan
- Subjects
subacute stroke ,dysarthria ,speech production ,linguistic processing ,articulation ,basal ganglia ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Introduction: Dysarthria is a motor speech disorder frequently associated with subcortical damage. However, the precise roles of the subcortical nuclei, particularly the basal ganglia and thalamus, in the speech production process remain poorly understood. Methods: The present study aimed to better understand their roles by mapping neuroimaging, behavioral, and speech data obtained from subacute stroke patients with subcortical lesions. Multivariate lesion-symptom mapping and voxel-based morphometry methods were employed to correlate lesions in the basal ganglia and thalamus with speech production, with emphases on linguistic processing and articulation. Results: The present findings revealed that the left thalamus and putamen are significantly correlated with concept preparation (r = 0.64, p < 0.01) and word retrieval (r = 0.56, p < 0.01). As the difficulty of the behavioral tasks increased, the influence of cognitive factors on early linguistic processing gradually intensified. The globus pallidus and caudate nucleus were found to significantly impact the movements of the larynx (r = 0.63, p < 0.01) and tongue (r = 0.59, p = 0.01). These insights underscore the complex and interconnected roles of the basal ganglia and thalamus in the intricate processes of speech production. The lateralization and hierarchical organization of each nucleus are crucial to their contributions to these speech functions. Discussion: The present study provides a nuanced understanding of how lesions in the basal ganglia and thalamus impact various stages of speech production, thereby enhancing our understanding of the subcortical neuromechanisms underlying dysarthria. The findings could also contribute to the identification of multimodal assessment indicators, which could aid in the precise evaluation and personalized treatment of speech impairments.
- Published
- 2025
- Full Text
- View/download PDF
28. Frequency-specific cortico-subcortical interaction in continuous speaking and listening
- Author
-
Omid Abbasi, Nadine Steingräber, Nikos Chalas, Daniel S Kluger, and Joachim Gross
- Subjects
MEG ,speech production ,speech perception ,cerebellum ,brain connectivity ,Medicine ,Science ,Biology (General) ,QH301-705.5 - Abstract
Speech production and perception involve complex neural dynamics in the human brain. Using magnetoencephalography, our study explores the interaction between cortico-cortical and cortico-subcortical connectivities during these processes. Our connectivity findings during speaking revealed a significant connection from the right cerebellum to the left temporal areas in low frequencies, which displayed an opposite trend in high frequencies. Notably, high-frequency connectivity was absent during the listening condition. These findings underscore the vital roles of cortico-cortical and cortico-subcortical connections within the speech production and perception network. These results enhance our understanding of the complex dynamics of brain connectivity during speech processes, emphasizing the distinct frequency-based interactions between various brain regions.
- Published
- 2024
- Full Text
- View/download PDF
29. Formation of auditory and speech competences in learning English based on neural network technologies: psycholinguistic aspect
- Author
-
Leila Mirzoyeva, Zhanna Makhanova, Mona Kamal Ibrahim, and Zoya Snezhko
- Subjects
Artificial intelligence ,mobile learning ,natural language processing ,neurolinguistic programming ,speech production ,Introductory Linguistics ,Education (General) ,L7-991 - Abstract
The objective of this research is to investigate the effectiveness of integrating natural language processing (NLP) technologies into an English language learning program aimed at enhancing auditory and speaking competencies. The methodology is grounded in developing and testing the effectiveness of the neural-network-based apps Speechace and Rosetta Stone (built on advanced speech recognition and language modeling) for language learning within the educational process. A mixed-method approach combining statistical data analysis with qualitative surveys and testing was employed for data analysis. The results demonstrated the effectiveness of the course, assessed through pre- and post-test evaluations of students’ auditory and speaking skills, revealing significant improvements in both groups. The rate of score increase for Group 1 was calculated to be approximately 5.53%. In contrast, the rate of score increase for Group 2 was noticeably higher, at approximately 9.05%. The range of errors between the analytical value obtained using the traditional NLP algorithm and the actual value ranged from 0.008 to 0.012. This indicates that the algorithm’s predictions correspond to the exact values of cognitive processing factors with minimal error. These results hold practical implications, as program developers and educators can utilize them to substantiate their pedagogical practices and develop a more effective and engaging language learning experience. The novelty of the study lies in the development of an English language learning model based on NLP technologies; the selected applications had not previously been analyzed in the context of auditory and speech competencies.
- Published
- 2024
- Full Text
- View/download PDF
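The reported rates of score increase are plain percentage changes from pre-test to post-test means. A sketch follows; the pre/post values are invented and chosen only to land near the reported ≈5.53% and ≈9.05%, since the groups' actual scores are not given in the abstract.

```python
def score_increase_rate(pre_mean, post_mean):
    """Percentage increase from pre-test to post-test mean scores."""
    return (post_mean - pre_mean) / pre_mean * 100

# Invented pre/post means; the abstract reports only the resulting rates.
group1_rate = score_increase_rate(65.0, 68.6)  # close to the reported 5.53%
group2_rate = score_increase_rate(65.0, 70.9)  # close to the reported 9.05%
```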
30. Investigating droplet emission during speech interaction: F. Carbone et al.
- Author
-
Carbone, Francesca, Bouchet, Gilles, Ghio, Alain, Legou, Thierry, André, Carine, Lalain, Muriel, Petrone, Caterina, and Giovanni, Antoine
- Published
- 2024
- Full Text
- View/download PDF
31. No differential effects of subthalamic nucleus vs. globus pallidus deep brain stimulation in Parkinson’s disease: Speech acoustic and perceptual findings
- Author
-
Frits van Brenk, Kaila L. Stipancic, Andrea H. Rohl, Daniel M. Corcos, Kris Tjaden, and Jeremy D.W. Greenlee
- Subjects
Speech production ,Dysarthria ,Surgery ,Intelligibility ,Speech severity UPDRS speech ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Background: Deep Brain Stimulation (DBS) in the Subthalamic Nucleus (STN) or the Globus Pallidus Interna (GPI) is well-established as a surgical technique for improving global motor function in patients with idiopathic Parkinson’s Disease (PD). Previous research has indicated speech deterioration in more than 30% of patients after STN-DBS implantation, whilst speech outcomes following GPI-DBS have received far less attention. Research comparing speech outcomes for patients with PD receiving STN-DBS and GPI-DBS can inform pre-surgical counseling and assist with clinician and patient decision-making when considering the neural targets selected for DBS-implantation. The aims of this pilot study were (1) to compare perceptual and acoustic speech outcomes for a group of patients with PD receiving bilateral DBS in the STN or the GPI with DBS stimulation both ON and OFF, and (2) examine associations between acoustic and perceptual speech measures and clinical characteristics. Methods: Ten individuals with PD receiving STN-DBS and eight individuals receiving GPI-DBS were audio-recorded reading a passage. Three listeners blinded to neural target and stimulation condition provided perceptual judgments of intelligibility and overall speech severity. Speech acoustic measures were obtained from the recordings. Acoustic and perceptual measures and clinical characteristics were compared for the two neural targets and stimulation conditions. Results: Intelligibility and speech severity were not significantly different across neural target or stimulation conditions. Generally, acoustic measures were also not statistically different for the two neural targets or stimulation conditions. Acoustic measures reflecting more varied speech prosody were associated with improved intelligibility and lessened severity. Convergent correlations were found between UPDRS-III speech scores and perceptual measures of intelligibility and severity. 
Conclusion: This study reports a systematic comparison of perceptual and acoustic speech outcomes following STN-DBS and GPI-DBS. Statistically significant differences in acoustic measures for the two neural targets were small in magnitude and did not yield group differences in perceptual measures. The absence of robust differences in speech outcomes for the two neural targets has implications for pre-surgical counseling. Results provide preliminary support for reliance on considerations other than speech when selecting the target for DBS in patients with PD.
- Published
- 2024
- Full Text
- View/download PDF
32. Acoustic Analyses of L1 and L2 Vowel Interactions in Mandarin–Cantonese Late Bilinguals
- Author
-
Yike Yang
- Subjects
acoustic analysis ,speech production ,vowel ,bilingualism ,Mandarin ,Cantonese ,Physics ,QC1-999 - Abstract
While the focus of bilingual research is frequently on simultaneous or early bilingualism, the interactions between late bilinguals’ first language (L1) and second language (L2) have rarely been studied previously. To fill this research gap, the aim of the current study was to investigate the production of vowels in the L1 Mandarin and L2 Cantonese of Mandarin–Cantonese late bilinguals in Hong Kong. A production experiment was conducted with 22 Mandarin–Cantonese bilinguals, as well as with 20 native Mandarin speakers and 21 native Cantonese speakers. Acoustic analyses, including the vowels' formants and the Euclidean distances between them, were performed. Both vowel category assimilation and dissimilation were noted in the Mandarin–Cantonese bilinguals’ L1 and L2 vowel systems, suggesting interactions between the bilinguals’ L1 and L2 vowel categories. In general, the findings are in line with the hypotheses of the Speech Learning Model and its revised version, which state that L1–L2 phonetic interactions are inevitable, as there is a common phonetic space for storing the L1 and L2 phonetic categories, and that learners always have the ability to adapt their phonetic space. Future studies should refine the data elicitation method, increase the sample size and include more language pairs to better understand L1 and L2 phonetic interactions.
- Published
- 2024
- Full Text
- View/download PDF
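The Euclidean-distance measure used above quantifies how far apart two vowel categories sit in formant space. A minimal sketch; the F1/F2 values are invented, not the study's data, and analyses of this kind often normalize formants (e.g. to Bark) before computing distances:

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Invented F1/F2 values in Hz for illustration only.
l1_mandarin_i = (290, 2250)
l2_cantonese_i = (310, 2190)

def vowel_distance(v1, v2):
    """Euclidean distance between two vowels in F1-F2 space (Hz)."""
    return dist(v1, v2)

separation = vowel_distance(l1_mandarin_i, l2_cantonese_i)
# Assimilation would show up as a shrinking separation between the
# bilinguals' L1 and L2 categories; dissimilation as a growing one.
```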
33. Dataset of speech produced with delayed auditory feedback (Open Science Framework)
- Author
-
Matthias Heyne, Monique C. Tardif, Alexander Ocampo, Ashley P. Petitjean, Emily J. Hacker, Caroline N. Fox, Megan A. Liu, Madeline Fontana, Vincent Pennetti, and Jason W. Bohland
- Subjects
Speech production ,Speech motor control ,Acoustic ,Phonetics ,Speech errors ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Science (General) ,Q1-390 - Abstract
Speakers use auditory feedback to monitor their speech output and detect any deviations from their expectations. It has long been known that when auditory feedback is artificially delayed by a fraction of a second, speech may be severely disrupted [1–3]. Despite the long history of using delayed auditory feedback (DAF) in experimental research on speech motor control, its effects remain relatively poorly understood. To our knowledge, there are currently no publicly available research datasets containing recordings of speech produced with DAF. Here we describe a large dataset of speech produced with DAF using modern experimental methods with systematic controls and varied speaking materials, including phonotactically legal, nonword syllable sequences and American English sentences. Auditory feedback latencies were tightly controlled and included a zero / minimal delay (∼12 ms), 150 ms, 200 ms, and 250 ms. The dataset includes simultaneous audio recordings from the microphone (production) and headphone (feedback) channels. It also includes recordings and annotations of reading passages and multiple other demographic and acoustic measures that serve as covariates of interest from each participant. The complete dataset, which is made available in two segments (one fully open access and one password restricted), includes speech audio recordings from 55 participants, 42 of whom completed a second session with similar testing materials. This dataset is valuable for researchers interested in theoretical aspects of speech sensory-motor control and for researchers interested in developing speech analysis tools.
- Published
- 2025
- Full Text
- View/download PDF
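The feedback latencies described above (∼12, 150, 200, and 250 ms) amount to shifting the microphone signal later by a fixed number of samples before it reaches the headphones. A simplified offline sketch, assuming a plain list of samples (the real system applies the delay in real time):

```python
def delay_feedback(samples, delay_ms, sample_rate=44100):
    """Offline stand-in for delayed auditory feedback: shift the input
    later by delay_ms, pad the gap with silence, and keep the output
    the same length as the input."""
    shift = round(delay_ms / 1000 * sample_rate)
    padded = [0.0] * shift + list(samples)
    return padded[:len(samples)]

# Toy signal at a toy sample rate: a 2 ms delay at 1000 Hz is 2 samples.
delayed = delay_feedback([1, 2, 3, 4], delay_ms=2, sample_rate=1000)
```

At 44.1 kHz, a 150 ms delay corresponds to 6,615 samples of buffering.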
34. Spared speech fluency is associated with increased functional connectivity in the speech production network in semantic variant primary progressive aphasia
- Author
-
Montembeault, Maxime, Miller, Zachary A, Geraudie, Amandine, Pressman, Peter, Slegers, Antoine, Millanski, Carly, Licata, Abigail, Ratnasiri, Buddhika, Mandelli, Maria Luisa, Henry, Maya, Cobigo, Yann, Rosen, Howard J, Miller, Bruce L, Brambati, Simona M, Gorno-Tempini, Maria Luisa, and Battistella, Giovanni
- Subjects
Biological Psychology ,Psychology ,Brain Disorders ,Acquired Cognitive Impairment ,Alzheimer's Disease Related Dementias (ADRD) ,Alzheimer's Disease including Alzheimer's Disease Related Dementias (AD/ADRD) ,Clinical Research ,Rare Diseases ,Neurosciences ,Behavioral and Social Science ,Neurodegenerative ,Aging ,Dementia ,Rehabilitation ,Aphasia ,Frontotemporal Dementia (FTD) ,2.1 Biological and endogenous factors ,Neurological ,semantic variant of primary progressive aphasia ,speech production ,semantics ,functional connectivity ,compensation mechanism ,Clinical sciences ,Biological psychology - Abstract
Semantic variant primary progressive aphasia is a clinical syndrome characterized by marked semantic deficits, anterior temporal lobe atrophy and reduced connectivity within a distributed set of regions belonging to the functional network associated with semantic processing. However, to fully depict the clinical signature of semantic variant primary progressive aphasia, it is necessary to also characterize preserved neural networks and linguistic abilities, such as those subserving speech production. In this case-control observational study, we employed whole-brain seed-based connectivity on task-free MRI data of 32 semantic variant primary progressive aphasia patients and 46 healthy controls to investigate the functional connectivity of the speech production network and its relationship with the underlying grey matter. We investigated brain-behaviour correlations with speech fluency measures collected through clinical tests (verbal agility) and connected speech (speech rate and articulation rate). As a control network, we also investigated functional connectivity within the affected semantic network. Patients presented with increased connectivity in the speech production network between left inferior frontal and supramarginal regions, independent of underlying grey matter volume. In semantic variant primary progressive aphasia patients, preserved (verbal agility) and increased (articulation rate) speech fluency measures correlated with increased connectivity between inferior frontal and supramarginal regions. As expected, patients demonstrated decreased functional connectivity in the semantic network (dependent on the underlying grey matter atrophy) associated with the average age of acquisition of nouns during connected speech. Collectively, these results provide a compelling model for studying compensation mechanisms in response to disease that might inform the design of future rehabilitation strategies in semantic variant primary progressive aphasia.
- Published
- 2023
35. The characteristics and reproducibility of motor speech functional neuroimaging in healthy controls.
- Author
-
Kenyon, Katherine H., Boonstra, Frederique, Noffs, Gustavo, Morgan, Angela T., Vogel, Adam P., Kolbe, Scott, and Van Der Walt, Anneke
- Subjects
FUNCTIONAL magnetic resonance imaging ,SPEECH ,NEUROLINGUISTICS ,CEREBELLUM ,BRAIN imaging ,SENSORIMOTOR cortex - Abstract
Introduction: Functional magnetic resonance imaging (fMRI) can improve our understanding of neural processes subserving motor speech function. Yet its reproducibility remains unclear. This study aimed to evaluate the reproducibility of fMRI using a word repetition task across two time points. Methods: Imaging data from 14 healthy controls were analysed using a multilevel general linear model. Results: Significant activation was observed during the task in the right hemispheric cerebellar lobules IV-V, right putamen, and bilateral sensorimotor cortices. Activation between timepoints was found to be moderately reproducible across time in the cerebellum but not in other brain regions. Discussion: Preliminary findings highlight the involvement of the cerebellum and connected cerebral regions during a motor speech task. More work is needed to determine the degree of reproducibility of speech fMRI before this could be used as a reliable marker of changes in brain activity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Producing a smaller sound system: Acoustics and articulation of the subset scenario in Gaelic–English bilinguals.
- Subjects
- *
ENGLISH language , *ULTRASONIC imaging , *SPEECH , *SOUND systems , *ACOUSTICS - Abstract
When a bilingual speaker has a larger linguistic sub-system in their L1 than their L2, how are L1 categories mapped to the smaller set of L2 categories? This article investigates this "subset scenario" (Escudero, 2005) through an analysis of laterals in highly proficient bilinguals (Scottish Gaelic L1, English L2). Gaelic has three lateral phonemes and English has one. We examine acoustics and articulation (using ultrasound tongue imaging) of lateral production in speakers' two languages. Our results suggest that speakers do not copy a relevant Gaelic lateral into their English, instead maintaining language-specific strategies, with speakers also producing English laterals with positional allophony. These results show that speakers develop a separate production strategy for their L2. Our results advance models such as the L2LP which has mainly considered perception data, and also contribute articulatory data to this area of study. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Development of a Screen Keyboard System with Radially‐Arranged Keys and its Effectiveness for Typing Including Eye‐Gaze Input.
- Author
-
Ogata, Kohichi and Nozute, Shigeyoshi
- Subjects
- *
GAZE , *MICE (Computers) , *KEYBOARDING , *ARTICULATION (Speech) , *COMPUTER performance , *EYE , *EUCLIDEAN distance - Abstract
This study aims to develop an effective screen‐keyboard system from a new perspective of key arrangements. In the proposed screen keyboard system, keys are centralized by arranging them radially, with alphabetical keys arranged based on the articulation for speech production. A gesture input method and click input method are implemented considering the mora‐based Japanese sound system with syllables containing a consonant‐vowel (CV) structure. In a preliminary experiment, the cumulative pixel distances required to press keys for the input sequences were evaluated as a simulation to confirm the basic performance of the keyboard system. The Euclidean pixel distances between the keys on the keyboard images were accumulated and compared between the proposed keyboard system and a typical QWERTY keyboard. Typing experiments involving participants were performed to confirm the effectiveness of the proposed keyboard system. The experimental results reveal that the proposed screen keyboard system enables users to achieve efficient typing performance with a computer mouse and with eye-gaze input after short-term training. © 2024 Institute of Electrical Engineers of Japan and Wiley Periodicals LLC. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
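The cumulative-pixel-distance simulation described above sums the Euclidean distances between consecutive keys in an input sequence. A minimal sketch; the key coordinates are hypothetical and reproduce only the comparison method, not either keyboard's real geometry:

```python
from math import dist

# Hypothetical key-centre pixel coordinates (invented for illustration).
radial_layout = {"a": (400, 300), "k": (430, 280), "i": (415, 330)}
qwerty_layout = {"a": (60, 220), "k": (520, 220), "i": (500, 160)}

def cumulative_distance(layout, key_sequence):
    """Total pixel distance travelled between consecutive keys."""
    points = [layout[key] for key in key_sequence]
    return sum(dist(p, q) for p, q in zip(points, points[1:]))

radial_total = cumulative_distance(radial_layout, "aki")
qwerty_total = cumulative_distance(qwerty_layout, "aki")
# A centralized radial arrangement shortens the travel for this sequence.
```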
38. Enhancing speech perception in noise through articulation.
- Author
-
Perron, Maxime, Liu, Qiying, Tremblay, Pascale, and Alain, Claude
- Subjects
- *
SPEECH perception , *OLDER people , *SPEECH , *NOISE , *LEGAL judgments - Abstract
Considerable debate exists about the interplay between auditory and motor speech systems. Some argue for common neural mechanisms, whereas others assert that there are few shared resources. In four experiments, we tested the hypothesis that priming the speech motor system by repeating syllable pairs aloud improves subsequent syllable discrimination in noise compared with a priming discrimination task involving same–different judgments via button presses. Our results consistently showed that participants who engaged in syllable repetition performed better in syllable discrimination in noise than those who engaged in the priming discrimination task. This gain in accuracy was observed for primed and new syllable pairs, highlighting increased sensitivity to phonological details. The benefits were comparable whether the priming tasks involved auditory or visual presentation. Inserting a 1‐h delay between the priming tasks and the syllable‐in‐noise task, the benefits persisted but were confined to primed syllable pairs. Finally, we demonstrated the effectiveness of this approach in older adults. Our findings substantiate the existence of a speech production–perception relationship. They also have clinical relevance as they raise the possibility of production‐based interventions to improve speech perception ability. This would be particularly relevant for older adults who often encounter difficulties in perceiving speech in noise. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. How Much Voicing in Voiced Geminates? The Laryngeal Voicing Profile of Polish Double Stops.
- Author
-
ROJCZYK, Arkadiusz and PORZUCZEK, Andrzej
- Subjects
- *
NATIVE language , *SPEECH , *CONSONANTS , *HUMAN voice , *LANGUAGE & languages - Abstract
Geminates (such as the double /k/ in Polish lekki “light”) form a group of consonants that are mainly characterized by longer durations than the corresponding singletons. Most of the research has concentrated on durational and spectral properties of geminates in contrast to singletons. Much less attention has been paid to the realization of the voicing contrast in geminates and whether it is differently implemented than in singletons. In the current study, we contribute to this research with the data from Polish stop geminates. To this end, a total of 49 native speakers of Polish produced all stop geminates and corresponding singletons in wordforms of the same phonological make-up. The measurements included closure duration, voicing ratio, duration, and mean intensity of the release burst. The results showed that the voicing ratio was 0.69, classifying Polish stop geminates as mildly devoiced. There was a significant speaker-dependent variability in that some speakers devoiced all geminates, while others either partially devoiced or never devoiced. The analysis of interactions between geminates and singletons revealed that geminates cancelled voicing cues observed in singletons such as longer durations and lower intensity of the release burst. We discuss the current results in terms of voicing implementation in Polish and in relation to other geminating languages. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
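For readers curious about the voicing-ratio measure reported in the abstract above (the voiced fraction of the stop closure, 0.69 on average for Polish geminates), a minimal sketch follows. The per-frame voicing decisions here are invented for illustration; in practice they would come from a periodicity/pitch detector applied to the closure interval (e.g., in Praat).

```python
def voicing_ratio(voiced_flags):
    """Fraction of closure frames judged voiced.

    A ratio of 1.0 means a fully voiced closure; values well below 1.0
    indicate partial devoicing, as in the mildly devoiced geminates
    discussed above.
    """
    if not voiced_flags:
        raise ValueError("empty closure interval")
    return sum(voiced_flags) / len(voiced_flags)

# Hypothetical per-frame voicing decisions across one geminate closure,
# with voicing dying out late in the closure:
closure = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
print(voicing_ratio(closure))  # 0.7
```

Averaging this ratio across tokens and speakers yields the kind of per-speaker variability profile the study reports (some speakers near 0, others near 1).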
40. Acoustic Analysis of Fricatives /s/ and /ʃ/ and Affricate /ʧ/ in Persian-Speaking Cochlear-Implanted Children and Normal-Hearing Peers.
- Author
-
Roohparvar, Rahimeh, Karimabadi, Mahin, Ghahari, Shima, and Mirzaee, Mogaddameh
- Subjects
- *
COCHLEAR implants , *LANGUAGE & languages , *SPEECH , *HEALTH status indicators , *STATISTICAL sampling , *DESCRIPTIVE statistics , *MANN Whitney U Test , *LINGUISTICS , *SCHOOL children , *HEARING , *PHONETICS , *PSYCHOLOGY of parents , *FRICTION , *HUMAN voice , *DATA analysis software , *WAVE analysis , *NONPARAMETRIC statistics , *CHILDREN ,PHYSIOLOGICAL aspects of speech - Abstract
Background and Aim: Hearing-impaired individuals have difficulty comprehending and producing speech sounds. Cochlear implantation is used to augment hearing. The present study aims to compare the production of the fricatives /s/ and /ʃ/ and the affricate /ʧ/ by Persian-speaking Cochlear-Implanted (CI) and Normal-Hearing (NH) children. Methods: Fifteen Persian-speaking NH children and 15 Persian-speaking CI children, matched for age, gender, and general health conditions, were included in the study. The stimuli included two voiceless Persian fricatives /s/ and /ʃ/ and one voiceless Persian affricate /ʧ/ along with the open front vowel /æ/ in three Consonant-Vowel (CV), Consonant-Vowel-Consonant (CVC), and Vowel-Consonant (VC) contexts (/sæ/, /æsæ/, /æs/, /ʃæ/, /æʃæ/, /æʃ/, /ʧæ/, /æʧæ/, /æʧ/). After recording all utterances, Praat software was used to measure the friction duration, rise time, and spectral peak of the consonants. Results: The CI children could not distinguish between /ʃ/ and /ʧ/ and produced the affricate /ʧ/ as an allophone of /ʃ/ (p=0.01). Moreover, distinguishing between the two fricatives /s/ and /ʃ/ was difficult for both groups. While the NH children treated these two sounds slightly differently, the CI group produced the fricative /s/ as an allophone of /ʃ/ (p=0.02). The rise time of /ʃ/ was longer in the NH children, except for /ʧæ/, where the CI children had a longer rise time. Conclusion: The speech of CI children differs from that of their NH peers in the production of /s/, /ʃ/, and /ʧ/. The results can help speech therapists, clinical linguists, and application designers focus on speech sounds that are challenging for CI children to produce. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Stop Consonant Production in Children with Cleft Palate After Palatoplasty.
- Author
-
Hardin-Jones, Mary, Chapman, Kathy L., Heimbaugh, Libby, Dahill, Ann E., Cummings, Caitlin, Baylis, Adriane, and Hatch Pollard, Sarah
- Subjects
SPEECH ,RESEARCH funding ,CONSONANTS ,DESCRIPTIVE statistics ,PHYSIOLOGICAL aspects of speech ,LONGITUDINAL method ,SOUND recordings ,RESEARCH ,CLEFT lip ,COMPARATIVE studies ,CLEFT palate ,LANGUAGE acquisition - Abstract
Objective: The current study examined stop consonant production in children with cleft lip and/or palate (CP ± L) 2-6 months following palatal surgery. Design: Prospective comparative study. Setting: Multisite institutional. Participants: Participants included 113 children with repaired CP ± L (mean age = 16 months) who were participating in the multicenter CORNET study. Procedures: Parents of participants were asked to record approximately two hours of their child's vocalizations/words at home using a Language ENvironmental Analysis (LENA™) recorder. Four ten-minute audio-recorded samples of vocalizations were extracted from the original recording for each participant and analyzed for the presence of oral stop consonants. A minimum of 100 vocalizations was required for analysis. Results: Preliminary findings indicate that at least one oral stop was evident in the consonant inventory for 95 of the 113 children (84%) at the time of their post-surgery 16-month recording, and 80 of these children (71%) were producing two or more different stops. Approximately 50% of the children (57/113) produced the three voiced stops, and eight of the children (7%) were producing all six stop consonants. Conclusions: The findings of this study suggest that the majority of children with repaired CP ± L from English-speaking homes are producing oral stops within six months following palatal surgery. Similar to same-age children without CP ± L, voiced stops were more frequently evident in the children's inventories than voiceless stops. In contrast to findings of previous reports suggesting place of articulation differences, a somewhat comparable percentage of children in this study produced voiced bilabial, alveolar, and velar stops. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. A NOTE ON THE LANGUAGE COMPONENTS OF APHASIA.
- Author
-
Chompalov, Kostadin and Georgieva, Dobrinka
- Subjects
SPEECH therapists ,COMMUNICATIVE disorders ,SPEECH ,APHASIA ,LANGUAGE & languages - Abstract
This study investigated the specific language deficits observed in persons with aphasia through a theoretical analysis of some of the classic and more recent literature in the field. The study used a systematic search and subsequent analysis of publications related to the topic retrieved from well-known electronic databases: PubMed, PsycINFO and Web of Science. The results suggest a nuanced interaction between language components and speech production in persons with aphasia. The theoretical analysis contributes to the understanding of aphasia by highlighting the sophisticated interplay between language processes and speech in individuals affected by this neurologically based communication disorder. It draws the attention of the speech and language pathologist to the need for evidence-based diagnostic and therapeutic interventions tailored to the language profiles of persons with aphasia. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Phonological neighbourhood effects in Italian speech production: Evidence from healthy and neurologically impaired populations.
- Author
-
Bellin, Irene, Iob, Erica, De Pellegrin, Serena, and Navarrete, Eduardo
- Subjects
- *
SPEECH , *NEIGHBORHOODS , *ITALIAN language , *LEXICAL access - Abstract
In two speech production studies conducted in Italian, we investigated the impact of phonological neighbourhood properties such as the neighbourhood density and the mean frequency of the neighbours on speech processing. Two populations of healthy (Study 1) and neurologically impaired (Study 2) individuals were tested. We employed multi-regression methods to analyse naming latencies in Study 1 and accuracy rates in Study 2 while controlling for various psycholinguistic predictors. In Study 1, pictures with words from high-density neighbourhoods were named faster than those from low-density neighbourhoods. Additionally, words with high-frequency neighbours were named faster in Study 1 and yielded higher accuracy rates in Study 2. The results suggest facilitatory effects of both the phonological neighbourhood density and frequency neighbourhood variables. Furthermore, we observed interactions between these two phonological neighbourhood variables and name agreement and repetition. Specifically, the facilitation effect was more pronounced for pictures with lower name agreement and during the initial presentation of the pictures. These findings are discussed in the context of previous literature and within the framework of interactive models of speech production. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. The Effect of Auditory Perceptual Training by Online Computer Software on English Pronunciation †.
- Author
-
Wang, Ching-Wen
- Subjects
SPEECH perception ,ONLINE education ,ENGLISH language ,EDUCATIONAL outcomes ,COMPUTER software - Abstract
Pronunciation is crucial to L2 learning. However, achieving speech proficiency is difficult. Class time constraints make demonstration–imitation pronunciation teaching methods less effective, even after repeated practice. Research suggests that pronunciation involves motor control, that auditory preparation enhances accuracy, and that learners produce more accurate pronunciation after perceiving accurate target sounds. This study proposes that the perception of accurate L2 target sounds will enhance pronunciation. To test this concept, the study employed online auditory training tasks for English learners enrolled at a private university in Taiwan. The results showed that auditory teaching results in positive learning outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Word frequency and cognitive effort in turns-at-talk: turn structure affects processing load in natural conversation.
- Author
-
Rühlemann, Christoph and Barthel, Mathias
- Subjects
WORD frequency ,DISTRIBUTION (Probability theory) ,SPEECH ,SOCIAL action ,KNOWLEDGE transfer ,LANGUAGE transfer (Language learning) ,FREEDOM of speech - Abstract
Frequency distributions are known to widely affect psycholinguistic processes. The effects of word frequency in turns-at-talk, the nucleus of social action in conversation, have, by contrast, been largely neglected. This study probes into this gap by applying corpus-linguistic methods on the conversational component of the British National Corpus (BNC) and the Freiburg Multimodal Interaction Corpus (FreMIC). The latter includes continuous pupil size measures of participants of the recorded conversations, allowing for a systematic investigation of patterns in the contained speech and language on the one hand and their relation to concurrent processing costs they may incur in speakers and recipients on the other hand. We test a first hypothesis in this vein, analyzing whether word frequency distributions within turns-at-talk are correlated with interlocutors' processing effort during the production and reception of these turns. Turns are found to generally show a regular distribution pattern of word frequency, with highly frequent words in turn-initial positions, mid-range frequency words in turn-medial positions, and low-frequency words in turn-final positions. Speakers' pupil size is found to tend to increase during the course of a turn at talk, reaching a climax toward the turn end. Notably, the observed decrease in word frequency within turns is inversely correlated with the observed increase in pupil size in speakers, but not in recipients, with steeper decreases in word frequency going along with steeper increases in pupil size in speakers. We discuss the implications of these findings for theories of speech processing, turn structure, and information packaging. Crucially, we propose that the intensification of processing effort in speakers during a turn at talk is owed to an informational climax, which entails a progression from high-frequency, low-information words through intermediate levels to low-frequency, high-information words. At least in English conversation, interlocutors seem to make use of this pattern as one way to achieve efficiency in conversational interaction, creating a regularly recurring distribution of processing load across speaking turns, which aids smooth turn transitions, content prediction, and effective information transfer. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Transfer of statistical learning from passive speech perception to speech production.
- Author
-
Murphy, Timothy K., Nozari, Nazbanou, and Holt, Lori L.
- Subjects
- *
STATISTICAL learning , *SPEECH perception , *TRANSFER of training , *SPEECH , *AMERICAN English language , *PROTOCOL analysis (Cognition) - Abstract
Communicating with a speaker with a different accent can affect one's own speech. Despite the strength of evidence for perception-production transfer in speech, the nature of transfer has remained elusive, with variable results regarding the acoustic properties that transfer between speakers and the characteristics of the speakers who exhibit transfer. The current study investigates perception-production transfer through the lens of statistical learning across passive exposure to speech. Participants experienced a short sequence of acoustically variable minimal pair (beer/pier) utterances conveying either an accent or typical American English acoustics, categorized a perceptually ambiguous test stimulus, and then repeated the test stimulus aloud. In the canonical condition, /b/–/p/ fundamental frequency (F0) and voice onset time (VOT) covaried according to typical English patterns. In the reverse condition, the F0xVOT relationship reversed to create an "accent" with speech input regularities atypical of American English. Replicating prior studies, F0 played less of a role in perceptual speech categorization in reverse compared with canonical statistical contexts. Critically, this down-weighting transferred to production, with systematic down-weighting of F0 in listeners' own speech productions in reverse compared with canonical contexts that was robust across male and female participants. Thus, the mapping of acoustics to speech categories is rapidly adjusted by short-term statistical learning across passive listening and these adjustments transfer to influence listeners' own speech productions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Phonetic and Lexical Crosslinguistic Influence in Early Spanish–Basque–English Trilinguals.
- Author
-
Stoehr, Antje, Jevtović, Mina, de Bruin, Angela, and Martin, Clara D.
- Subjects
- *
MULTILINGUALISM , *LANGUAGE & languages , *LINGUISTICS , *SPANISH language , *LEXICAL phonology - Abstract
A central question in multilingualism research is how multiple languages interact. Most studies have focused on first (L1) and second language (L2) effects on a third language (L3), but a small number of studies dedicated to the opposite transfer direction have suggested stronger L3 influence on L2 than on L1 in postpuberty learners. In our study, we provide further support for stronger L3-to-L2 than L3-to-L1 influence and show that it extends to (a) phonetics and the lexicon and (b) childhood learners. Fifty Spanish–Basque–English trilingual adults who had acquired Spanish from birth and Basque between 2 to 4 years of age through immersion participated in a speeded trilingual switching task measuring production of voice onset time and lexical intrusions. Participants experienced more phonetic and lexical crosslinguistic influence from L3 English during L2-Basque production than during L1-Spanish production. These findings show that even highly proficient early bilinguals experience differential influence from a classroom-taught L3 to L1 and to L2. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Discrepancy in prosodic disambiguation strategies between Chinese EFL learners and native English speakers.
- Author
-
Xue, Liya and Yue, Ming
- Subjects
- *
PROSODIC analysis (Linguistics) , *NATIVE language , *ENGLISH as a foreign language , *CHINESE students , *FRAMES (Linguistics) - Abstract
Previous studies on prosodic disambiguation have found Chinese EFL learners capable of using prosodic cues for both boundary marking and focus encoding in English, but somewhat differently from native English speakers. No clear understanding has yet been obtained about their overall use of prosodic strategies in speech production for disambiguation. In this study, we conducted a contextualized production task followed by perception judgments and acoustic analyses to investigate their prosodic disambiguation, with native English speakers as the contrast group. We considered three types of prosodic cues (duration, pitch, and intensity), and examined ambiguities in both syntactic structure and information structure. We found that Chinese EFL learners did alter their prosodic cues to disambiguate two readings, but differently from native English speakers in both cue number and cue combination. Specifically, they used a narrower range of cues and provided insufficient prosodic information, consequently leading to poor perception by native listeners. Our findings argue for prosodic disambiguation training in foreign language teaching. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Language Contact Within the Speaker: Phonetic Variation and Crosslinguistic Influence.
- Author
-
Johnson, Khia A. and Babel, Molly
- Subjects
- *
COMMUNICATIVE competence , *RESEARCH funding , *PHONOLOGICAL awareness , *MULTILINGUALISM , *LINGUISTICS , *SPEECH evaluation , *PHONETICS , *ENGLISH language , *HUMAN voice , *INTER-observer reliability - Abstract
A recent model of sound change posits that the direction of change is determined, at least in part, by the distribution of variation within speech communities. We explore this model in the context of bilingual speech, asking whether the less variable language constrains phonetic variation in the more variable language, using a corpus of spontaneous speech from early Cantonese–English bilinguals. As predicted, given the phonetic distributions of stop obstruents in Cantonese compared with English, intervocalic English /b d g/ were produced with less voicing for Cantonese–English bilinguals and word-final English /t k/ were more likely to be unreleased compared with spontaneous speech from two monolingual English control corpora. Whereas voicing initial obstruents can be gradient in Cantonese, the release of final obstruents is prohibited. Neither Cantonese–English bilingual initial voicing nor word-final stop release patterns were significantly impacted by language mode. These results provide evidence that the phonetic variation in crosslinguistically linked categories in bilingual speech is shaped by the distribution of phonetic variation within each language, thus suggesting a mechanistic account for why some segments are more susceptible to cross-language influence than others. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Repeat Buccal Flaps Successfully Reduce Hypernasality in a Patient with Cleft Palate.
- Author
-
Green, Jackson, Lignieres, Austin, Obinero, Chioma G., Nguyen, Phuong D., and Greives, Matthew R.
- Subjects
VOICE disorder treatment ,VOICE disorders ,SOFT palate ,SURGICAL flaps ,SUTURING ,PLASTIC surgery ,CLEFT palate ,VELOPHARYNGEAL insufficiency ,SPEECH therapy - Abstract
Surgical intervention can contribute to the development of velopharyngeal insufficiency (VPI), leading to hypernasality and regurgitation. In this case, a patient with a history of bilateral buccal flaps used for her primary cleft palate (CP) repair presented to the clinic with hypernasality and VPI as assessed by speech exam and imaging. She underwent repeat bilateral buccal flap palatal lengthening, with division of the pedicles 3 months later. Three months after the division, her hypernasality score improved from moderate to mild and her posterior gap decreased. This study concludes that buccal flaps can be used a second time in patients needing palatal revisions for VPI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF