912 results for "speech comprehension"
Search Results
2. Generalized Encoding of the Relative Subjective Value of Cognitive Effort in the Dorsal ACC.
- Author
-
Crawford, Jennifer L., Brough, Rachel E., Eisenstein, Sarah A., Peelle, Jonathan E., and Braver, Todd S.
- Subjects
- REWARD (Psychology), COGNITION, CINGULATE cortex, SHORT-term memory, SPEECH
- Abstract
Making choices about whether and when to engage cognitive effort is a common feature of everyday experience, with important consequences for academic, career, and health outcomes. Yet, despite their hypothesized importance, very little is understood about the underlying mechanisms that support this form of human cost–benefit decision-making. To investigate these mechanisms, we used the Cognitive Effort Discounting Paradigm (Cog-ED) during fMRI scanning to precisely quantify the neural encoding of varying cognitive effort demands relative to reward outcomes, within two distinct cognitive domains (working memory, speech comprehension). The findings provide strong evidence that the dorsal anterior cingulate cortex (dACC) plays a central and selective role in this decision-making process. Trial-by-trial modulations in dACC activation tracked the relative subjective value of the low-effort, low-reward option, with the strongest activity occurring when this was of greater value than the high-effort, high-reward option. In contrast, dACC activity was not modulated by decision difficulty, though such effects were found in other frontoparietal regions. Critically, dACC activity was also strongly correlated across the two decision-making task domains and further predicted subsequent choice behavior in both. Together, the results suggest that dACC activity modulation reflects a domain-general valuation comparison mechanism, which acts to bias participants away from decisions to engage in cognitive effort when the perceived subjective costs of such engagement outweigh the reward-related benefits. These findings complement work in other cost domains and species by pointing to a clear role of the dACC in representing subjective value differences between choice options during cost–benefit decision-making. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
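The Cog-ED paradigm described in this abstract estimates the subjective value of a high-effort option by titrating the reward offered for a low-effort alternative until the participant is indifferent between the two options. A minimal sketch of that staircase logic, assuming a hypothetical choice model `prefers_low_effort` and illustrative reward values (not the authors' implementation):

```python
def titrate_subjective_value(prefers_low_effort, base_reward=2.0, n_trials=6):
    """Staircase estimate of the subjective value of a high-effort option.

    The low-effort offer starts at half the high-effort reward and moves
    toward the indifference point: it is lowered after a low-effort choice
    and raised after a high-effort choice, with the step halved each trial.
    `prefers_low_effort(offer)` is a hypothetical model of the participant.
    """
    offer = base_reward / 2.0
    step = base_reward / 4.0
    for _ in range(n_trials):
        if prefers_low_effort(offer):
            offer -= step   # low-effort option chosen: make it less attractive
        else:
            offer += step   # high-effort option chosen: sweeten the offer
        step /= 2.0
    return offer            # approximate indifference point = subjective value
```

With a simulated participant whose true indifference point is 1.3 (against a 2.0 base reward), six halving steps converge to within about 0.1 of that value.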
3. Prediction of postoperative speech comprehension with the transcutaneous partially implantable bone conduction hearing system Osia® (article in German).
- Author
-
Arndt, Susan, Wesarg, Thomas, Aschendorff, Antje, Speck, Iva, Hocke, Thomas, Jakob, Till Fabian, and Rauch, Ann-Kathrin
- Abstract
Copyright of HNO is the property of Springer Nature and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
4. The flexibility and representational nature of phonological prediction in listening comprehension: evidence from the visual world paradigm.
- Author
-
Zhao, Zitong, Ding, Jinfeng, Wang, Jiayu, Chen, Yiya, and Li, Xiaoqing
- Subjects
- TONE (Phonetics), LISTENING comprehension, WORD recognition, NATIVE language, SPEECH, MANDARIN dialects, JUDGMENT (Psychology)
- Abstract
Using the visual world paradigm with printed words, this study investigated the flexibility and representational nature of phonological prediction in real-time speech processing. Native speakers of Mandarin Chinese listened to spoken sentences containing highly predictable target words and viewed a visual array with a critical word and a distractor word on the screen. The critical word was manipulated in four ways: a highly predictable target word, a homophone competitor, a tonal competitor, or an unrelated word. Participants showed a preference for fixating on the homophone competitors before hearing the highly predictable target word. The predicted phonological information waned shortly thereafter but was re-activated around the acoustic onset of the target word. Importantly, this homophone bias was observed only when participants were completing a 'pronunciation judgement' task, but not when they were completing a 'word judgement' task. No effect was found for the tonal competitors. The task modulation effect, combined with the temporal pattern of phonological pre-activation, indicates that phonological prediction can be flexibly generated by top-down mechanisms. The lack of a tonal competitor effect suggests that phonological features such as lexical tone are not independently predicted for anticipatory speech processing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. The rational roles of experiences of utterance meanings.
- Author
-
Brogaard, Berit
- Abstract
The perennial question of the nature of natural-language understanding has received renewed attention in recent years. Two kinds of natural-language understanding, in particular, have captivated the interest of philosophers: linguistic understanding and utterance understanding. While the literature is rife with discussions of linguistic understanding and utterance understanding, the question of how the two types of understanding explanatorily depend on each other has received relatively scant attention. Exceptions include the linguistic ability/know-how views of linguistic understanding proposed by Dean Pettit and Brendan Balcerak Jackson. On these views, to tacitly linguistically understand a sentence just is to possess the linguistic ability/knowledge-how needed to derive/infer what is said by different utterances of the sentence. Despite their focus on linguistic understanding, both views can straightforwardly explain utterance understanding as the output of a derivation/inference from a representation of the sentence uttered. Here, I take issue with these approaches to utterance understanding and then develop an alternative. More specifically, I distinguish two kinds of utterance understanding, experiential and doxastic, and then argue that experiences of what is said by utterances play distinct rational roles in the two kinds of utterance understanding. I conclude by addressing a recent challenge to my proposal. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. How does the human brain process noisy speech in real life? Insights from the second-person neuroscience perspective.
- Author
-
Li, Zhuoran and Zhang, Dan
- Abstract
Comprehending speech in the presence of background noise is of great importance for human life. Over the past decades, a large body of psychological, cognitive, and neuroscientific research has explored the neurocognitive mechanisms of speech-in-noise comprehension. However, limited by the low ecological validity of the speech stimuli and experimental paradigms, as well as inadequate attention to higher-order linguistic and extralinguistic processes, much remains unknown about how the brain processes noisy speech in real-life scenarios. A recently emerging approach, the second-person neuroscience approach, provides a novel conceptual framework. It measures both the speaker's and the listener's neural activities and estimates the speaker-listener neural coupling, treating the speaker's production-related neural activity as a standardized reference. The second-person approach not only promotes the use of naturalistic speech but also allows for free communication between speaker and listener, as in a close-to-life context. In this review, we first briefly review previous discoveries about how the brain processes speech in noise; then, we introduce the principles and advantages of the second-person neuroscience approach and discuss its implications for unraveling the linguistic and extralinguistic processes during speech-in-noise comprehension; finally, we conclude by raising some critical issues and calling for more research interest in the second-person approach, which would further extend present knowledge about how people comprehend speech in noise. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Effects of linguistic context and noise type on speech comprehension.
- Author
-
Fitzgerald, Laura P., DeDe, Gayle, and Shen, Jing
- Subjects
- LINGUISTIC context, SPEECH, LINGUISTIC complexity, SPEECH perception, PUPILLARY reflex, NOISE, INTELLIGIBILITY of speech
- Abstract
Introduction: Understanding speech in background noise is an effortful endeavor. When acoustic challenges arise, linguistic context may help us fill in perceptual gaps. However, more knowledge is needed regarding how different types of background noise affect our ability to construct meaning from perceptually complex speech input. Additionally, there is limited evidence regarding whether perceptual complexity (e.g., informational masking) and linguistic complexity (e.g., occurrence of contextually incongruous words) interact during processing of speech material that is longer and more complex than a single sentence. Our first research objective was to determine whether comprehension of spoken sentence pairs is impacted by the informational masking from a speech masker. Our second objective was to identify whether there is an interaction between perceptual and linguistic complexity during speech processing. Methods: We used multiple measures including comprehension accuracy, reaction time, and processing effort (as indicated by task-evoked pupil response), making comparisons across three different levels of linguistic complexity in two different noise conditions. Context conditions varied by final word, with each sentence pair ending with an expected exemplar (EE), within-category violation (WV), or between-category violation (BV). Forty young adults with typical hearing performed a speech comprehension in noise task over three visits. Each participant heard sentence pairs presented in either multi-talker babble or spectrally shaped steady-state noise (SSN), with the same noise condition across all three visits. Results: We observed an effect of context but not noise on accuracy. Further, we observed an interaction of noise and context in peak pupil dilation data. Specifically, the context effect was modulated by noise type: context facilitated processing only in the more perceptually complex babble noise condition. 
Discussion: These findings suggest that when perceptual complexity arises, listeners make use of the linguistic context to facilitate comprehension of speech obscured by background noise. Our results extend existing accounts of speech processing in noise by demonstrating how perceptual and linguistic complexity affect our ability to engage in higher-level processes, such as construction of meaning from speech segments that are longer than a single sentence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Children benefit from gestures to understand degraded speech but to a lesser extent than adults.
- Author
-
Sekine, Kazuki and Özyürek, Aslı
- Subjects
- SPEECH, CHILDREN'S language, GESTURE, ADULTS, VIDEO excerpts
- Abstract
The present study investigated to what extent children, compared to adults, benefit from gestures to disambiguate degraded speech by manipulating speech signals and manual modality. Dutch-speaking adults (N = 20) and 6- and 7-year-old children (N = 15) were presented with a series of video clips in which an actor produced a Dutch action verb with or without an accompanying iconic gesture. Participants were then asked to repeat what they had heard. The speech signal was either clear or altered into 4- or 8-band noise-vocoded speech. Children had more difficulty than adults in disambiguating degraded speech in the speech-only condition. However, when presented with both speech and gestures, children reached a comparable level of accuracy to that of adults in the degraded-speech-only condition. Furthermore, for adults, the enhancement of gestures was greater in the 4-band condition than in the 8-band condition, whereas children showed the opposite pattern. Gestures help children to disambiguate degraded speech, but children need more phonological information than adults to benefit from use of gestures. Children's multimodal language integration needs to further develop to adapt flexibly to challenging situations such as degraded speech, as tested in our study, or instances where speech is heard with environmental noise or through a face mask. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
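Noise-vocoded speech, the degradation used in this study, preserves the temporal envelope of speech within a small number of frequency bands while discarding spectral detail: the signal is split into bands, each band's amplitude envelope is extracted and used to modulate band-limited noise, and the modulated bands are summed. A rough sketch of the technique, assuming illustrative filter orders, band edges, and envelope cutoff (not the study's exact parameters):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=7000.0,
                 env_cut=30.0, seed=0):
    """Noise-vocode `speech` into `n_bands` log-spaced frequency bands."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)          # log-spaced band edges
    env_sos = butter(2, env_cut, btype="low", fs=fs, output="sos")
    noise = rng.standard_normal(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)               # analysis band
        env = sosfiltfilt(env_sos, np.abs(band))           # rectify + low-pass
        env = np.clip(env, 0.0, None)
        out += env * sosfiltfilt(band_sos, noise)          # envelope-modulated noise
    return out
```

With `n_bands=4` versus `n_bands=8` (as in the study's conditions), the 8-band version retains more spectral detail and is typically easier to understand.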
9. Prediction of postoperative speech comprehension with the transcutaneous partially implantable bone conduction hearing system Osia®.
- Author
-
Arndt, Susan, Wesarg, Thomas, Aschendorff, Antje, Speck, Iva, Hocke, Thomas, Jakob, Till Fabian, and Rauch, Ann-Kathrin
- Abstract
Background: The active transcutaneous, partially implantable osseointegrated bone conduction system Cochlear™ Osia® (Cochlear, Sydney, Australia) has been approved for use in German-speaking countries since April 2021. The Osia® is indicated for patients either having conductive (CHL) or mixed hearing loss (MHL) with an average bone conduction (BC) hearing loss of 55 dB HL or less, or having single-sided deafness (SSD). Objectives: The aim of this retrospective study was to investigate the prediction of postoperative speech recognition with the Osia® and to evaluate the speech recognition of patients with MHL, in particular those with an aided dynamic range of less than 30 dB. Materials and methods: Between 2017 and 2022, 29 adult patients were fitted with the Osia®: 10 patients (11 ears) with CHL and 19 patients (25 ears) with MHL. MHL was subdivided into two groups: MHL-I with a four-frequency pure-tone average in BC (BC-4PTA) ≥ 20 dB HL and < 40 dB HL (n = 15 patients; 20 ears) vs. MHL-II with BC-4PTA ≥ 40 dB HL (n = 4 patients; 5 ears). All patients tested a bone conduction hearing device (BCHD) on a softband preoperatively. Speech intelligibility in quiet was assessed using the Freiburg monosyllabic test preoperatively in the unaided condition and with the trial BCHD, and postoperatively with the Osia®. The maximum word recognition score (mWRS) unaided and the word recognition score (WRS) with the test system at 65 dB SPL were correlated with the postoperative WRS with the Osia® at 65 dB SPL. Results: Preoperative prediction of the postoperative outcome with the Osia® was better using the mWRS than the WRS at 65 dB SPL with the test device on the softband. The postoperative WRS was most predictable for patients with CHL and less predictable for patients with mixed hearing loss with BC-4PTA ≥ 40 dB HL. Results with the test device on a softband tended to mark the lower bound of the achievable outcome, whereas the mWRS tended to predict the realistically achievable outcome. 
Conclusion: Osia® can be used for the treatment of CHL and MHL within the indication limits. The average preoperative bone conduction hearing threshold also provides an approximate estimate of the postoperative WRS with Osia®, for which the most accurate prediction is obtained using the preoperative mWRS. Prediction accuracy decreases from a BC-4PTA of ≥ 40 dB HL. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Time course of effective connectivity associated with perspective taking in utterance comprehension.
- Author
-
Tokimoto, Shingo and Tokimoto, Naoko
- Subjects
- PERSPECTIVE taking, EXECUTIVE function, CONTROL (Psychology), MIRROR neurons, ORAL communication, JAPANESE language, SPATIO-temporal variation
- Abstract
This study discusses the effective connectivity in the brain and its time course in realizing perspective taking in verbal communication through electroencephalogram (EEG) associated with the understanding of Japanese utterances. We manipulated perspective taking in a sentence with the Japanese subsidiary verbs -ageru and -kureru, which mean "to give". We measured the EEG during the auditory presentation of the sentences with a multichannel electroencephalograph, and the partial directed coherence and its temporal variations were analyzed using the source localization method to examine causal interactions between nineteen regions of interest in the brain. Three different processing stages were recognized on the basis of the connectivity hubs, direction of information flow, increase or decrease in flow, and temporal variation. We suggest that perspective taking in speech comprehension is realized by interactions between the mentalizing network, mirror neuron network, and executive control network. Furthermore, we found that individual differences in the sociality of typically developing adult speakers were systematically related to effective connectivity. In particular, attention switching was deeply involved in perspective taking in real time, and the precuneus played a crucial role in implementing individual differences. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Effects of linguistic context and noise type on speech comprehension
- Author
-
Laura P. Fitzgerald, Gayle DeDe, and Jing Shen
- Subjects
speech comprehension, linguistic context, speech in noise, acoustic challenges, task-evoked pupil response, speech perception, Psychology, BF1-990
- Abstract
Introduction: Understanding speech in background noise is an effortful endeavor. When acoustic challenges arise, linguistic context may help us fill in perceptual gaps. However, more knowledge is needed regarding how different types of background noise affect our ability to construct meaning from perceptually complex speech input. Additionally, there is limited evidence regarding whether perceptual complexity (e.g., informational masking) and linguistic complexity (e.g., occurrence of contextually incongruous words) interact during processing of speech material that is longer and more complex than a single sentence. Our first research objective was to determine whether comprehension of spoken sentence pairs is impacted by the informational masking from a speech masker. Our second objective was to identify whether there is an interaction between perceptual and linguistic complexity during speech processing. Methods: We used multiple measures including comprehension accuracy, reaction time, and processing effort (as indicated by task-evoked pupil response), making comparisons across three different levels of linguistic complexity in two different noise conditions. Context conditions varied by final word, with each sentence pair ending with an expected exemplar (EE), within-category violation (WV), or between-category violation (BV). Forty young adults with typical hearing performed a speech comprehension in noise task over three visits. Each participant heard sentence pairs presented in either multi-talker babble or spectrally shaped steady-state noise (SSN), with the same noise condition across all three visits. Results: We observed an effect of context but not noise on accuracy. Further, we observed an interaction of noise and context in peak pupil dilation data. Specifically, the context effect was modulated by noise type: context facilitated processing only in the more perceptually complex babble noise condition. Discussion: These findings suggest that when perceptual complexity arises, listeners make use of the linguistic context to facilitate comprehension of speech obscured by background noise. Our results extend existing accounts of speech processing in noise by demonstrating how perceptual and linguistic complexity affect our ability to engage in higher-level processes, such as construction of meaning from speech segments that are longer than a single sentence.
- Published
- 2024
- Full Text
- View/download PDF
12. Children benefit from gestures to understand degraded speech but to a lesser extent than adults
- Author
-
Kazuki Sekine and Aslı Özyürek
- Subjects
co-speech gestures, degraded speech, primary school children, multimodal integration, speech comprehension, Psychology, BF1-990
- Abstract
The present study investigated to what extent children, compared to adults, benefit from gestures to disambiguate degraded speech by manipulating speech signals and manual modality. Dutch-speaking adults (N = 20) and 6- and 7-year-old children (N = 15) were presented with a series of video clips in which an actor produced a Dutch action verb with or without an accompanying iconic gesture. Participants were then asked to repeat what they had heard. The speech signal was either clear or altered into 4- or 8-band noise-vocoded speech. Children had more difficulty than adults in disambiguating degraded speech in the speech-only condition. However, when presented with both speech and gestures, children reached a comparable level of accuracy to that of adults in the degraded-speech-only condition. Furthermore, for adults, the enhancement of gestures was greater in the 4-band condition than in the 8-band condition, whereas children showed the opposite pattern. Gestures help children to disambiguate degraded speech, but children need more phonological information than adults to benefit from use of gestures. Children’s multimodal language integration needs to further develop to adapt flexibly to challenging situations such as degraded speech, as tested in our study, or instances where speech is heard with environmental noise or through a face mask.
- Published
- 2024
- Full Text
- View/download PDF
13. Continuous speech segmentation by L1 and L2 speakers of English: the role of syntactic and prosodic cues.
- Author
-
Dobrego, Aleksandra, Konina, Alena, and Mauranen, Anna
- Subjects
- SECOND language acquisition, ENGLISH language, COMPREHENSION, MULTILINGUALISM, LANGUAGE & languages
- Abstract
We investigated how native and fluent users of English segment continuous speech and to what extent they use sound-related and structure-related cues. As suggested by the notion of multi-competence, L1 users are not seen as ideal models with perfect command of English, and L2 users not as lacking in competence. We wanted to see how language experience affects speech segmentation. We had participants listen to extracts of spontaneous spoken English and asked them to mark boundaries between speech segments. We found that in chunking authentic speech, both groups made the most use of prosody, with L1 users relying slightly more on it. However, the groups did not differ in segmentation strategies and performed alike in efficiency and agreement. Results show that in line with multi-competence, native speakers do not have an advantage over fluent speakers in higher-level speech processes, and that the outcome of natural speech comprehension does not significantly depend on the different language experiences at high levels of fluency. We suggest that research on speech segmentation should take natural continuous speech on board and investigate fluent users independently of their L1s to gain a more holistic view of processes and consequences of speech segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Extended Preoperative Audiometry for Outcome Prediction and Risk Analysis in Patients Receiving Cochlear Implants.
- Author
-
Rieck, Jan-Henrik, Beyer, Annika, Mewes, Alexander, Caliebe, Amke, and Hey, Matthias
- Subjects
- COCHLEAR implants, AUDIOMETRY, RISK assessment, HEARING aids, SPEECH, WORD recognition
- Abstract
Background: The outcome of cochlear implantation has improved over the last decades, but some patients still benefit less than others. Despite numerous studies examining the cochlear implant (CI) outcome, variations in speech comprehension with a CI remain incompletely explained. The aim of this study was therefore to examine the preoperative pure-tone audiogram and speech comprehension, as well as aetiology, to investigate their relationship with postoperative speech comprehension in CI recipients. Methods: A retrospective study of 664 ears of 530 adult patients was conducted. Correlations of the target variable, postoperative word comprehension, with preoperative speech and sound comprehension as well as aetiology were investigated. Significant correlates were entered into multivariate models. Speech comprehension, measured as the word recognition score at 70 dB with CI, was analyzed as (i) a continuous and (ii) a dichotomous variable. Results: All variables that tested preoperative hearing were significantly correlated with the dichotomous target; with the continuous target, all except word comprehension at 65 dB with hearing aid. The strongest correlation with postoperative speech comprehension was seen for monosyllabic words with hearing aid at 80 dB. The preoperative maximum word comprehension was reached or surpassed by 97.3% of CI patients. Meningitis and congenital diseases were strongly negatively associated with postoperative word comprehension. The multivariate model was able to explain 40% of postoperative variability. Conclusion: Speech comprehension with a hearing aid at 80 dB can be used as a supplementary preoperative indicator of CI-aided speech comprehension and should be measured regularly in clinical routine. Combining audiological and aetiological variables provides more insight into the variability of the CI outcome, allowing for better patient counselling. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. A Parsimonious Look at Neural Oscillations in Speech Perception
- Author
-
Tune, Sarah and Obleser, Jonas; in: Holt, Lori L., Peelle, Jonathan E., and Coffin, Allison B. (volume editors); Fay, Richard R. and Popper, Arthur N. (series editors)
- Published
- 2022
- Full Text
- View/download PDF
16. Predictions in Conversation
- Author
-
Magyari, Lilla; in: Gervain, Judit, Csibra, Gergely, and Kovács, Kristóf (volume editors)
- Published
- 2022
- Full Text
- View/download PDF
17. Decreased Speech Comprehension and Increased Vocal Efforts Among Healthcare Providers Using N95 Mask.
- Author
-
Wadia, Jehaan A. and Joshi, Anagha A.
- Subjects
- N95 respirators, MEDICAL personnel, SPEECH, SPEECH perception, COVID-19
- Abstract
Aim: N95 masks are recommended for healthcare providers (HCPs) taking care of patients with coronavirus disease 2019. However, the use of these masks hampers communication. We aimed to evaluate the effect of N95 masks on speech comprehension among listeners and on the vocal efforts (VEs) of the HCPs. Materials and Methods: This prospective study involved 50 HCPs. We used a single observer with normal hearing to assess the difficulty in comprehension, while VE was estimated in the HCPs. The speech reception threshold (SRT), speech discrimination score (SDS), and VE were evaluated initially without the N95 mask and then again with the HCPs wearing the N95 mask. Results: The use of masks resulted in a statistically significant increase in mean SRT [4.25 (1.65) dB] and VE [2.6 (0.69)], with a simultaneous decrease in mean SDS [19.2 (8.77)] (all p-values < 0.0001). Moreover, demographic parameters including age, sex, and profession were not associated with the change in SRT, SDS, or VE (all p-values > 0.05). Conclusion: Though the use of N95 masks protects the HCPs against the viral infection, it results in decreased speech comprehension and increased VEs. Moreover, these issues are universal among the HCPs and are applicable to the general public as well. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Speech comprehension in noisy environments: Evidence from the predictability effects on the N400 and LPC.
- Author
-
Cheng-Hung Hsin, Pei-Chun Chao, and Chia-Ying Lee
- Subjects
- SPEECH, SPEECH processing systems, COMPUTATIONAL linguistics, EVOKED potentials (Electrophysiology), ELECTROENCEPHALOGRAPHY
- Abstract
Introduction: Speech comprehension involves context-based lexical predictions for efficient semantic integration. This study investigated how noise affects the predictability effect on event-related potentials (ERPs) such as the N400 and late positive component (LPC) in speech comprehension. Methods: Twenty-seven listeners were asked to comprehend sentences in clear and noisy conditions (hereinafter referred to as "clear speech" and "noisy speech," respectively) that ended with a high- or low-predictability word during electroencephalogram (EEG) recordings. Results: The study results regarding clear speech showed the predictability effect on the N400, wherein low-predictability words elicited a larger N400 amplitude than did high-predictability words in the centroparietal and frontocentral regions. Noisy speech showed a reduced and delayed predictability effect on the N400 in the centroparietal regions. Additionally, noisy speech showed a predictability effect on the LPC in the centroparietal regions. Discussion: These findings suggest that listeners achieve comprehension outcomes through different neural mechanisms according to listening conditions. Noisy speech may be comprehended with a second-pass process that possibly functions to recover the phonological form of degraded speech through phonetic reanalysis or repair, thus compensating for decreased predictive efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. Expansion in speech time can restore comprehension in a simultaneously speaking bilingual robot
- Author
-
Hamed Pourfannan, Hamed Mahzoon, Yuichiro Yoshikawa, and Hiroshi Ishiguro
- Subjects
bilingual robot, competing-talker speech, human-robot interaction, pause duration, speech comprehension, speech expansion, Mechanical engineering and machinery, TJ1-1570, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Introduction: This study was motivated by the development of a social robot capable of giving speech simultaneously in more than one language. However, the negative effect of background noise on speech comprehension is well documented in previous work, and this deteriorating effect is stronger when the background noise has speech-like properties. Hence, the presence of speech as background noise in a simultaneously speaking bilingual robot can severely impair the speech comprehension of each person listening to the robot. Methods: To improve speech comprehension and, consequently, user experience with the intended bilingual robot, the effect of time expansion on speech comprehension in a multi-talker speech scenario was investigated. Sentence recognition, speech comprehension, and subjective evaluation tasks were implemented in the study. Results: The obtained results suggest that a reduced speech rate, leading to an expansion in speech time, in addition to increased pause duration in both the target and background speech, can lead to statistically significant improvement in both sentence recognition and speech comprehension. More interestingly, participants scored higher with the time-expanded multi-talker speech than with the standard-speed single-talker speech in both the speech comprehension and sentence recognition tasks. However, this positive effect could not be attributed merely to time expansion, as the same positive effect was not found for time-expanded single-talker speech. Discussion: The results obtained in this study suggest a facilitating effect of the presence of background speech in a simultaneously speaking bilingual robot, provided that both languages are presented in a time-expanded manner. The implications of such a simultaneously speaking robot are discussed.
- Published
- 2023
- Full Text
- View/download PDF
20. Individual theta-band cortical entrainment to speech in quiet predicts word-in-noise comprehension.
- Author
-
Becker, Robert and Hervais-Adelman, Alexis
- Subjects
- *
SPEECH processing systems , *SPEECH , *TEMPORAL lobe , *INTELLIGIBILITY of speech , *PREMOTOR cortex - Abstract
Speech elicits brain activity time-locked to its amplitude envelope. The resulting speech-brain synchrony (SBS) is thought to be crucial to speech parsing and comprehension. It has been shown that higher speech-brain coherence is associated with increased speech intelligibility. However, studies depending on the experimental manipulation of speech stimuli do not allow conclusions about the causality of the observed tracking. Here, we investigate whether individual differences in the intrinsic propensity to track the speech envelope when listening to speech-in-quiet are predictive of individual differences in speech-recognition-in-noise, in an independent task. We evaluated the cerebral tracking of speech in source-localized magnetoencephalography, at timescales corresponding to phrases, words, syllables, and phonemes. We found that individual differences in syllabic tracking in right superior temporal gyrus and in left middle temporal gyrus (MTG) were positively associated with recognition accuracy in an independent words-in-noise task. Furthermore, directed connectivity analysis showed that this relationship is partially mediated by top-down connectivity from premotor cortex—associated with speech processing and active sensing in the auditory domain—to left MTG. Thus, the extent of SBS—even during clear speech—reflects an active mechanism of the speech processing system that may confer resilience to noise. [ABSTRACT FROM AUTHOR]
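The speech-brain coherence measure underlying analyses like this is magnitude-squared coherence between the speech envelope and a neural time series. A minimal pure-NumPy Welch-style sketch follows; the window, overlap, and segment length are illustrative defaults, not the authors' source-localized MEG pipeline.

```python
import numpy as np

def msc(x, y, fs, nperseg=256):
    """Magnitude-squared coherence of two signals via Welch averaging.

    Uses a Hann window with 50% overlap (illustrative choices). Returns
    the frequency axis and the coherence spectrum in [0, 1].
    """
    step = nperseg // 2
    win = np.hanning(nperseg)
    pxx = pyy = pxy = 0.0
    for start in range(0, len(x) - nperseg + 1, step):
        X = np.fft.rfft(win * x[start:start + nperseg])
        Y = np.fft.rfft(win * y[start:start + nperseg])
        pxx = pxx + np.abs(X) ** 2       # auto-spectra, summed over segments
        pyy = pyy + np.abs(Y) ** 2
        pxy = pxy + X * np.conj(Y)       # cross-spectrum
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, np.abs(pxy) ** 2 / (pxx * pyy)
```

For speech-brain synchrony, `x` would be the speech amplitude envelope and `y` a neural time series; values near 1 at, for example, the syllabic rate indicate strong tracking.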
- Published
- 2023
- Full Text
- View/download PDF
21. Visual Speech Improves Older and Younger Adults' Response Time and Accuracy for Speech Comprehension in Noise.
- Author
-
Beadle, Julie, Kim, Jeesun, and Davis, Chris
- Subjects
SPEECH perception ,INTELLIGIBILITY of speech ,NOISE ,AUDITORY perception ,TASK performance ,FACIAL expression ,VISUAL perception ,DESCRIPTIVE statistics ,REACTION time ,SENSITIVITY & specificity (Statistics) - Abstract
Past research suggests that older adults expend more cognitive resources when processing visual speech than younger adults. If so, given resource limitations, older adults may not get as large a visual speech benefit as younger ones on a resource-demanding speech processing task. We tested this using a speech comprehension task that required attention across two talkers and a simple response (i.e., the question-and-answer task) and measured response time and accuracy. Specifically, we compared the size of visual speech benefit for older and younger adults. We also examined whether the presence of a visual distractor would reduce the visual speech benefit more for older than younger adults. Twenty-five older adults (12 females, M Age = 72) and 25 younger adults (17 females, M Age = 22) completed the question-and-answer task under time pressure. The task included the following conditions: auditory and visual (AV) speech; AV speech plus visual distractor; and auditory speech with static face images. Both age groups showed a visual speech benefit regardless of whether a visual distractor was also presented. Likewise, the size of the visual speech benefit did not significantly interact with age group for accuracy or the potentially more sensitive response time measure. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
22. The effect of internet telephony and a cochlear implant accessory on mobile phone speech comprehension in cochlear implant users.
- Author
-
Huth, Markus E., Boschung, Regula L., Caversaccio, Marco D., Wimmer, Wilhelm, and Mantokoudis, Georgios
- Subjects
- *
INTERNET telephony , *COCHLEAR implants , *SPEECH , *CELL phones , *TELEPHONE calls , *COMMUNICATION devices for people with disabilities - Abstract
Purpose: In individuals with severe hearing loss, mobile phone communication is limited despite treatment with a cochlear implant (CI). The goal of this study is to identify the best communication practice for CI users by comparing speech comprehension of conventional mobile phone (GSM) calls, Voice over Internet Protocol (VoIP) calls, and the application of a wireless phone clip (WPC) accessory. Methods: This study included 13 individuals (mean age 47.1 ± 17.3 years) with at least one CI. Frequency response and objective voice quality were tested for each device, transmission mode and the WPC. We measured speech comprehension using a smartphone for a GSM call with and without WPC as well as VoIP-calls with and without WPC at different levels of white background noise. Results: Frequency responses of the WPC were limited (< 4 kHz); however, speech comprehension in a noisy environment was significantly improved compared to GSM. Speech comprehension was improved by 9–27% utilizing VoIP or WPC compared to GSM. WPC was superior in noisy environments (80 dB SPL broadband noise) compared to GSM. At lower background noise levels (50, 60, 70 dB SPL broadband noise), VoIP resulted in improved speech comprehension with and without WPC. Speech comprehension scores did not correlate with objective voice quality measurements. Conclusion: Speech comprehension was best with VoIP alone; however, accessories such as a WPC provide additional improvement in the presence of background noise. Mobile phone calls utilizing VoIP technology, with or without a WPC accessory, result in superior speech comprehension compared to GSM. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. The use of exemplars differs between native and non-native listening.
- Author
-
Nijveld, Annika, ten Bosch, Louis, and Ernestus, Mirjam
- Subjects
- *
SPEECH , *NATIVE language , *LISTENING , *PHONOLOGY , *DUTCH language , *VOWELS , *HUMAN voice - Abstract
This study compares the role of exemplars in native and non-native listening. Two English identity priming experiments were conducted with native English, Dutch non-native, and Spanish non-native listeners. In Experiment 1, primes and targets were spoken in the same or a different voice. Only the native listeners showed exemplar effects. In Experiment 2, primes and targets had the same or a different degree of vowel reduction. The Dutch, but not the Spanish, listeners were familiar with this reduction pattern from their L1 phonology. In this experiment, exemplar effects only arose for the Spanish listeners. We propose that in these lexical decision experiments, the use of exemplars is co-determined by listeners' available processing resources, which in turn are modulated by familiarity with the variation type from the L1 phonology. The use of exemplars differs between native and non-native listening, suggesting qualitative differences between native and non-native speech comprehension processes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
24. Domain-general cognitive motivation: Evidence from economic decision-making – Final Registered Report
- Author
-
Jennifer L. Crawford, Sarah A. Eisenstein, Jonathan E. Peelle, and Todd S. Braver
- Subjects
Cognitive motivation ,Listening effort ,Working memory ,Speech comprehension ,Consciousness. Cognition ,BF309-499 - Abstract
Stable individual differences in cognitive motivation (i.e., the tendency to engage in and enjoy effortful cognitive activities) have been documented with self-report measures, yet convergent support for a trait-level construct is still lacking. In the present study, we used an innovative decision-making paradigm (COG-ED) to quantify the costs of cognitive effort, a metric of cognitive motivation, across two distinct cognitive domains: working memory (an N-back task) and speech comprehension (understanding spoken sentences in background noise). We hypothesized that cognitive motivation operates similarly within individuals, regardless of domain. Specifically, in 104 adults aged 18–40 years, we tested whether individual differences in effort costs are stable across domains, even after controlling for other potential sources of shared individual variation. Conversely, we evaluated whether the costs of cognitive effort across domains may be better explained in terms of other relevant cognitive and personality-related constructs, such as working memory capacity or reward sensitivity. We confirmed a reliable association among effort costs in both domains, even when these other sources of individual variation, as well as task load, are statistically controlled. Taken together, these results add support for trait-level variation in cognitive motivation impacting effort-based decision making across multiple domains.
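Effort-discounting paradigms of this kind typically quantify an effort cost by titrating the offer for the low-effort option toward an indifference point. The sketch below is a schematic illustration of such a titration; the function names, step sizes, and dollar amounts are assumptions for the example, not the paper's exact COG-ED procedure.

```python
def titrate_effort_cost(choose_high_effort, base_reward=2.0, n_trials=6):
    """Estimate the subjective value (SV) of a fixed high-effort offer.

    `choose_high_effort(low_offer, high_offer)` models one choice between a
    low-effort option paying `low_offer` and a high-effort option paying
    `high_offer`. The low-effort offer is adjusted in halved steps toward
    the indifference point (illustrative titration scheme).
    """
    low_offer, step = base_reward / 2, base_reward / 4
    for _ in range(n_trials):
        if choose_high_effort(low_offer, base_reward):
            low_offer += step   # high effort still chosen: raise the easy offer
        else:
            low_offer -= step   # low effort chosen: lower it
        step /= 2
    return low_offer            # ~ SV of the high-effort option

def effort_cost(sv, base_reward=2.0):
    """Effort cost = objective reward minus its effort-discounted value."""
    return base_reward - sv
```

A simulated participant who is indifferent at $1.50 yields an estimated SV near 1.5 and hence an effort cost near $0.50 of the $2.00 base reward.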
- Published
- 2022
- Full Text
- View/download PDF
25. Dissociating prosodic from syntactic delta activity during natural speech comprehension.
- Author
-
Chalas, Nikos, Meyer, Lars, Lo, Chia-Wen, Park, Hyojin, Kluger, Daniel S., Abbasi, Omid, Kayser, Christoph, Nitsch, Robert, and Gross, Joachim
- Subjects
- *
SPEECH , *MAGNETOENCEPHALOGRAPHY , *PROSODIC analysis (Linguistics) , *SYNTAX (Grammar) , *ELECTROPHYSIOLOGY , *AUDITORY cortex - Abstract
Decoding human speech requires the brain to segment the incoming acoustic signal into meaningful linguistic units, ranging from syllables and words to phrases. Integrating these linguistic constituents into a coherent percept sets the root of compositional meaning and hence understanding. One important cue for segmentation in natural speech is prosody, such as pauses, but its interplay with higher-level linguistic processing is still unknown. Here, we dissociate the neural tracking of prosodic pauses from the segmentation of multi-word chunks using magnetoencephalography (MEG). We find that manipulating the regularity of pauses disrupts slow speech-brain tracking bilaterally in auditory areas (below 2 Hz) and in turn increases left-lateralized coherence of higher-frequency auditory activity at speech onsets (around 25–45 Hz). Critically, we also find that multi-word chunks—defined as short, coherent bundles of inter-word dependencies—are processed through the rhythmic fluctuations of low-frequency activity (below 2 Hz) bilaterally and independently of prosodic cues. Importantly, low-frequency alignment at chunk onsets increases the accuracy of an encoding model in bilateral auditory and frontal areas while controlling for the effect of acoustics. Our findings provide novel insights into the neural basis of speech perception, demonstrating that both acoustic features (prosodic cues) and abstract linguistic processing at the multi-word timescale are underpinned independently by low-frequency electrophysiological brain activity in the delta frequency range.
• We identified coherent word bundles in a story based on inter-word dependencies
• Onsets of contextual speech chunks were dissociated from their acoustic markers
• We show evidence of speech-chunk processing, independently of acoustic onsets
• Delta activity in auditory cortices explains both prosodic and syntactic processing
Chalas et al. use an acoustically manipulated audiobook and a word-bundle annotation to show that during listening, the linguistic processing of incoming speech (traced in frontal and auditory cortices) interacts with the acoustic processing of the input, as reflected in the low-frequency delta activity of auditory cortices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Words and non-speech sounds access lexical and semantic knowledge differently
- Author
-
Chen, Peiyao, Bartolotti, James, Schroeder, Scott R, Rochanavibhata, Sirada, and Marian, Viorica
- Subjects
speech comprehension ,sound processing ,lexical competition ,semantic competition ,eye-tracking - Abstract
Using an eye-tracking paradigm, we examined the strength and speed of access to lexical knowledge (e.g., our representation of the word dog in our mental vocabulary) and semantic knowledge (e.g., our knowledge that a dog is associated with a leash) via both spoken words (e.g., "dog") and characteristic sounds (e.g., a dog's bark). Results show that both spoken words and characteristic sounds activate lexical and semantic knowledge, but with different patterns. Spoken words activate lexical knowledge faster than characteristic sounds do, but with the same strength. In contrast, characteristic sounds access semantic knowledge more strongly than spoken words do, but with the same speed. These findings reveal similarities and differences in the activation of conceptual knowledge by verbal and non-verbal means and advance our understanding of how auditory input is cognitively processed.
- Published
- 2018
27. 'Test porozumění větám' - Czech version with Standards for Adults
- Author
-
Lucie Nohová and Kateřina Vitásková
- Subjects
test porozumění větám ,standards ,speech-language pathology ,communication disorder ,speech comprehension ,sentence comprehension ,diagnostics ,Medicine ,Oral communication. Speech ,P95-95.6 - Abstract
The article presents "Test porozumění větám", the adapted Czech version of the original Slovak test. The aim of the article is to briefly describe the test, the adaptation process, and the setting of standards for the Czech version, along with some results and discussion. Percentile standards for adults were determined with respect to age (Nohová et al., in preparation). The diagnostic tool is used for an in-depth assessment of spoken language comprehension at the sentence level. It contains 48 sentences with various syntactic constructions and linguistic factors. It can be used with a range of patients, including people with aphasia, cognitive-communication disorders, and others. Its further use is the subject of follow-up and future research.
- Published
- 2021
- Full Text
- View/download PDF
28. Rational speech comprehension: Interaction between predictability, acoustic signal, and noise
- Author
-
Marjolein Van Os, Jutta Kray, and Vera Demberg
- Subjects
speech comprehension ,background noise ,mishearing ,predictive context ,rational processing ,noisy channel ,Psychology ,BF1-990 - Abstract
Introduction: During speech comprehension, multiple sources of information are available to listeners, which are combined to guide the recognition process. Models of speech comprehension posit that when the acoustic speech signal is obscured, listeners rely more on information from other sources. However, these models take into account only word frequency information and local contexts (surrounding syllables), not sentence-level information. To date, empirical studies investigating predictability effects in noise have not carefully controlled the tested speech sounds, while the literature investigating the effect of background noise on the recognition of speech sounds does not manipulate sentence predictability. Additionally, studies on the effect of background noise show conflicting results regarding which noise type affects speech comprehension most. We address this in the present experiment.
Methods: We investigate how listeners combine information from different sources when listening to sentences embedded in background noise. We manipulate top-down predictability, type of noise, and characteristics of the acoustic signal, thus creating conditions which differ in the extent to which a specific speech sound is masked, in a way that is grounded in prior work on the confusability of speech sounds in noise. Participants completed an online word recognition experiment.
Results and discussion: The results show that participants rely more on the provided sentence context when the acoustic signal is harder to process. This is the case even when interactions of the background noise and speech sounds lead to small differences in intelligibility. Listeners probabilistically combine top-down predictions based on context with noisy bottom-up information from the acoustic signal, leading to a trade-off between the two types of information that depends on the combination of a specific type of background noise and speech sound.
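The probabilistic combination described here follows Bayes' rule: the posterior over candidate words is proportional to the context-based prior times the acoustic (confusion) likelihood. The toy sketch below uses invented words and probabilities purely for illustration; it is not the paper's fitted model.

```python
def word_posterior(prior, likelihood):
    """Bayesian combination of sentence-context priors with an acoustic
    likelihood (confusion probabilities under a given noise type).

    prior:      {word: P(word | sentence context)}
    likelihood: {word: P(acoustic input | word)}
    Returns the normalized posterior P(word | context, acoustics).
    """
    unnorm = {w: prior[w] * likelihood[w] for w in prior}
    z = sum(unnorm.values())
    return {w: p / z for w, p in unnorm.items()}
```

With a strongly predictive context the prior dominates a degraded signal, while with a flat prior the acoustics decide, reproducing the trade-off the abstract describes.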
- Published
- 2022
- Full Text
- View/download PDF
29. [Audiological outcome of bimodal CI users over time and depending on different influencing factors].
- Author
-
Schlegel H, Hartmann S, Kreikemeier S, Dalhoff E, Löwenheim H, and Tropitzsch A
- Subjects
- Humans, Male, Female, Middle Aged, Treatment Outcome, Germany, Aged, Retrospective Studies, Adult, Longitudinal Studies, Cochlear Implantation, Speech Perception, Deafness rehabilitation, Correction of Hearing Impairment methods, Young Adult, Risk Factors, Aged, 80 and over, Cochlear Implants
- Abstract
Background: Hearing-impaired persons with asymmetric hearing loss and a unilateral indication for a cochlear implant (CI) generally benefit from a bimodal hearing solution. The influence of bimodal fitting on speech comprehension (SC) over time has not yet been sufficiently investigated. The present study examines the influence of bimodal fitting on SC in bimodally fitted CI users with postlingual deafness at least 36 months after implantation and analyzes possible influencing factors.
Methods: This retrospective longitudinal study included 54 bimodally fitted, speech-competent CI users with at least 36 months of CI experience. Audiometric data of these CI users at predefined timepoints were compared.
Results: The change in the results of the Freiburg monosyllabic test (FT) over 36 months was significant (p < 5%) for the group with a deafness duration of <10 years at both 65 dB sound pressure level (SPL) and 80 dB SPL, and also significant for the group with a deafness duration of ≥10 years at 65 dB SPL. In the Oldenburg sentence test (OlSa), there was a highly significant change (p < 0.1%) for the S0, S0N0, and S0NCI configurations and a very significant change (p < 1%) for S0NHA (HA: hearing aid). Age at implantation could not be confirmed as an influencing factor in the FT. In contrast, duration of deafness was a negative influencing factor for SC with CI in the FT: a longer duration of deafness was associated with worse FT results. The degree of hearing loss in the ear fitted with an HA did not influence SC. The median bimodal benefit (here: the difference in SC with bimodal fitting compared to unilateral HA fitting for the FT at 65 dB SPL) was 10% over the total study period. For a median of 79% of the test subjects, the bimodal benefit was found over the entire period of 36 months.
Conclusion: Over time, SC improves significantly with a CI for the bimodal test subjects. The investigated influencing factors (age, duration of deafness, and degree of hearing loss in the contralateral ear) support the indication for bimodal provision in accordance with the German guideline for cochlear implantation, regardless of age, duration of deafness, and hearing ability of the contralateral ear. (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
30. [Prediction of speech understanding with the transcutaneous partially implantable bone conduction hearing system Osia®. German Version].
- Author
-
Arndt S, Wesarg T, Aschendorff A, Speck I, Hocke T, Jakob TF, and Rauch AK
- Subjects
- Humans, Female, Male, Middle Aged, Adult, Treatment Outcome, Germany, Aged, Reproducibility of Results, Cochlear Implants, Prosthesis Design, Sensitivity and Specificity, Equipment Failure Analysis, Hearing Loss, Conductive diagnosis, Hearing Loss, Conductive surgery, Hearing Loss, Conductive physiopathology, Hearing Loss, Conductive rehabilitation, Hearing Aids, Young Adult, Retrospective Studies, Translating, Bone Conduction physiology, Speech Perception
- Abstract
Background: The active transcutaneous, partially implantable osseointegrated bone conduction system Cochlear™ Osia® (Cochlear, Sydney, Australia) has been approved for use in German-speaking countries since April 2021. The Osia is indicated for patients with conductive hearing loss (CHL) or mixed hearing loss (MHL) with an average bone conduction (BC) hearing loss of 55 dB or less, or with single-sided deafness (SSD).
Objectives: The aim of this retrospective study was to investigate the prediction of postoperative speech recognition with the Osia and to evaluate the speech recognition with the Osia of patients with MHL and an aided dynamic range of less than 30 dB.
Materials and Methods: Between 2017 and 2022, 29 adult patients were fitted with the Osia: 10 patients (11 ears) with CHL and 19 patients (21 ears) with MHL. MHL was subdivided into two groups: MHL-I, with a four-frequency pure-tone average in BC (BC-4PTA) ≥ 20 dB HL and < 40 dB HL (n = 15 patients; 20 ears), vs. MHL-II, with BC-4PTA ≥ 40 dB HL (n = 4 patients; 5 ears). All patients tested a bone conduction hearing device on a softband preoperatively. Speech intelligibility in quiet was assessed preoperatively using the Freiburg monosyllabic test, both unaided and with the test system, and postoperatively with the Osia. The maximum monosyllabic score (mEV) unaided and the monosyllabic score with the test system at 65 dB SPL were correlated with the postoperative monosyllabic score with the Osia at 65 dB SPL.
Results: Preoperative prediction of the postoperative outcome with the Osia was better using the mEV than using the EV at 65 dB SPL with the test device on the softband. The postoperative EV was predicted most accurately for patients with CHL and least accurately for patients with MHL with BC-4PTA ≥ 40 dB HL. For the test device on the softband, results tended to indicate the minimum achievable outcome, while the mEV tended to predict the realistically achievable outcome.
Conclusion: The Osia can be used for the treatment of CHL and MHL within the indication limits. The average preoperative bone conduction hearing threshold also provides an approximate estimate of the postoperative EV with the Osia, with the most accurate prediction obtained using the preoperative mEV. Prediction accuracy decreases from a BC-4PTA of ≥ 40 dB. (© 2023. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
31. The involvement of the speech production system in prediction during comprehension : an articulatory imaging investigation
- Author
-
Drake, Eleanor Katherine Elizabeth, Corley, Martin, and Schaeffler, Sonja
- Subjects
401 ,speech comprehension ,prediction ,efference ,vocal response latencies ,ultrasound data ,imaging - Abstract
This thesis investigates the effects in speech production of prediction during speech comprehension. The topic is raised by recent theoretical models of speech comprehension, which suggest a more integrated role for speech production and comprehension mechanisms than has previously been posited. The thesis is specifically concerned with the suggestion that during speech comprehension upcoming input is simulated with reference to the listener’s own speech production system by way of efference copy. Throughout this thesis the approach taken is to investigate whether representations elicited during comprehension impact speech production. The representations of interest are those generated endogenously by the listener during prediction of upcoming input. We investigate whether predictions are represented at a form level within the listener’s speech production system. We first present an overview of the relevant literature. We then present details of a picture word interference study undertaken to confirm that the item set employed elicits typical phonological effects within a conventional paradigm in which the competing representation is perceptually available. The main body of the thesis presents evidence concerning the nature of representations arising during prediction, specifically their effect on speech output. We first present evidence from picture naming vocal response latencies. We then complement and extend this with evidence from articulatory imaging, allowing an examination of pre-acoustic aspects of speech production. To investigate effects on speech production as a dynamic motor-activity we employ the Delta method, developed to quantify articulatory variability from EPG and ultrasound recordings. We apply this technique to ultrasound data acquired during mid-sagittal imaging of the tongue and extend the approach to allow us to explore the time-course of articulation during the acoustic response latency period. 
We investigate whether prediction of another's speech evokes articulatorily specified activation within the listener's speech production system. The findings presented in this thesis suggest that representations evoked as predictions during speech comprehension do affect speech motor output. However, we found no evidence to suggest that predictions are represented in an articulatorily specified manner. We discuss this conclusion with reference to models of speech production-perception that implicate efference copies in the generation of predictions during speech comprehension.
- Published
- 2017
32. Cochlear Implantation in Hearing-Impaired Elderly: Clinical Challenges and Opportunities to Optimize Outcome.
- Author
-
Illg, Angelika and Lenarz, Thomas
- Subjects
COCHLEAR implants ,OLDER people ,HEARING aids ,OLDER patients ,HEARING disorders - Abstract
Cochlear implants (CIs) generally provide very good outcomes, but speech comprehension outcomes in the elderly are more variable. Several clinical factors play an important role. The management of residual hearing, the presence of comorbidities, and especially the progression of cognitive decline seem to be the clinical parameters that most strongly determine the outcome of cochlear implantation, and they need to be discussed prospectively during the consultation process with elderly hearing-impaired patients. In this review article, strategies for dealing with these factors are discussed. Timely cochlear implantation should already be considered by hearing aid acousticians or practicing otolaryngologists and communicated to, or initiated with, the patient. This requires intensive cooperation between hearing aid acousticians and experts in the clinic. In addition, residual hearing and comorbidities in the elderly need to be considered to make realistic predictions about speech comprehension with a CI. Long-term aftercare and its different implementations should be discussed preoperatively, so that elderly people with hearing impairments, together with their relatives, feel well taken care of. Elderly patients with hearing impairments benefit most from a CI in terms of speech comprehension if there is large cochlear coverage (electrical or electric-acoustic) and the therapy is not hampered by comorbidities, especially cognitive decline. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. Comparing In-ear EOG for Eye-Movement Estimation With Eye-Tracking: Accuracy, Calibration, and Speech Comprehension.
- Author
-
Skoglund, Martin A., Andersen, Martin, Shiell, Martha M., Keidser, Gitte, Rank, Mike Lind, and Rotger-Griful, Sergi
- Subjects
EYE tracking ,ASSISTIVE listening systems ,ASSISTIVE technology ,CALIBRATION ,EYE movements ,READING intervention ,SPEECH ,READING comprehension - Abstract
This presentation details and evaluates a method for estimating the attended speaker during a two-person conversation by means of in-ear electro-oculography (EOG). Twenty-five hearing-impaired participants were fitted with molds equipped with EOG electrodes (in-ear EOG) and wore eye-tracking glasses while watching a video of two life-size people in a dialog solving a Diapix task. The dialogue was presented directionally, together with background noise in the frontal hemisphere, at 60 dB SPL. During three conditions of steering (none, in-ear EOG, conventional eye-tracking), participants' comprehension was periodically measured using multiple-choice questions. Based on eye movement detection by in-ear EOG or conventional eye-tracking, the estimated attended speaker was amplified by 6 dB. In the in-ear EOG condition, the estimate was based on one selected channel pair of electrodes out of 36 possible electrodes. A novel calibration procedure introducing three different metrics was used to select the measurement channel. The in-ear EOG attended-speaker estimates were compared to those of the eye-tracker. Across participants, the mean accuracy of in-ear EOG estimation of the attended speaker was 68%, ranging from 50 to 89%. Based on offline simulation, it was established that higher scoring metrics obtained for a channel with the calibration procedure were significantly associated with better data quality. Results showed a statistically significant improvement in comprehension of about 10% in both steering conditions relative to the no-steering condition. Comprehension in the two steering conditions was not significantly different. Further, better comprehension obtained under the in-ear EOG condition was significantly correlated with more accurate estimation of the attended speaker. In conclusion, this study shows promising results for the use of in-ear EOG for visual attention estimation, with potential applicability in hearing assistive devices. [ABSTRACT FROM AUTHOR]
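The steering logic (boost the estimated attended talker by 6 dB) can be sketched as follows. The gaze-from-EOG decision rule here, using the median polarity of a single horizontal channel, is a deliberate simplification invented for illustration; the study used a calibrated selection among 36 electrode pairs.

```python
import numpy as np

def steer_gain(eog, left_audio, right_audio, gain_db=6.0):
    """Amplify the audio channel of the estimated attended (gazed-at) talker.

    Horizontal EOG polarity is taken as a proxy for gaze direction
    (positive median -> right), mirroring the 6 dB steering in the study.
    The sign convention and thresholding are hypothetical.
    """
    g = 10 ** (gain_db / 20)             # 6 dB ~ factor of 2 in amplitude
    if np.median(eog) >= 0:              # gaze right: boost right talker
        return left_audio, right_audio * g
    return left_audio * g, right_audio   # gaze left: boost left talker
```

In a real device this decision would be updated continuously and smoothed over time rather than taken once per signal.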
- Published
- 2022
- Full Text
- View/download PDF
34. Altered Coupling Between Cerebral Blood Flow and Voxel-Mirrored Homotopic Connectivity Affects Stroke-Induced Speech Comprehension Deficits.
- Author
-
Zhang, Jie, Shang, Desheng, Ye, Jing, Ling, Yi, Zhong, Shuchang, Zhang, Shuangshuang, Zhang, Wei, Zhang, Li, Yu, Yamei, He, Fangping, Ye, Xiangming, and Luo, Benyan
- Subjects
STROKE ,CEREBRAL circulation ,INTELLIGIBILITY of speech ,SPEECH disorders ,FUNCTIONAL connectivity ,MAGNETIC resonance imaging ,REGRESSION analysis ,APHASIA ,T-test (Statistics) ,PEARSON correlation (Statistics) ,DATA analysis software ,LONGITUDINAL method ,DISEASE complications - Abstract
The neurophysiological basis of the association between interhemispheric connectivity and speech comprehension processing remains unclear. This prospective study examined regional cerebral blood flow (CBF), homotopic functional connectivity, and neurovascular coupling, and their effects on comprehension performance in post-stroke aphasia. Multimodal imaging data (including data from functional magnetic resonance imaging and arterial spin labeling imaging) of 19 patients with post-stroke aphasia and 22 healthy volunteers were collected. CBF, voxel-mirrored homotopic connectivity (VMHC), CBF-VMHC correlation, and CBF/VMHC ratio maps were calculated. Between-group comparisons were performed to identify neurovascular changes, and correlation analyses were conducted to examine their relationship with the comprehension domain. The correlation between CBF and VMHC of the global gray matter decreased in patients with post-stroke aphasia. The total speech comprehension score was significantly associated with VMHC in the peri-Wernicke area [posterior superior temporal sulcus (pSTS): r = 0.748, p = 0.001; rostroventral area 39: r = 0.641, p = 0.008]. The decreased CBF/VMHC ratio was also mainly associated with the peri-Wernicke temporoparietal areas. Additionally, a negative relationship between the mean CBF/VMHC ratio of the cingulate gyrus subregion and sentence-level comprehension was observed (r = −0.658, p = 0.006). These findings indicate the contribution of peri-Wernicke homotopic functional connectivity to speech comprehension and reveal that abnormal neurovascular coupling of the cingulate gyrus subregion may underlie comprehension deficits in patients with post-stroke aphasia. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. DIANA, a Process-Oriented Model of Human Auditory Word Recognition.
- Author
-
ten Bosch, Louis, Boves, Lou, and Ernestus, Mirjam
- Subjects
- *
WORD recognition , *TEMPORAL lobe , *SPEECH , *PSYCHOLINGUISTICS - Abstract
This article presents DIANA, a new, process-oriented model of human auditory word recognition, which takes as its input the acoustic signal and can produce as its output word identifications and lexicality decisions, as well as reaction times. This makes it possible to compare its output with human listeners' behavior in psycholinguistic experiments. DIANA differs from existing models in that it takes more of the available neurophysiological evidence on speech processing into account. For instance, DIANA accounts for the effect of ambiguity in the acoustic signal on reaction times following the Hick–Hyman law, and it interprets the acoustic signal in the form of spectro-temporal receptive fields, which are attested in the human superior temporal gyrus, instead of in the form of abstract phonological units. The model consists of three components: activation, decision, and execution. The activation and decision components are described in detail, both at the conceptual level (in the running text) and at the computational level (in the Appendices). While the activation component is independent of the listener's task, the functioning of the decision component depends on this task. The article also describes how DIANA could be improved in the future to resemble the behavior of human listeners even more closely. [ABSTRACT FROM AUTHOR]
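The Hick–Hyman law invoked here states that reaction time grows linearly with the entropy of the choice set, RT = a + b·H. A minimal sketch of how acoustic ambiguity (a larger, flatter set of activated word candidates) can lengthen predicted reaction times; the intercept and slope values are illustrative, not DIANA's fitted parameters.

```python
import math

def hick_hyman_rt(candidate_probs, a=0.2, b=0.15):
    """Hick-Hyman law: RT = a + b * H (in seconds, illustrative constants),
    where H = -sum(p * log2 p) is the entropy over activated candidates.
    """
    h = -sum(p * math.log2(p) for p in candidate_probs if p > 0)
    return a + b * h
```

An unambiguous signal (one candidate, H = 0 bits) yields the baseline RT, while four equiprobable candidates (H = 2 bits) add two slope units, which is qualitatively how ambiguity in the acoustic signal can slow responses.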
- Published
- 2022
- Full Text
- View/download PDF
36. Reduced neural tracking of speech linguistic structures in children.
- Author
-
Kong, Lingzhi, Wang, Mengying, Wu, Danni, and Lu, Lingxi
- Subjects
- *
SPEECH , *NEUROLINGUISTICS , *MAGNETOENCEPHALOGRAPHY - Abstract
The adult brain can efficiently track both lower‐level (i.e., syllable) and higher‐level (i.e., phrase) linguistic structures to comprehend speech. When children actively or passively listened to speech, we found robust neural tracking of syllabic structure but marginally significant tracking of phrasal structure. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Cochlear Implantation in Hearing-Impaired Elderly: Clinical Challenges and Opportunities to Optimize Outcome
- Author
-
Angelika Illg and Thomas Lenarz
- Subjects
cochlear implant ,elderly ,age ,cognitive decline ,comorbidities ,speech comprehension ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
A cochlear implant (CI) generally provides a very good outcome, but speech comprehension outcomes in the elderly are more variable. Several clinical factors play an important role. The management of residual hearing, the presence of comorbidities, and especially the progression of cognitive decline appear to be the clinical parameters that most strongly determine the outcome of cochlear implantation, and they need to be discussed prospectively in the consultation process with elderly hearing-impaired patients. In the context of this review article, strategies for dealing with these factors are discussed. Timely cochlear implantation should already be considered by hearing aid acousticians or practicing otolaryngologists and discussed with, or initiated for, the patient. This requires intensive cooperation between hearing aid acousticians and experts in the clinic. In addition, residual hearing and comorbidities in the elderly need to be considered to make realistic predictions about speech comprehension with a CI. Long-term aftercare and its different implementations should be discussed preoperatively, so that elderly people with hearing impairments, together with their relatives, feel well taken care of. Elderly patients with hearing impairments benefit most from a CI in terms of speech comprehension if there is large cochlear coverage (electrical or electric-acoustic) and the therapy is not hampered by comorbidities, especially cognitive decline.
- Published
- 2022
- Full Text
- View/download PDF
38. Altered Coupling Between Cerebral Blood Flow and Voxel-Mirrored Homotopic Connectivity Affects Stroke-Induced Speech Comprehension Deficits
- Author
-
Jie Zhang, Desheng Shang, Jing Ye, Yi Ling, Shuchang Zhong, Shuangshuang Zhang, Wei Zhang, Li Zhang, Yamei Yu, Fangping He, Xiangming Ye, and Benyan Luo
- Subjects
ischemic stroke ,speech comprehension ,cerebral blood flow ,arterial spin labeling ,homotopic connectivity ,neurovascular coupling ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
The neurophysiological basis of the association between interhemispheric connectivity and speech comprehension processing remains unclear. This prospective study examined regional cerebral blood flow (CBF), homotopic functional connectivity, and neurovascular coupling, and their effects on comprehension performance in post-stroke aphasia. Multimodal imaging data (including data from functional magnetic resonance imaging and arterial spin labeling imaging) of 19 patients with post-stroke aphasia and 22 healthy volunteers were collected. CBF, voxel-mirrored homotopic connectivity (VMHC), CBF-VMHC correlation, and CBF/VMHC ratio maps were calculated. Between-group comparisons were performed to identify neurovascular changes, and correlation analyses were conducted to examine their relationship with the comprehension domain. The correlation between CBF and VMHC of the global gray matter decreased in patients with post-stroke aphasia. The total speech comprehension score was significantly associated with VMHC in the peri-Wernicke area [posterior superior temporal sulcus (pSTS): r = 0.748, p = 0.001; rostroventral area 39: r = 0.641, p = 0.008]. The decreased CBF/VMHC ratio was also mainly associated with the peri-Wernicke temporoparietal areas. Additionally, a negative relationship between the mean CBF/VMHC ratio of the cingulate gyrus subregion and sentence-level comprehension was observed (r = −0.658, p = 0.006). These findings indicate the contribution of peri-Wernicke homotopic functional connectivity to speech comprehension and reveal that abnormal neurovascular coupling of the cingulate gyrus subregion may underlie comprehension deficits in patients with post-stroke aphasia.
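The record above describes two neurovascular-coupling measures: an across-voxel CBF-VMHC correlation and a voxelwise CBF/VMHC ratio map. A minimal Python sketch of how such measures can be computed (array sizes, value ranges, and variable names are illustrative only, not the study's pipeline):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical voxelwise maps for one region (illustrative values)
cbf = rng.uniform(30.0, 60.0, size=200)   # perfusion, ml/100 g/min
vmhc = rng.uniform(0.1, 0.9, size=200)    # homotopic connectivity (z)

# Two coupling proxies of the kind used in the study:
# (1) across-voxel CBF-VMHC correlation, (2) voxelwise CBF/VMHC ratio
r, p = stats.pearsonr(cbf, vmhc)
ratio_map = cbf / vmhc
mean_ratio = ratio_map.mean()
```

In the study, subject-level summaries of such maps (e.g., the mean regional CBF/VMHC ratio) were then correlated with behavioral comprehension scores across participants.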
- Published
- 2022
- Full Text
- View/download PDF
39. Comparing In-ear EOG for Eye-Movement Estimation With Eye-Tracking: Accuracy, Calibration, and Speech Comprehension
- Author
-
Martin A. Skoglund, Martin Andersen, Martha M. Shiell, Gitte Keidser, Mike Lind Rank, and Sergi Rotger-Griful
- Subjects
EOG ,audio-visual ,speech comprehension ,eye-tracking ,in-ear EEG ,hearing impairment ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
This presentation details and evaluates a method for estimating the attended speaker during a two-person conversation by means of in-ear electro-oculography (EOG). Twenty-five hearing-impaired participants were fitted with molds equipped with EOG electrodes (in-ear EOG) and wore eye-tracking glasses while watching a video of two life-size people in a dialog solving a Diapix task. The dialogue was presented directionally, together with background noise in the frontal hemisphere, at 60 dB SPL. During three conditions of steering (none, in-ear EOG, conventional eye-tracking), participants' comprehension was periodically measured using multiple-choice questions. Based on eye movement detection by in-ear EOG or conventional eye-tracking, the estimated attended speaker was amplified by 6 dB. In the in-ear EOG condition, the estimate was based on one selected channel pair of electrodes out of 36 possible electrodes. A novel calibration procedure introducing three different metrics was used to select the measurement channel. The in-ear EOG attended-speaker estimates were compared to those of the eye-tracker. Across participants, the mean accuracy of in-ear EOG estimation of the attended speaker was 68%, ranging from 50 to 89%. Based on offline simulation, it was established that higher-scoring metrics obtained for a channel with the calibration procedure were significantly associated with better data quality. Results showed a statistically significant improvement in comprehension of about 10% in both steering conditions relative to the no-steering condition. Comprehension in the two steering conditions was not significantly different. Further, better comprehension obtained under the in-ear EOG condition was significantly correlated with more accurate estimation of the attended speaker. In conclusion, this study shows promising results in the use of in-ear EOG for visual attention estimation with potential for applicability in hearing assistive devices.
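The 6 dB amplification of the estimated attended speaker amounts to roughly doubling its amplitude before mixing. A simplified sketch of this steering step (the function name, signature, and placeholder sine "talkers" are assumptions; the study operated on real speech streams and derived the attended estimate from EOG or eye-tracking):

```python
import numpy as np

def steer_attended(attended, ignored, gain_db=6.0):
    """Boost the stream estimated as attended by `gain_db`, then mix.

    Hypothetical helper: which stream counts as attended is assumed to
    come from an upstream EOG/eye-tracking estimator.
    """
    gain = 10.0 ** (gain_db / 20.0)  # 6 dB is about a factor 2 in amplitude
    return gain * attended + ignored

fs = 16000
t = np.arange(fs) / fs
talker_a = np.sin(2 * np.pi * 220 * t)  # placeholder for speaker A's speech
talker_b = np.sin(2 * np.pi * 330 * t)  # placeholder for speaker B's speech

mix = steer_attended(talker_a, talker_b)  # speaker A estimated as attended
```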
- Published
- 2022
- Full Text
- View/download PDF
40. Domain-general cognitive motivation: evidence from economic decision-making
- Author
-
Jennifer L. Crawford, Sarah A. Eisenstein, Jonathan E. Peelle, and Todd S. Braver
- Subjects
Cognitive motivation ,Listening effort ,Working memory ,Speech comprehension ,Consciousness. Cognition ,BF309-499 - Abstract
Abstract Stable individual differences in cognitive motivation (i.e., the tendency to engage in and enjoy effortful cognitive activities) have been documented with self-report measures, yet convergent support for a trait-level construct is still lacking. In the present study, we use an innovative decision-making paradigm (COG-ED) to quantify the costs of cognitive effort, a metric of cognitive motivation, across two distinct cognitive domains (working memory and speech comprehension). We hypothesize that cognitive motivation operates similarly within individuals, regardless of domain. Specifically, we test whether individual differences in effort costs are stable across domains, even after controlling for other potential sources of shared individual variation. Conversely, we evaluate whether the costs of cognitive effort across domains may be better explained in terms of other relevant cognitive and personality-related constructs, such as working memory capacity or reward sensitivity.
- Published
- 2021
- Full Text
- View/download PDF
41. Preparatory delta phase response is correlated with naturalistic speech comprehension performance.
- Author
-
Li, Jiawei, Hong, Bo, Nolte, Guido, Engel, Andreas K., and Zhang, Dan
- Abstract
While human speech comprehension is thought to be an active process that involves top-down predictions, it remains unclear how predictive information is used to prepare for the processing of upcoming speech information. We aimed to identify the neural signatures of the preparatory processing of upcoming speech. Participants selectively attended to one of two competing naturalistic, narrative speech streams, and a temporal response function (TRF) method was applied to derive event-related-like neural responses from electroencephalographic data. The phase responses to the attended speech at the delta band (1–4 Hz) were correlated with the comprehension performance of individual participants, with a latency of −200 to 0 ms relative to the onset of speech amplitude envelope fluctuations over the fronto-central and left-lateralized parietal electrodes. The phase responses to the attended speech at the alpha band also correlated with comprehension performance but with a latency of 650–980 ms post-onset over the fronto-central electrodes. Distinct neural signatures were found for the attentional modulation, taking the form of TRF-based amplitude responses at a latency of 240–320 ms post-onset over the left-lateralized fronto-central and occipital electrodes. Our findings reveal how the brain prepares to process upcoming speech in a continuous, naturalistic speech context. [ABSTRACT FROM AUTHOR]
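The TRF method mentioned above is commonly implemented as a lagged ridge regression from a stimulus feature, such as the speech amplitude envelope, to the neural response; negative lags correspond to the preparatory, pre-onset window analyzed here. A minimal sketch with simulated data (not the authors' code; the recovered weights approximate the true kernel):

```python
import numpy as np

def trf_ridge(stimulus, response, lags, lam=1.0):
    """Estimate a temporal response function by ridge regression.

    Builds a design matrix of the stimulus shifted by each sample lag
    (negative lags capture pre-onset, preparatory responses) and solves
    the closed-form ridge problem w = (X'X + lam*I)^(-1) X'y.
    """
    n = len(stimulus)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:n + lag, j] = stimulus[-lag:]
    return np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ response)

rng = np.random.default_rng(1)
env = rng.standard_normal(2000)                      # toy "speech envelope"
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])       # toy neural kernel
eeg = np.convolve(env, true_trf, mode="full")[:2000] + 0.1 * rng.standard_normal(2000)

w = trf_ridge(env, eeg, lags=range(0, 5))            # w recovers true_trf
```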
- Published
- 2022
- Full Text
- View/download PDF
42. Domain-general cognitive motivation: Evidence from economic decision-making – Final Registered Report.
- Author
-
Crawford, Jennifer L., Eisenstein, Sarah A., Peelle, Jonathan E., and Braver, Todd S.
- Subjects
REWARD (Psychology) ,MOTIVATION (Psychology) ,COGNITION ,SHORT-term memory ,DECISION making - Abstract
Stable individual differences in cognitive motivation (i.e., the tendency to engage in and enjoy effortful cognitive activities) have been documented with self-report measures, yet convergent support for a trait-level construct is still lacking. In the present study, we used an innovative decision-making paradigm (COG-ED) to quantify the costs of cognitive effort, a metric of cognitive motivation, across two distinct cognitive domains: working memory (an N-back task) and speech comprehension (understanding spoken sentences in background noise). We hypothesized that cognitive motivation operates similarly within individuals, regardless of domain. Specifically, in 104 adults aged 18–40 years, we tested whether individual differences in effort costs are stable across domains, even after controlling for other potential sources of shared individual variation. Conversely, we evaluated whether the costs of cognitive effort across domains may be better explained in terms of other relevant cognitive and personality-related constructs, such as working memory capacity or reward sensitivity. We confirmed a reliable association among effort costs in both domains, even when these other sources of individual variation, as well as task load, are statistically controlled. Taken together, these results add support for trait-level variation in cognitive motivation impacting effort-based decision making across multiple domains. [ABSTRACT FROM AUTHOR]
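The COG-ED paradigm used in this line of work quantifies effort costs from choices between a high-effort, high-reward option and a low-effort option whose offer is titrated toward the point of subjective equality. A toy staircase illustrating the idea (the bisection rule, dollar amounts, and the simulated chooser are assumptions for illustration, not the published procedure):

```python
def indifference_point(choose_high_effort, hi_reward=2.0, n_steps=6):
    """Titrate a low-effort offer toward the point of subjective equality.

    `choose_high_effort(low_offer)` returns True when the (simulated or
    real) participant picks the high-effort, high-reward option over a
    low-effort option worth `low_offer`. Bisection is one simple way to
    home in on the indifference point.
    """
    lo, hi = 0.0, hi_reward
    for _ in range(n_steps):
        offer = (lo + hi) / 2.0
        if choose_high_effort(offer):
            lo = offer   # low offer still too small: raise it
        else:
            hi = offer   # low offer already preferred: lower it
    return (lo + hi) / 2.0

# Simulated participant who values the high-effort task at $1.25
ip = indifference_point(lambda offer: offer < 1.25)
effort_cost = 2.0 - ip   # discounting of the $2 high-effort reward
```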
- Published
- 2022
- Full Text
- View/download PDF
43. Audibility emphasis of low-level sounds improves consonant identification while preserving vowel identification for cochlear implant users.
- Author
-
Goldsworthy, Raymond L., Bissmeyer, Susan R.S., and Swaminathan, Jayaganesh
- Subjects
- *
SPEECH perception , *COCHLEAR implants , *CONSONANTS , *VOWELS , *ORAL communication , *SIGNAL-to-noise ratio - Abstract
• Audibility emphasis of low-level sounds improves consonant identification in noise for cochlear implant users without affecting vowel identification. • Audibility emphasis can be used to improve consonant identification for cochlear implant users in telecommunications scenarios such as video calls, for which the target speech can be processed separately from the background noise. Consonant perception is challenging for listeners with hearing loss, and transmission of speech over communication channels further degrades the acoustics of consonants. Part of the challenge arises from the short-term, low-energy spectro-temporal profile of consonants (for example, relative to vowels). We hypothesized that an audibility-enhancement approach aimed at boosting the energy of low-level sounds would improve identification of consonants without diminishing vowel identification. We tested this hypothesis with 11 cochlear implant users, who completed an online listening experiment remotely using the media device and implant settings that they most commonly use when making video calls. Loudness growth and detection thresholds were measured for pure-tone stimuli to characterize the relative loudness of test conditions. Consonant and vowel identification were measured in quiet and in speech-shaped noise at progressively difficult signal-to-noise ratios (+12, +6, 0, -6 dB SNR). These conditions were tested with and without an audibility-emphasis algorithm designed to enhance consonant identification at the source. The results show that the algorithm improves consonant identification in noise for cochlear implant users without diminishing vowel identification. We conclude that low-level emphasis of audio can improve speech recognition for cochlear implant users in the case of video calls or other telecommunications where the target speech can be preprocessed separately from environmental noise. [ABSTRACT FROM AUTHOR]
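Speech-in-noise conditions like the +12 to -6 dB SNR series above are typically constructed by scaling the noise so that the speech-to-noise power ratio hits the target value. A generic sketch (random placeholder signals; not the authors' stimulus code):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then mix.

    SNR in dB is 10*log10(P_speech / P_noise), so the required noise power
    is P_speech / 10**(snr_db/10).
    """
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))
    noise_scaled = noise * np.sqrt(target_p_noise / p_noise)
    return speech + noise_scaled, noise_scaled

rng = np.random.default_rng(2)
speech = rng.standard_normal(8000)   # placeholder for a speech token
noise = rng.standard_normal(8000)    # placeholder for speech-shaped noise
mixed, n_scaled = mix_at_snr(speech, noise, snr_db=-6.0)
```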
- Published
- 2022
- Full Text
- View/download PDF
44. Effects of contextual support on preschoolers’ accented speech comprehension
- Author
-
Creel, Sarah C, Rojo, Dolly P, and Paullada, Angelica Nicolle
- Subjects
Biological Psychology ,Cognitive and Computational Psychology ,Psychology ,Clinical Research ,Behavioral and Social Science ,Basic Behavioral and Social Science ,Pediatric ,Rehabilitation ,California ,Child ,Preschool ,Comprehension ,Female ,Humans ,Linguistics ,Male ,Speech ,Speech Perception ,Accent ,Accented speech ,Speech comprehension ,Word recognition ,Adverse listening conditions ,Eye tracking ,Cognitive Sciences ,Experimental Psychology ,Applied and developmental psychology ,Biological psychology ,Social and personality psychology - Abstract
Young children often hear speech in unfamiliar accents, but relatively little research characterizes their comprehension capacity. The current study tested preschoolers' comprehension of familiar-accented versus unfamiliar-accented speech with varying levels of contextual support from sentence frames (full sentences vs. isolated words) and from visual context (four salient pictured alternatives vs. the absence of salient visual referents). The familiar accent advantage was more robust when visual context was absent, suggesting that previous findings of good accent comprehension in infants and young children may result from ceiling effects in easier tasks (e.g., picture fixation, picture selection) relative to the more difficult tasks often used with older children and adults. In contrast to prior work on mispronunciations, where most errors were novel object responses, children in the current study did not select novel object referents above chance levels. This suggests that some property of accented speech may dissuade children from inferring that an unrecognized familiar-but-accented word has a novel referent. Finally, children showed detectable accent processing difficulty despite presumed incidental community exposure. Results suggest that preschoolers' accented speech comprehension is still developing, consistent with theories of protracted development of speech processing.
- Published
- 2016
45. The Neural Correlates of Spoken Sentence Comprehension in the Chinese Language: An fMRI Study
- Author
-
Liu H and Chen SHA
- Subjects
chinese ,character-based languages ,auditory semantic network ,spoken sentences ,speech comprehension ,Psychology ,BF1-990 ,Industrial psychology ,HF5548.7-5548.85 - Abstract
Hengshuang Liu (1,2) and SH Annabel Chen (2–5). Affiliations: (1) Bilingual Cognition and Development Lab, National Key Research Center for Linguistics and Applied Linguistics, Guangdong University of Foreign Studies, Guangzhou, People’s Republic of China; (2) Psychology, School of Social Sciences (SSS), Nanyang Technological University, Singapore; (3) Centre for Research and Development in Learning (CRADLE), Nanyang Technological University, Singapore; (4) Lee Kong Chian School of Medicine (LKCMedicine), Nanyang Technological University, Singapore; (5) National Institute of Education, Nanyang Technological University, Singapore. Correspondence: SH Annabel Chen, email annabelchen@ntu.edu.sg.
Purpose: Everyday social communication emphasizes speech comprehension. To date, most neurobiological models regarding auditory semantic processing are based on alphabetic languages, where character-based languages such as Chinese are largely underrepresented. Thus, the current study attempted to investigate the neural network of speech comprehension specifically for the Chinese language.
Methods: Twenty-two native Mandarin Chinese speakers were imaged while performing a passive listening task of forward and backward sentences. Sentences were used as task stimuli, as sentences, compared with words, are more frequently utilized in daily speech comprehension.
Results: Our results suggested that spoken Chinese sentence comprehension may involve a neural network comprising the left middle temporal gyrus, the left anterior temporal lobe, and the bilateral posterior superior temporal lobes. The occipitotemporal visual cortex was not found to be significantly involved with the sentence-level network of spoken Chinese comprehension, as the bottom-up visualization process from homophones to visual forms may be less needed due to the availability of top-down contextual controls in sentence processing. In addition, no significant functional connectivity was observed, likely obscured by the low cognitive demand of the task conditions. Limitations and future directions were discussed.
Conclusion: The current Chinese network seems to largely resemble the auditory semantic network for alphabetic languages but with features specific to Chinese. While the left inferior parietal lobule in the dorsal stream may have little involvement in the listening comprehension of Chinese sentences, the ventral neural stream via the temporal cortex appears to be more emphasized. The current findings deepen our understanding of how the semantic nature of spoken Chinese sentences influences the neural mechanism engaged.
Keywords: Chinese, character-based languages, auditory semantic network, spoken sentences, speech comprehension
- Published
- 2020
46. Lautliche Wechselwirkung im Berndeutschen
- Author
-
Olena Hawrysch
- Subjects
berndeutsch ,zürichdeutsch ,tempobeschleunigung ,vokalreduktion ,konsonanten-abschwächung ,bernese german dialect ,zurich german dialect ,speech comprehension ,vowel reduction ,consonant weakening ,Philology. Linguistics ,P1-1091 ,German literature ,PT1-4897 - Abstract
The article is dedicated to studying assimilative processes in Bernese German phonetics. It points out the dynamic character of the modern Bernese German sound system and shows that the realization of Bernese German vowels and consonants is increasingly influenced both by other Swiss German dialects, especially Zurich German, and by the Standard German language. The Bernese German vowel and consonant system develops in accordance with universal principles such as the saving of pronunciation effort and the acceleration of speech tempo, and is thus subject to modifications such as sound weakening, co-articulation, and the redistribution of syllable boundaries. At the same time, there remain specific characteristics of Bernese sound realization that are connected to the pronunciation habits of speakers in the Bernese German dialect areas. These include the complete devoicing of semi-voiced consonants, the occurrence of transitional consonants in intervocalic position at syllable and word boundaries, and the high functional load of the vocalic allophone of the phoneme /l/.
- Published
- 2020
- Full Text
- View/download PDF
47. Mishearing as a Side Effect of Rational Language Comprehension in Noise.
- Author
-
Van Os, Marjolein, Kray, Jutta, and Demberg, Vera
- Subjects
OLDER people ,COMPUTATIONAL linguistics ,COGNITIVE ability ,SPEECH processing systems ,NOISE - Abstract
Language comprehension in noise can sometimes lead to mishearing, due to the noise disrupting the speech signal. Some of the difficulties in dealing with the noisy signal can be alleviated by drawing on the context – indeed, top-down predictability has been shown to facilitate speech comprehension in noise. Previous studies have furthermore shown that strong reliance on top-down predictions can lead to increased rates of mishearing, especially in older adults, which has been attributed to general deficits in cognitive control in older adults. We here propose that the observed mishearing may be a simple consequence of rational language processing in noise. It should not be related to failure on the part of the older comprehenders, but instead would be predicted by rational processing accounts. To test this hypothesis, we extend earlier studies by running an online listening experiment with younger and older adults, carefully controlling the target and direct competitor in our stimuli. We show that mishearing is directly related to the perceptibility of the signal. We furthermore add an analysis of wrong responses, which shows that the results are at odds with the idea that participants rely overly strongly on context in this task, as most false answers are indeed close to the speech signal, and not to the semantics of the context. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
48. Mishearing as a Side Effect of Rational Language Comprehension in Noise
- Author
-
Marjolein Van Os, Jutta Kray, and Vera Demberg
- Subjects
speech comprehension ,background noise ,mishearing ,false hearing ,predictive context ,aging ,Psychology ,BF1-990 - Abstract
Language comprehension in noise can sometimes lead to mishearing, due to the noise disrupting the speech signal. Some of the difficulties in dealing with the noisy signal can be alleviated by drawing on the context – indeed, top-down predictability has been shown to facilitate speech comprehension in noise. Previous studies have furthermore shown that strong reliance on top-down predictions can lead to increased rates of mishearing, especially in older adults, which has been attributed to general deficits in cognitive control in older adults. We here propose that the observed mishearing may be a simple consequence of rational language processing in noise. It should not be related to failure on the part of the older comprehenders, but instead would be predicted by rational processing accounts. To test this hypothesis, we extend earlier studies by running an online listening experiment with younger and older adults, carefully controlling the target and direct competitor in our stimuli. We show that mishearing is directly related to the perceptibility of the signal. We furthermore add an analysis of wrong responses, which shows that the results are at odds with the idea that participants rely overly strongly on context in this task, as most false answers are indeed close to the speech signal, and not to the semantics of the context.
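The rational-processing account described above can be stated in Bayesian terms: the perceived word maximizes P(word | acoustics, context) ∝ P(acoustics | word) × P(word | context), so a contextually favored competitor can win whenever noise makes the acoustics ambiguous. A toy numeric illustration (the word pair and all probabilities are made up):

```python
import numpy as np

def posterior(likelihood, prior):
    """Combine acoustic likelihoods with contextual priors via Bayes' rule."""
    post = likelihood * prior
    return post / post.sum()

words = ["bear", "pear"]               # target vs. a close acoustic competitor
acoustic = np.array([0.40, 0.35])      # noise leaves the two nearly confusable
context = np.array([0.30, 0.70])       # sentence context favors the competitor

p = posterior(acoustic, context)       # the competitor wins, rationally
```

On this account, a "misheard" response is not a processing failure but the most probable interpretation given a degraded signal and an informative context.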
- Published
- 2021
- Full Text
- View/download PDF
49. Opóźniony rozwój mowy a adaptacja do środowiska przedszkolnego [Delayed speech development and adaptation to the preschool environment].
- Author
-
Tarkowski, Zbigniew and Wójcik, Monika
- Abstract
Copyright of Przegląd Pedagogiczny is the property of Kazimierza Wielki University in Bydgoszcz and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2021
- Full Text
- View/download PDF
50. Domain-general cognitive motivation: evidence from economic decision-making.
- Author
-
Crawford, Jennifer L., Eisenstein, Sarah A., Peelle, Jonathan E., and Braver, Todd S.
- Subjects
REWARD (Psychology) ,COGNITION ,DECISION making ,MOTIVATION (Psychology) ,SHORT-term memory - Abstract
Stable individual differences in cognitive motivation (i.e., the tendency to engage in and enjoy effortful cognitive activities) have been documented with self-report measures, yet convergent support for a trait-level construct is still lacking. In the present study, we use an innovative decision-making paradigm (COG-ED) to quantify the costs of cognitive effort, a metric of cognitive motivation, across two distinct cognitive domains (working memory and speech comprehension). We hypothesize that cognitive motivation operates similarly within individuals, regardless of domain. Specifically, we test whether individual differences in effort costs are stable across domains, even after controlling for other potential sources of shared individual variation. Conversely, we evaluate whether the costs of cognitive effort across domains may be better explained in terms of other relevant cognitive and personality-related constructs, such as working memory capacity or reward sensitivity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library