48 results on '"hearing impairment"'
Search Results
2. The development of speechreading skills in Chinese students with hearing impairment.
- Author
-
Fen Zhang, Jianghua Lei, Huina Gong, Hui Wu, and Liang Chen
- Subjects
LIPREADING, CHINESE students, HEARING disorders, AGE groups, CHINESE language - Abstract
The developmental trajectory of speechreading skills is poorly understood, and existing research has revealed rather inconsistent results. In this study, 209 Chinese students with hearing impairment between 7 and 20 years old were asked to complete the Chinese Speechreading Test targeting three linguistic levels (i.e., words, phrases, and sentences). Both response time and accuracy data were collected and analyzed. Results revealed (i) no developmental change in speechreading accuracy between ages 7 and 14, after which the accuracy rate either stagnates or drops; (ii) no significant developmental pattern in speed of speechreading across all ages. Results also showed that across all age groups, speechreading accuracy was higher for phrases than words and sentences, and overall levels of speechreading speed fell for phrases, words, and sentences. These findings suggest that the development of speechreading in Chinese is not a continuous, linear process. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
3. Auditory Perceptual Exercises in Adults Adapting to the Use of Hearing Aids.
- Author
-
Karah, Hanin and Karawani, Hanin
- Subjects
HEARING aids, SPEECH perception, SENSORINEURAL hearing loss, ADULTS, OLDER people - Abstract
Older adults with age-related hearing loss often use hearing aids (HAs) to compensate. However, certain challenges in speech perception, especially in noise, still exist despite today's HA technology. The current study presents an evaluation of a home-based auditory exercises program that can be used during the adaptation process for HA use. The home-based program was developed at a time when telemedicine became prominent in part due to the COVID-19 pandemic. The study included 53 older adults with age-related symmetrical sensorineural hearing loss. They were divided into three groups depending on their experience using HAs. Group 1: Experienced users (participants who used bilateral HAs for at least 2 years). Group 2: New users (participants who were fitted with bilateral HAs for the first time). Group 3: Non-users. These three groups underwent auditory exercises for 3 weeks. The auditory tasks included auditory detection, auditory discrimination, and auditory identification, as well as comprehension with basic (syllables) and more complex (sentences) stimuli, presented in quiet and in noisy listening conditions. All participants completed self-assessment questionnaires before and after the auditory exercises program and underwent a cognitive test at the end. Self-assessed improvements in hearing ability were observed across the HA user groups, with significant changes described by new users. Overall, speech perception in noise was poorer than in quiet. Speech perception accuracy was poorer in the non-users group compared to the users in all tasks. In sessions where stimuli were presented in quiet, similar performance was observed among new and experienced users. New users performed significantly better than non-users in all speech-in-noise tasks; however, compared to the experienced users, performance differences depended on task difficulty. The findings indicate that HA users, even new users, had better perceptual performance than their peers who did not receive hearing aids. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Auditory Perceptual Exercises in Adults Adapting to the Use of Hearing Aids
- Author
-
Hanin Karah and Hanin Karawani
- Subjects
hearing aids, age-related hearing loss (ARHL), speech in noise, aging, speech perception, hearing impairment, Psychology, BF1-990 - Abstract
Older adults with age-related hearing loss often use hearing aids (HAs) to compensate. However, certain challenges in speech perception, especially in noise, still exist despite today's HA technology. The current study presents an evaluation of a home-based auditory exercises program that can be used during the adaptation process for HA use. The home-based program was developed at a time when telemedicine became prominent in part due to the COVID-19 pandemic. The study included 53 older adults with age-related symmetrical sensorineural hearing loss. They were divided into three groups depending on their experience using HAs. Group 1: Experienced users (participants who used bilateral HAs for at least 2 years). Group 2: New users (participants who were fitted with bilateral HAs for the first time). Group 3: Non-users. These three groups underwent auditory exercises for 3 weeks. The auditory tasks included auditory detection, auditory discrimination, and auditory identification, as well as comprehension with basic (syllables) and more complex (sentences) stimuli, presented in quiet and in noisy listening conditions. All participants completed self-assessment questionnaires before and after the auditory exercises program and underwent a cognitive test at the end. Self-assessed improvements in hearing ability were observed across the HA user groups, with significant changes described by new users. Overall, speech perception in noise was poorer than in quiet. Speech perception accuracy was poorer in the non-users group compared to the users in all tasks. In sessions where stimuli were presented in quiet, similar performance was observed among new and experienced users. New users performed significantly better than non-users in all speech-in-noise tasks; however, compared to the experienced users, performance differences depended on task difficulty. The findings indicate that HA users, even new users, had better perceptual performance than their peers who did not receive hearing aids.
- Published
- 2022
- Full Text
- View/download PDF
5. Toward an Individual Binaural Loudness Model for Hearing Aid Fitting and Development
- Author
-
Iko Pieper, Manfred Mauermann, Birger Kollmeier, and Stephan D. Ewert
- Subjects
loudness summation, hearing aid, hearing impairment, binaural inhibition, binaural summation, binaural loudness summation, Psychology, BF1-990 - Abstract
The individual loudness perception of a patient plays an important role in hearing aid satisfaction and use in daily life. Hearing aid fitting and development might benefit from individualized loudness models (ILMs), enabling better adaptation of the processing to individual needs. The central question is whether additional parameters are required for ILMs beyond non-linear cochlear gain loss and linear attenuation common to existing loudness models for the hearing impaired (HI). Here, loudness perception in eight normal hearing (NH) and eight HI listeners was measured in conditions ranging from monaural narrowband to binaural broadband, to systematically assess spectral and binaural loudness summation and their interdependence. A binaural summation stage was devised with empirical monaural loudness judgments serving as input. While NH showed binaural inhibition in line with the literature, binaural summation and its inter-subject variability were increased in HI, indicating the necessity for individualized binaural summation. Toward ILMs, a recent monaural loudness model was extended with the suggested binaural stage, and the number and type of additional parameters required to describe and to predict individual loudness were assessed. In addition to one parameter for the individual amount of binaural summation, a bandwidth-dependent monaural parameter was required to successfully account for individual spectral summation.
- Published
- 2021
- Full Text
- View/download PDF
6. Toward an Individual Binaural Loudness Model for Hearing Aid Fitting and Development.
- Author
-
Pieper, Iko, Mauermann, Manfred, Kollmeier, Birger, and Ewert, Stephan D.
- Subjects
HEARING aid fitting, LOUDNESS, PATIENTS' attitudes, HEARING aids, HEARING impaired - Abstract
The individual loudness perception of a patient plays an important role in hearing aid satisfaction and use in daily life. Hearing aid fitting and development might benefit from individualized loudness models (ILMs), enabling better adaptation of the processing to individual needs. The central question is whether additional parameters are required for ILMs beyond non-linear cochlear gain loss and linear attenuation common to existing loudness models for the hearing impaired (HI). Here, loudness perception in eight normal hearing (NH) and eight HI listeners was measured in conditions ranging from monaural narrowband to binaural broadband, to systematically assess spectral and binaural loudness summation and their interdependence. A binaural summation stage was devised with empirical monaural loudness judgments serving as input. While NH showed binaural inhibition in line with the literature, binaural summation and its inter-subject variability were increased in HI, indicating the necessity for individualized binaural summation. Toward ILMs, a recent monaural loudness model was extended with the suggested binaural stage, and the number and type of additional parameters required to describe and to predict individual loudness were assessed. In addition to one parameter for the individual amount of binaural summation, a bandwidth-dependent monaural parameter was required to successfully account for individual spectral summation. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Tracking Musical Voices in Bach's The Art of the Fugue: Timbral Heterogeneity Differentially Affects Younger Normal-Hearing Listeners and Older Hearing-Aid Users
- Author
-
Kai Siedenburg, Kirsten Goldmann, and Steven van de Par
- Subjects
music perception, hearing impairment, auditory scene analysis, voice leading, timbre, Psychology, BF1-990 - Abstract
Auditory scene analysis is an elementary aspect of music perception, yet little research has scrutinized auditory scene analysis under realistic musical conditions with diverse samples of listeners. This study probed the ability of younger normal-hearing listeners and older hearing-aid users in tracking individual musical voices or lines in JS Bach's The Art of the Fugue. Five-second excerpts with homogeneous or heterogeneous instrumentation of 2–4 musical voices were presented from spatially separated loudspeakers and preceded by a short cue for signaling the target voice. Listeners tracked the cued voice and detected whether an amplitude modulation was imposed on the cued voice or a distractor voice. Results indicated superior performance of young normal-hearing listeners compared to older hearing-aid users. Performance was generally better in conditions with fewer voices. For young normal-hearing listeners, there was an interaction between the number of voices and the instrumentation: performance degraded less drastically with an increase in the number of voices for timbrally heterogeneous mixtures compared to homogeneous mixtures. Older hearing-aid users generally showed smaller effects of the number of voices and instrumentation, but no interaction between the two factors. Moreover, tracking performance of older hearing-aid users did not differ when these participants did or did not wear hearing aids. These results shed light on the role of timbral differentiation in musical scene analysis and suggest reduced musical scene analysis abilities of older hearing-impaired listeners in a realistic musical scenario.
- Published
- 2021
- Full Text
- View/download PDF
8. Tracking Musical Voices in Bach's The Art of the Fugue: Timbral Heterogeneity Differentially Affects Younger Normal-Hearing Listeners and Older Hearing-Aid Users.
- Author
-
Siedenburg, Kai, Goldmann, Kirsten, and van de Par, Steven
- Subjects
AUDITORY scene analysis, MUSICAL analysis, HEARING aids, CANONS, fugues, etc., AMPLITUDE modulation - Abstract
Auditory scene analysis is an elementary aspect of music perception, yet little research has scrutinized auditory scene analysis under realistic musical conditions with diverse samples of listeners. This study probed the ability of younger normal-hearing listeners and older hearing-aid users in tracking individual musical voices or lines in JS Bach's The Art of the Fugue. Five-second excerpts with homogeneous or heterogeneous instrumentation of 2–4 musical voices were presented from spatially separated loudspeakers and preceded by a short cue for signaling the target voice. Listeners tracked the cued voice and detected whether an amplitude modulation was imposed on the cued voice or a distractor voice. Results indicated superior performance of young normal-hearing listeners compared to older hearing-aid users. Performance was generally better in conditions with fewer voices. For young normal-hearing listeners, there was an interaction between the number of voices and the instrumentation: performance degraded less drastically with an increase in the number of voices for timbrally heterogeneous mixtures compared to homogeneous mixtures. Older hearing-aid users generally showed smaller effects of the number of voices and instrumentation, but no interaction between the two factors. Moreover, tracking performance of older hearing-aid users did not differ when these participants did or did not wear hearing aids. These results shed light on the role of timbral differentiation in musical scene analysis and suggest reduced musical scene analysis abilities of older hearing-impaired listeners in a realistic musical scenario. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. Combined Vision and Hearing Difficulties Results in Higher Levels of Depression and Chronic Anxiety: Data From a Large Sample of Spanish Adults
- Author
-
Shahina Pardhan, Lee Smith, Rupert Bourne, Adrian Davis, Nicolas Leveziel, Louis Jacob, Ai Koyanagi, and Guillermo F. López-Sánchez
- Subjects
vision impairment, hearing impairment, depression, anxiety, sensory impairment, Psychology, BF1-990 - Abstract
Objective: Individually, vision and hearing impairments have been linked to higher levels of anxiety and depression. We investigated the effect of dual sensory impairment (difficulty seeing and hearing) in a large representative sample of Spanish adults. Methods: Data from a total of 23,089 adults (age range: 15–103 years, 45.9% men) from the Spanish National Health Survey 2017 were analyzed. Self-reported difficulty of seeing and hearing (exposures), and depression and chronic anxiety (outcomes) were analyzed. Multivariable logistic regression was assessed for difficulty with vision alone, hearing alone and with difficulty with both, adjusting for gender, age, marital status, living as a couple, education, smoking, alcohol consumption, BMI, physical activity, use of glasses/contact lenses, and hearing aid. Results: Visual difficulty, hearing difficulty, and dual difficulties were all associated with significantly higher odds for depression (ORs 2.367, 2.098, and 3.852, respectively) and for chronic anxiety (ORs 1.983, 1.942, and 3.385, respectively). Dual sensory difficulty was associated with higher odds ratios for depression and anxiety when compared to either impairment alone. Conclusion: Dual sensory difficulty is associated with significantly higher odds of anxiety and depression when compared to either vision or hearing difficulty alone. Appropriate interventions are needed to address any reversible causes of vision and hearing as well as anxiety and depression in people in these specific groups.
- Published
- 2021
- Full Text
- View/download PDF
10. Combined Vision and Hearing Difficulties Results in Higher Levels of Depression and Chronic Anxiety: Data From a Large Sample of Spanish Adults.
- Author
-
Pardhan, Shahina, Smith, Lee, Bourne, Rupert, Davis, Adrian, Leveziel, Nicolas, Jacob, Louis, Koyanagi, Ai, and López-Sánchez, Guillermo F.
- Subjects
UNHEALTHY lifestyles, SPANIARDS, ANXIETY, HEARING disorders, MENTAL depression, HEARING - Abstract
Objective: Individually, vision and hearing impairments have been linked to higher levels of anxiety and depression. We investigated the effect of dual sensory impairment (difficulty seeing and hearing) in a large representative sample of Spanish adults. Methods: Data from a total of 23,089 adults (age range: 15–103 years, 45.9% men) from the Spanish National Health Survey 2017 were analyzed. Self-reported difficulty of seeing and hearing (exposures), and depression and chronic anxiety (outcomes) were analyzed. Multivariable logistic regression was assessed for difficulty with vision alone, hearing alone and with difficulty with both, adjusting for gender, age, marital status, living as a couple, education, smoking, alcohol consumption, BMI, physical activity, use of glasses/contact lenses, and hearing aid. Results: Visual difficulty, hearing difficulty, and dual difficulties were all associated with significantly higher odds for depression (ORs 2.367, 2.098, and 3.852, respectively) and for chronic anxiety (ORs 1.983, 1.942, and 3.385, respectively). Dual sensory difficulty was associated with higher odds ratios for depression and anxiety when compared to either impairment alone. Conclusion: Dual sensory difficulty is associated with significantly higher odds of anxiety and depression when compared to either vision or hearing difficulty alone. Appropriate interventions are needed to address any reversible causes of vision and hearing as well as anxiety and depression in people in these specific groups. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. Visual Rhyme Judgment in Adults With Mild-to-Severe Hearing Loss
- Author
-
Mary Rudner, Henrik Danielsson, Björn Lyxell, Thomas Lunner, and Jerker Rönnberg
- Subjects
phonology, frequency, hearing impairment, working memory, lexico-semantic strategy, Psychology, BF1-990 - Abstract
Adults with poorer peripheral hearing have slower phonological processing speed measured using visual rhyme tasks, and it has been suggested that this is due to fading of phonological representations stored in long-term memory. Representations of both vowels and consonants are likely to be important for determining whether or not two printed words rhyme. However, it is not known whether the relation between phonological processing speed and hearing loss is specific to the lower frequency ranges which characterize vowels or higher frequency ranges that characterize consonants. We tested the visual rhyme ability of 212 adults with hearing loss. As in previous studies, we found that rhyme judgments were slower and less accurate when there was a mismatch between phonological and orthographic information. A substantial portion of the variance in the speed of making correct rhyme judgment decisions was explained by lexical access speed. Reading span, a measure of working memory, explained further variance in match but not mismatch conditions, but no additional variance was explained by auditory variables. This pattern of findings suggests possible reliance on a lexico-semantic word-matching strategy for solving the rhyme judgment task. Future work should investigate the relation between adoption of a lexico-semantic strategy during phonological processing tasks and hearing aid outcome.
- Published
- 2019
- Full Text
- View/download PDF
12. Investigation of Psychophysiological and Subjective Effects of Long Working Hours – Do Age and Hearing Impairment Matter?
- Author
-
Verena Wagner-Hartl and K. Wolfgang Kallus
- Subjects
long working hours, age, hearing impairment, cortisol, psychophysiology, Psychology, BF1-990 - Abstract
Following current prognoses, demographic development raises expectations of an aging working population. Therefore, keeping employees healthy and strengthening their ability to work becomes more and more important. When employees become older, dealing with age-related impairments of sensory functions, such as hearing impairment, is a central issue. Recent evidence suggests that negative effects that are associated with reduced hearing can have a strong impact at work. Especially under exhausting working situations such as working overtime hours, age and hearing impairment might influence employees' well-being. Until now, neither the problem of aged workers and long working hours, nor the problem of hearing impairment and prolonged working time has been addressed explicitly. Therefore, a laboratory study was conducted to answer the research question: Do age and hearing impairment have an impact on the psychophysiological and subjective effects of long working hours? In total, 51 white-collar workers, aged between 24 and 63 years, participated in the laboratory study. The results show no significant effects of age and hearing impairment on the intensity of subjective consequences (perceived recovery and fatigue, subjective emotional well-being and physical symptoms) of long working hours. However, the psychophysiological response (the saliva cortisol level) to long working hours differs significantly between hearing-impaired and normal-hearing employees. Interestingly, the results suggest that from a psychophysiological point of view long working hours were more demanding for normal-hearing employees.
- Published
- 2018
- Full Text
- View/download PDF
13. Investigation of Psychophysiological and Subjective Effects of Long Working Hours - Do Age and Hearing Impairment Matter?
- Author
-
Wagner-Hartl, Verena and Kallus, K. Wolfgang
- Subjects
PSYCHOPHYSIOLOGY, WORKING hours, HEARING disorders, HYDROCORTISONE, INDUSTRIAL hygiene - Abstract
Following current prognoses, demographic development raises expectations of an aging working population. Therefore, keeping employees healthy and strengthening their ability to work becomes more and more important. When employees become older, dealing with age-related impairments of sensory functions, such as hearing impairment, is a central issue. Recent evidence suggests that negative effects that are associated with reduced hearing can have a strong impact at work. Especially under exhausting working situations such as working overtime hours, age and hearing impairment might influence employees' well-being. Until now, neither the problem of aged workers and long working hours, nor the problem of hearing impairment and prolonged working time has been addressed explicitly. Therefore, a laboratory study was conducted to answer the research question: Do age and hearing impairment have an impact on the psychophysiological and subjective effects of long working hours? In total, 51 white-collar workers, aged between 24 and 63 years, participated in the laboratory study. The results show no significant effects of age and hearing impairment on the intensity of subjective consequences (perceived recovery and fatigue, subjective emotional well-being and physical symptoms) of long working hours. However, the psychophysiological response (the saliva cortisol level) to long working hours differs significantly between hearing-impaired and normal-hearing employees. Interestingly, the results suggest that from a psychophysiological point of view long working hours were more demanding for normal-hearing employees. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
14. Cognitive Processing Speed, Working Memory, and the Intelligibility of Hearing Aid-Processed Speech in Persons with Hearing Impairment
- Author
-
Wycliffe Kabaywe Yumba
- Subjects
aging, cognition, speech recognition in noise, hearing aid, signal processing algorithms, hearing impairment, Psychology, BF1-990 - Abstract
Previous studies have demonstrated that successful listening with advanced signal processing in digital hearing aids is associated with individual cognitive capacity, particularly working memory capacity (WMC). This study aimed to examine the relationship between cognitive abilities (cognitive processing speed and WMC) and individual listeners' responses to digital signal processing settings in adverse listening conditions. A total of 194 native Swedish speakers (83 women and 111 men), aged 33–80 years (mean = 60.75 years, SD = 8.89), with bilateral, symmetrical mild to moderate sensorineural hearing loss who had completed a lexical decision speed test (measuring cognitive processing speed) and semantic word-pair span test (SWPST, capturing WMC) participated in this study. The Hagerman test (capturing speech recognition in noise) was conducted using an experimental hearing aid with three digital signal processing settings: (1) linear amplification without noise reduction (NoP), (2) linear amplification with noise reduction (NR), and (3) non-linear amplification without NR ("fast-acting compression"). The results showed that cognitive processing speed was a better predictor of speech intelligibility in noise, regardless of the types of signal processing algorithms used. That is, there was a stronger association between cognitive processing speed and NR outcomes and fast-acting compression outcomes (in steady-state noise). We observed a weaker relationship between working memory and NR, but WMC did not relate to fast-acting compression. WMC was a relatively weaker predictor of speech intelligibility in noise. These findings might have been different if the participants had been provided with training and/or allowed to acclimatize to binary masking noise reduction or fast-acting compression.
- Published
- 2017
- Full Text
- View/download PDF
15. Editorial: Cognitive Hearing Mechanisms of Language Understanding: Short- and Long-Term Perspectives
- Author
-
Rachel J. Ellis, Patrik Sörqvist, Adriana A. Zekveld, and Jerker Rönnberg
- Subjects
cognitive hearing science, working memory, speech perception, language processing, hearing impairment, Psychology, BF1-990 - Published
- 2017
- Full Text
- View/download PDF
16. Working Memory Capacity as a Factor Influencing the Relationship between Language Outcome and Rehabilitation in Mandarin-Speaking Preschoolers with Congenital Hearing Impairment
- Author
-
Ming Lo and Pei-Hua Chen
- Subjects
working memory, hearing impairment, receptive language, expressive language, child development, Mandarin-speaking preschoolers, Psychology, BF1-990 - Abstract
Memory processes could account for a significant part of the variance in language performances of hearing-impaired children. However, the circumstance in which the performance of hearing-impaired children can be nearly the same as the performance of hearing children remains relatively little studied. Thus, a group of pre-school children with congenital, bilateral hearing loss and a group of pre-school children with normal hearing were invited to participate in this study. In addition, the hearing-impaired participants were divided into two groups according to their working memory span. A language disorder assessment test for Mandarin-speaking preschoolers was used to measure the outcomes of receptive and expressive language of the two groups of children. The results showed that the high-span group performed as well as the hearing group, while the low-span group showed lower accuracy than the hearing group. A linear mixed-effects analysis showed that not only length of rehabilitation but also the memory span affected the measure of language outcome. Furthermore, the rehabilitation length positively correlated with the measure of expressive language only among the participants of the high-span group. The pattern of the results indicates that working memory capacity is one of the factors that could support the children to acquire age-equivalent language skills.
- Published
- 2017
- Full Text
- View/download PDF
17. Cognitive Processing Speed, Working Memory, and the Intelligibility of Hearing Aid-Processed Speech in Persons with Hearing Impairment.
- Author
-
Yumba, Wycliffe Kabaywe
- Subjects
HEARING aids, SHORT-term memory, DIGITAL signal processing, DEAFNESS prevention, NOISE control - Published
- 2017
- Full Text
- View/download PDF
18. Editorial: Cognitive Hearing Mechanisms of Language Understanding: Short- and Long-Term Perspectives.
- Author
-
Ellis, Rachel J., Sörqvist, Patrik, Zekveld, Adriana A., and Rönnberg, Jerker
- Subjects
SHORT-term memory, SPEECH perception, HEARING disorders, COGNITION, HEARING - Published
- 2017
- Full Text
- View/download PDF
19. Auditory and Non-Auditory Contributions for Unaided Speech Recognition in Noise as a Function of Hearing Aid Use.
- Author
-
Gieseler, Anja, Tahden, Maike A. S., Thiel, Christiane M., Wagener, Kirsten C., Meis, Markus, and Colonius, Hans
- Subjects
HEARING disorders, LOUDNESS, VERBAL ability, SPEECH perception, AGING, COGNITION - Abstract
Differences in understanding speech in noise among hearing-impaired individuals cannot be explained entirely by hearing thresholds alone, suggesting the contribution of other factors beyond standard auditory ones as derived from the audiogram. This paper reports two analyses addressing individual differences in the explanation of unaided speech-in-noise performance among n = 438 elderly hearing-impaired listeners (mean = 71.1 ± 5.8 years). The main analysis was designed to identify clinically relevant auditory and non-auditory measures for speech-in-noise prediction using auditory (audiogram, categorical loudness scaling) and cognitive tests (verbal-intelligence test, screening test of dementia), as well as questionnaires assessing various self-reported measures (health status, socio-economic status, and subjective hearing problems). Using stepwise linear regression analysis, 62% of the variance in unaided speech-in-noise performance was explained, with measures Pure-tone average (PTA), Age, and Verbal intelligence emerging as the three most important predictors. In the complementary analysis, those individuals with the same hearing loss profile were separated into hearing aid users (HAU) and non-users (NU), and were then compared regarding potential differences in the test measures and in explaining unaided speech-in-noise recognition. The groupwise comparisons revealed significant differences in auditory measures and self-reported subjective hearing problems, while no differences in the cognitive domain were found. Furthermore, groupwise regression analyses revealed that Verbal intelligence had a predictive value in both groups, whereas Age and PTA only emerged significant in the group of hearing aid NU. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
20. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment
- Author
-
Jana B. Frtusova and Natalie Phillips
- Subjects
Aging, Speech Perception, working memory, hearing impairment, Multisensory Interaction, Event-related potentials, Psychology, BF1-990 - Abstract
This study examined the effect of auditory-visual (AV) speech stimuli on working memory in hearing impaired participants (HIP) in comparison to age- and education-matched normal elderly controls (NEC). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioural results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the HIP group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the HIP group showed a more robust AV benefit; however, the NECs showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the HIP to counteract the demanding auditory processing, to the level that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed.
- Published
- 2016
- Full Text
- View/download PDF
21. Oral communication in individuals with hearing impairment – considerations regarding attentional, cognitive and social resources
- Author
-
Ulrike Lemke and Sigrid Scherpiet
- Subjects
Communication, executive functions, cognitive aging, speech comprehension, hearing impairment, Third-party disability, Psychology, BF1-990 - Abstract
Traditionally, audiology research has focused primarily on hearing and related disorders. In recent years, however, growing interest and insight has developed into the interaction of hearing and cognition. This applies to a person's listening and speech comprehension ability and the neural realization thereof. The present perspective extends this view to oral communication, when two or more people interact in a social context. Specifically, the impact of hearing impairment and cognitive changes with age is discussed. In focus are executive functions, a group of top-down processes that guide attention, thought and action according to goals and intentions. The strategic allocation of the limited cognitive processing capacity among concurrent tasks is often effortful, especially under adverse communication conditions and in old age. Working memory, a sub-function extensively discussed in cognitive hearing science, is here put into the context of other executive and cognitive functions required for oral communication and speech comprehension. Finally, taking an ecological view on hearing impairment, activity limitations and participation restrictions are discussed regarding their psycho-social impact and third-party disability.
- Published
- 2015
- Full Text
- View/download PDF
22. Multiple Solutions to the Same Problem: Utilization of Plausibility and Syntax in Sentence Comprehension by Older Adults with Impaired Hearing.
- Author
-
Amichetti, Nicole M., White, Alison G., and Wingfield, Arthur
- Subjects
PSYCHOLINGUISTICS, SENTENCES (Grammar), SYNTAX (Grammar), SEMANTICS research, READING comprehension - Abstract
A fundamental question in psycholinguistic theory is whether equivalent success in sentence comprehension may come about by different underlying operations. Of special interest is whether adult aging, especially when accompanied by reduced hearing acuity, may shift the balance of reliance on formal syntax vs. plausibility in determining sentence meaning. In two experiments participants were asked to identify the thematic roles in grammatical sentences that contained either plausible or implausible semantic relations. Comprehension of sentence meanings was indexed by the ability to correctly name the agent or the recipient of an action represented in the sentence. In Experiment 1 young and older adults' comprehension was tested for plausible and implausible sentences with the meaning expressed with either an active-declarative or a passive syntactic form. In Experiment 2 comprehension performance was examined for young adults with age-normal hearing, older adults with good hearing acuity, and age-matched older adults with mild-to-moderate hearing loss for plausible or implausible sentences with meaning expressed with either a subject-relative (SR) or an object-relative (OR) syntactic structure. Experiment 1 showed that the likelihood of interpreting a sentence according to its literal meaning was reduced when that meaning expressed an implausible relationship. Experiment 2 showed that this likelihood was further decreased for OR as compared to SR sentences, and especially so for older adults whose hearing impairment added to the perceptual challenge. Experiment 2 also showed that working memory capacity as measured with a letter-number sequencing task contributed to the likelihood that listeners would base their comprehension responses on the literal syntax even when this processing scheme yielded an implausible meaning. Taken together, the results of both experiments support the postulate that listeners may use more than a single uniform processing strategy for successful sentence comprehension, with the existence of these alternative solutions only revealed when literal syntax and plausibility do not coincide. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
23. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment.
- Author
-
Frtusova, Jana B. and Phillips, Natalie A.
- Subjects
LIPREADING, AUDIOMETRY, EVOKED potentials (Electrophysiology), SPEECH education, SPEECH processing systems, ORAL communication - Abstract
This study examined the effect of auditory-visual (AV) speech stimuli on working memory in older adults with poorer-hearing (PH) in comparison to age- and education-matched older adults with better hearing (BH). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioral results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the PH group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the PH group showed a more robust AV benefit; however, the BH group showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the PH group to counteract the demanding auditory processing, to the level that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
24. Relative clause reading in hearing impairment: Different profiles of syntactic impairment
- Author
-
Ronit Szterman and Naama Friedmann
- Subjects
Movement, reading, syntax, Hebrew, hearing impairment, relative clauses, Psychology, BF1-990 - Abstract
Children with hearing impairment show difficulties in sentences derived by Wh-movement, such as relative clauses and Wh-questions. This study examines the nature of this deficit in 48 hearing impaired children aged 9-12 years and 38 hearing controls. The task involved reading aloud and paraphrasing of object relatives that include a noun-verb heterophonic homograph. The correct pronunciation of the homograph in these sentences depended upon the correct construction of the syntactic structure of the sentence. An analysis of the reading and paraphrasing of each participant exposed two different patterns of syntactic impairment. Some hearing-impaired children paraphrased the object relatives incorrectly but could still read the homograph, indicating impaired assignment of thematic roles alongside good syntactic structure building; other hearing-impaired children could neither read the homograph nor paraphrase the sentence, indicating a structural deficit in the syntactic tree. Further testing of these children confirmed the different impairments: some are impaired only in Wh-movement, whereas others have CP impairment. The syntactic impairment correlated with whether or not a hearing device was fitted by the age of one year, but not with the type of hearing device or the depth of hearing loss: children who had a hearing device fitted during the first year of life had better syntactic abilities than children whose hearing devices were fitted later.
- Published
- 2014
- Full Text
- View/download PDF
25. Hearing Impairment and Audiovisual Speech Integration Ability: A Case Study Report
- Author
-
Nicholas Altieri and Daniel Hudock
- Subjects
processing speed, Capacity, lip-reading, hearing impairment, audiovisual speech integration, speech reading, Psychology, BF1-990 - Abstract
Research in audiovisual speech perception has demonstrated that sensory factors such as auditory and visual acuity are associated with a listener’s ability to extract and combine auditory and visual speech cues. This case study report examined audiovisual integration using a newly developed measure of capacity in a sample of hearing-impaired listeners. Capacity assessments are unique because they examine the contribution of reaction-time (RT) as well as accuracy to determine the extent to which a listener efficiently combines auditory and visual speech cues relative to independent race model predictions. Multisensory speech integration ability was examined in two experiments: An open-set sentence recognition and a closed set speeded-word recognition study that measured capacity. Most germane to our approach, capacity illustrated speed-accuracy tradeoffs that may be predicted by audiometric configuration. Results revealed that some listeners benefit from increased accuracy, but fail to benefit in terms of speed on audiovisual relative to unisensory trials. Conversely, other listeners may not benefit in the accuracy domain but instead show an audiovisual processing time benefit.
- Published
- 2014
- Full Text
- View/download PDF
26. The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant.
- Author
-
Jeremy Marozeau, Hamish Innes-Brown, and Peter J. Blamey
- Subjects
pitch, cochlear implant, auditory streaming, hearing impairment, loudness, music training, Psychology, BF1-990 - Abstract
Our ability to listen selectively to single sound sources in complex auditory environments is termed 'auditory stream segregation.' This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes. Four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device) influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
- Published
- 2013
- Full Text
- View/download PDF
27. Erratum: Early ERP signature of hearing impairment in visual rhyme judgment.
- Author
-
Elisabet Classon
- Subjects
N2, phonology, Event-related potentials, N400, hearing impairment, Inter-stimulus interval, Psychology, BF1-990 - Published
- 2013
- Full Text
- View/download PDF
28. Early ERP signature of hearing impairment in visual rhyme judgment
- Author
-
Elisabet Classon, Mary Rudner, Mikael Johansson, and Jerker Rönnberg
- Subjects
N2, phonology, Event-related potentials, N400, hearing impairment, Inter-stimulus interval, Psychology, BF1-990 - Abstract
Postlingually acquired hearing impairment is associated with changes in the representation of sound in semantic long-term memory. An indication of this is the lower performance on visual rhyme judgment tasks in conditions where phonological and orthographic cues mismatch, requiring high reliance on phonological representations. In this study, event-related potentials (ERPs) were used for the first time to investigate the neural correlates of phonological processing in visual rhyme judgments in participants with acquired hearing impairment (HI) and normal hearing (NH). Rhyme task word pairs rhymed or not and had matching or mismatching orthography. In addition, the interstimulus-interval (ISI) was manipulated to be either long (800 ms) or short (50 ms). Long ISIs allow for engagement of explicit, top-down processes, while short ISIs limit the involvement of such mechanisms. We hypothesized lower behavioural performance and N400 and N2 deviations in HI in the mismatching rhyme judgment conditions, particularly in short ISI. However, the results showed a different pattern. As expected, behavioural performance in the mismatch conditions was lower in HI than in NH in short ISI, but ERPs did not differ across groups. In contrast, HI performed on a par with NH in long ISI. Further, HI, but not NH, showed an amplified N2-like response in the non-rhyming, orthographically mismatching condition in long ISI. This was also the rhyme condition in which participants in both groups benefited the most from the possibility to engage top-down processes afforded with the longer ISI. Taken together, these results indicate an early ERP signature of hearing impairment in this challenging phonological task, likely reflecting use of a compensatory strategy. This strategy is suggested to involve increased reliance on explicit mechanisms such as articulatory recoding and grapheme-to-phoneme conversion.
- Published
- 2013
- Full Text
- View/download PDF
29. Relative clause reading in hearing impairment: different profiles of syntactic impairment.
- Author
-
Szterman, Ronit and Friedmann, Naama
- Subjects
RELATIVE clauses, HEARING disorders, ORAL reading, PARAPHRASE, HOMONYMS - Abstract
Children with hearing impairment show difficulties in sentences derived by Wh-movement, such as relative clauses and Wh-questions. This study examines the nature of this deficit in 48 hearing-impaired children aged 9-12 years and 38 hearing controls. The task involved reading aloud and paraphrasing of object relatives that include a noun-verb heterophonic homograph. The correct pronunciation of the homograph in these sentences depended upon the correct construction of the syntactic structure of the sentence. An analysis of the reading and paraphrasing of each participant exposed two different patterns of syntactic impairment. Some hearing-impaired children paraphrased the object relatives incorrectly but could still read the homograph, indicating impaired assignment of thematic roles alongside good syntactic structure building; other hearing-impaired children could neither read the homograph nor paraphrase the sentence, indicating a structural deficit in the syntactic tree. Further testing of these children confirmed the different impairments: some are impaired only in Wh-movement, whereas others have CP impairment. The syntactic impairment correlated with whether or not a hearing device was fitted by the age of 1 year, but not with the type of hearing device or the depth of hearing loss: children who had a hearing device fitted during the first year of life had better syntactic abilities than children whose hearing devices were fitted later. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
30. Tracking Musical Voices in Bach's The Art of the Fugue: Timbral Heterogeneity Differentially Affects Younger Normal-Hearing Listeners and Older Hearing-Aid Users
- Author
-
Kai Siedenburg, Kirsten Goldmann, and Steven van de Par
- Subjects
music perception, timbre, auditory scene analysis, hearing impairment, voice leading, Psychology, BF1-990 - Abstract
Auditory scene analysis is an elementary aspect of music perception, yet little research has scrutinized auditory scene analysis under realistic musical conditions with diverse samples of listeners. This study probed the ability of younger normal-hearing listeners and older hearing-aid users in tracking individual musical voices or lines in JS Bach's The Art of the Fugue. Five-second excerpts with homogeneous or heterogeneous instrumentation of 2–4 musical voices were presented from spatially separated loudspeakers and preceded by a short cue for signaling the target voice. Listeners tracked the cued voice and detected whether an amplitude modulation was imposed on the cued voice or a distractor voice. Results indicated superior performance of young normal-hearing listeners compared to older hearing-aid users. Performance was generally better in conditions with fewer voices. For young normal-hearing listeners, there was an interaction between the number of voices and the instrumentation: performance degraded less drastically with an increase in the number of voices for timbrally heterogeneous mixtures compared to homogeneous mixtures. Older hearing-aid users generally showed smaller effects of the number of voices and instrumentation, but no interaction between the two factors. Moreover, tracking performance of older hearing-aid users did not differ when these participants did or did not wear hearing aids. These results shed light on the role of timbral differentiation in musical scene analysis and suggest reduced musical scene analysis abilities of older hearing-impaired listeners in a realistic musical scenario.
- Published
- 2020
31. The development of speechreading skills in Chinese students with hearing impairment.
- Author
-
Zhang F, Lei J, Gong H, Wu H, and Chen L
- Abstract
The developmental trajectory of speechreading skills is poorly understood, and existing research has revealed rather inconsistent results. In this study, 209 Chinese students with hearing impairment between 7 and 20 years old were asked to complete the Chinese Speechreading Test targeting three linguistic levels (i.e., words, phrases, and sentences). Both response time and accuracy data were collected and analyzed. Results revealed (i) no developmental change in speechreading accuracy between ages 7 and 14, after which the accuracy rate either stagnates or drops; (ii) no significant developmental pattern in speed of speechreading across all ages. Results also showed that across all age groups, speechreading accuracy was higher for phrases than words and sentences, and overall levels of speechreading speed fell for phrases, words, and sentences. These findings suggest that the development of speechreading in Chinese is not a continuous, linear process. Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. (Copyright © 2022 Zhang, Lei, Gong, Wu and Chen.)
- Published
- 2022
- Full Text
- View/download PDF
32. Visual Rhyme Judgment in Adults With Mild-to-Severe Hearing Loss
- Author
-
Rudner, Mary, Danielsson, Henrik, Lyxell, Björn, Lunner, Thomas, and Rönnberg, Jerker
- Subjects
Psychology (excluding Applied Psychology), phonology, frequency, Psychology, hearing impairment, working memory, lexico-semantic strategy, General Psychology - Abstract
Adults with poorer peripheral hearing have slower phonological processing speed measured using visual rhyme tasks, and it has been suggested that this is due to fading of phonological representations stored in long-term memory. Representations of both vowels and consonants are likely to be important for determining whether or not two printed words rhyme. However, it is not known whether the relation between phonological processing speed and hearing loss is specific to the lower frequency ranges which characterize vowels or higher frequency ranges that characterize consonants. We tested the visual rhyme ability of 212 adults with hearing loss. As in previous studies, we found that rhyme judgments were slower and less accurate when there was a mismatch between phonological and orthographic information. A substantial portion of the variance in the speed of making correct rhyme judgment decisions was explained by lexical access speed. Reading span, a measure of working memory, explained further variance in match but not mismatch conditions, but no additional variance was explained by auditory variables. This pattern of findings suggests possible reliance on a lexico-semantic word-matching strategy for solving the rhyme judgment task. Future work should investigate the relation between adoption of a lexico-semantic strategy during phonological processing tasks and hearing aid outcome. Funding Agencies: Swedish Research Council [2017-06092]; Linnaeus Centre HEAD grant from the Swedish Research Council [349-2007-8654]
- Published
- 2019
- Full Text
- View/download PDF
33. Working Memory Capacity as a Factor Influencing the Relationship between Language Outcome and Rehabilitation in Mandarin-Speaking Preschoolers with Congenital Hearing Impairment
- Author
-
Pei-Hua Chen and Ming Lo
- Subjects
Mandarin-speaking preschoolers, working memory, memory span, child development, rehabilitation, hearing impairment, language disorder, receptive language, expressive language, Mandarin Chinese, Audiology, Psychology, BF1-990 - Abstract
Memory processes could account for a significant part of the variance in language performances of hearing-impaired children. However, the circumstance in which the performance of hearing-impaired children can be nearly the same as the performance of hearing children remains relatively little studied. Thus, a group of pre-school children with congenital, bilateral hearing loss and a group of pre-school children with normal hearing were invited to participate in this study. In addition, the hearing-impaired participants were divided into two groups according to their working memory span. A language disorder assessment test for Mandarin-speaking preschoolers was used to measure the outcomes of receptive and expressive language of the two groups of children. The results showed that the high-span group performed as well as the hearing group, while the low-span group showed lower accuracy than the hearing group. A linear mixed-effects analysis showed that not only length of rehabilitation but also the memory span affected the measure of language outcome. Furthermore, the rehabilitation length positively correlated with the measure of expressive language only among the participants of the high-span group. The pattern of the results indicates that working memory capacity is one of the factors that could support the children to acquire age-equivalent language skills.
- Published
- 2017
34. Cognitive Processing Speed, Working Memory, and the Intelligibility of Hearing Aid-Processed Speech in Persons with Hearing Impairment
- Author
-
Wycliffe Kabaywe Yumba
- Subjects
cognition, speech recognition in noise, aging, Psychology, hearing impairment, hearing aid, signal processing algorithms - Abstract
Previous studies have demonstrated that successful listening with advanced signal processing in digital hearing aids is associated with individual cognitive capacity, particularly working memory capacity (WMC). This study aimed to examine the relationship between cognitive abilities (cognitive processing speed and WMC) and individual listeners’ responses to digital signal processing settings in adverse listening conditions. A total of 194 native Swedish speakers (83 women and 111 men), aged 33–80 years (mean = 60.75 years, SD = 8.89), with bilateral, symmetrical mild to moderate sensorineural hearing loss who had completed a lexical decision speed test (measuring cognitive processing speed) and a semantic word-pair span test (SWPST, capturing WMC) participated in this study. The Hagerman test (capturing speech recognition in noise) was conducted using an experimental hearing aid with three digital signal processing settings: (1) linear amplification without noise reduction (NoP), (2) linear amplification with noise reduction (NR), and (3) non-linear amplification without NR (“fast-acting compression”). The results showed that cognitive processing speed was a better predictor of speech intelligibility in noise, regardless of the types of signal processing algorithms used. That is, there was a stronger association between cognitive processing speed and NR outcomes and fast-acting compression outcomes (in steady-state noise). We observed a weaker relationship between working memory and NR, but WMC did not relate to fast-acting compression. WMC was a relatively weaker predictor of speech intelligibility in noise. These findings might have been different if the participants had been provided with training and/or allowed to acclimatize to binary masking noise reduction or fast-acting compression.
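The predictor comparison described above, relating speech-in-noise outcomes to cognitive processing speed and WMC within each signal processing setting, can be sketched roughly as follows. The file name, column names, and setting labels are hypothetical; the snippet illustrates the general approach rather than the study's analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format dataset: one row per listener per hearing-aid setting.
# srt = speech reception threshold from the Hagerman test (dB SNR, lower = better),
# lds = lexical decision speed, wmc = semantic word-pair span score.
df = pd.read_csv("hagerman_results.csv")

for setting in ["NoP", "NR", "compression"]:
    sub = df[df["setting"] == setting]
    fit = smf.ols("srt ~ lds + wmc", data=sub).fit()
    print(setting, fit.params.round(3).to_dict(), "R2 =", round(fit.rsquared, 3))
```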
- Published
- 2016
35. Auditory and Non-Auditory Contributions for Unaided Speech Recognition in Noise as a Function of Hearing Aid Use
- Author
-
Anja, Gieseler, Maike A S, Tahden, Christiane M, Thiel, Kirsten C, Wagener, Markus, Meis, and Hans, Colonius
- Subjects
cognition ,verbal intelligence ,aging ,otorhinolaryngologic diseases ,Psychology ,hearing impairment ,loudness scaling ,audiogram profile ,speech reception threshold ,Original Research - Abstract
Differences in understanding speech in noise among hearing-impaired individuals cannot be explained by hearing thresholds alone, suggesting the contribution of other factors beyond the standard auditory ones derived from the audiogram. This paper reports two analyses addressing individual differences in the explanation of unaided speech-in-noise performance among n = 438 elderly hearing-impaired listeners (mean age = 71.1 ± 5.8 years). The main analysis was designed to identify clinically relevant auditory and non-auditory measures for speech-in-noise prediction using auditory tests (audiogram, categorical loudness scaling) and cognitive tests (verbal-intelligence test, screening test of dementia), as well as questionnaires assessing various self-reported measures (health status, socio-economic status, and subjective hearing problems). Using stepwise linear regression analysis, 62% of the variance in unaided speech-in-noise performance was explained, with Pure-tone average (PTA), Age, and Verbal intelligence emerging as the three most important predictors. In the complementary analysis, individuals with the same hearing loss profile were separated into hearing aid users (HAU) and non-users (NU), and were then compared regarding potential differences in the test measures and in explaining unaided speech-in-noise recognition. The groupwise comparisons revealed significant differences in auditory measures and self-reported subjective hearing problems, while no differences in the cognitive domain were found. Furthermore, groupwise regression analyses revealed that Verbal intelligence had a predictive value in both groups, whereas Age and PTA emerged as significant only in the non-users (NU) group.
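A forward stepwise linear regression of the kind reported above can be sketched as follows. The data file and predictor names are hypothetical placeholders; this illustrates the selection procedure in Python (statsmodels), not the authors' implementation.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset with one row per listener; srt_unaided is the unaided
# speech-in-noise score, the remaining columns are candidate predictors.
df = pd.read_csv("speech_in_noise.csv")
y = df["srt_unaided"]
candidates = ["pta", "age", "verbal_iq", "loudness_slope", "dementia_screen",
              "health_status", "ses", "subjective_hearing"]

selected, best_adj_r2 = [], 0.0
while candidates:
    # Try adding each remaining candidate and keep the one that raises adjusted R^2 most.
    trial = {c: sm.OLS(y, sm.add_constant(df[selected + [c]])).fit().rsquared_adj
             for c in candidates}
    best = max(trial, key=trial.get)
    if trial[best] <= best_adj_r2:
        break
    selected.append(best)
    best_adj_r2 = trial[best]
    candidates.remove(best)

print("selected predictors:", selected, "adjusted R^2:", round(best_adj_r2, 3))
```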
- Published
- 2016
36. Multiple Solutions to the Same Problem: Utilization of Plausibility and Syntax in Sentence Comprehension by Older Adults with Impaired Hearing
- Author
-
Alison G. White, Arthur Wingfield, and Nicole M. Amichetti
- Subjects
Hearing loss ,hearing impairment ,adult aging ,working memory ,Perception ,Syntax ,Linguistics ,Comprehension ,plausibility ,sentence comprehension ,Sentence ,Literal and figurative language ,Psychology ,General Psychology ,Original Research - Abstract
A fundamental question in psycholinguistic theory is whether equivalent success in sentence comprehension may come about by different underlying operations. Of special interest is whether adult aging, especially when accompanied by reduced hearing acuity, may shift the balance of reliance on formal syntax vs. plausibility in determining sentence meaning. In two experiments participants were asked to identify the thematic roles in grammatical sentences that contained either plausible or implausible semantic relations. Comprehension of sentence meanings was indexed by the ability to correctly name the agent or the recipient of an action represented in the sentence. In Experiment 1 young and older adults' comprehension was tested for plausible and implausible sentences with the meaning expressed with either an active-declarative or a passive syntactic form. In Experiment 2 comprehension performance was examined for young adults with age-normal hearing, older adults with good hearing acuity, and age-matched older adults with mild-to-moderate hearing loss for plausible or implausible sentences with meaning expressed with either a subject-relative (SR) or an object-relative (OR) syntactic structure. Experiment 1 showed that the likelihood of interpreting a sentence according to its literal meaning was reduced when that meaning expressed an implausible relationship. Experiment 2 showed that this likelihood was further decreased for OR as compared to SR sentences, and especially so for older adults whose hearing impairment added to the perceptual challenge. Experiment 2 also showed that working memory capacity as measured with a letter-number sequencing task contributed to the likelihood that listeners would base their comprehension responses on the literal syntax even when this processing scheme yielded an implausible meaning. Taken together, the results of both experiments support the postulate that listeners may use more than a single uniform processing strategy for successful sentence comprehension, with the existence of these alternative solutions only revealed when literal syntax and plausibility do not coincide.
- Published
- 2016
- Full Text
- View/download PDF
37. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment
- Author
-
Natalie A. Phillips and Jana B. Frtusova
- Subjects
P3 amplitude ,event-related potentials ,speech perception ,Auditory visual ,multisensory interaction ,working memory ,aging ,hearing impairment ,Facilitation ,Audiology ,Developmental psychology ,Psychology ,General Psychology ,Original Research - Abstract
This study examined the effect of auditory-visual (AV) speech stimuli on working memory in older adults with poorer hearing (PH) in comparison to age- and education-matched older adults with better hearing (BH). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioral results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the PH group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the PH group showed a more robust AV benefit; however, the BH group showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the PH group to counteract the demanding auditory processing, to the extent that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed.
- Published
- 2016
38. Oral communication in individuals with hearing impairment-considerations regarding attentional, cognitive and social resources
- Author
-
Sigrid Scherpiet and Ulrike Lemke
- Subjects
Working memory ,communication ,cognitive aging ,Social environment ,Cognition ,hearing impairment ,executive functions ,third-party disability ,Cognitive hearing science ,speech comprehension ,Perspective ,Developmental psychology ,Psychology ,General Psychology ,Cognitive psychology - Abstract
Traditionally, audiology research has focused primarily on hearing and related disorders. In recent years, however, growing interest and insight has developed into the interaction of hearing and cognition. This applies to a person’s listening and speech comprehension ability and the neural realization thereof. The present perspective extends this view to oral communication, when two or more people interact in social context. Specifically, the impact of hearing impairment and cognitive changes with age is discussed. In focus are executive functions, a group of top-down processes that guide attention, thought and action according to goals and intentions. The strategic allocation of the limited cognitive processing capacity among concurrent tasks is often effortful, especially under adverse communication conditions and in old age. Working memory, a sub-function extensively discussed in cognitive hearing science, is here put into the context of other executive and cognitive functions required for oral communication and speech comprehension. Finally, taking an ecological view on hearing impairment, activity limitations and participation restrictions are discussed regarding their psycho-social impact and third-party disability.
- Published
- 2015
39. Relative clause reading in hearing impairment: Different profiles of syntactic impairment
- Author
-
Naama Friedmann and Ronit Szterman
- Subjects
Homograph ,Hearing loss ,hearing impairment ,syntactic impairment ,syntactic tree ,syntax ,Movement ,relative clauses ,reading ,Pronunciation ,Hebrew ,Sentence ,Linguistics ,Audiology ,Psychology ,General Psychology ,Original Research Article - Abstract
Children with hearing impairment show difficulties in sentences derived by Wh-movement, such as relative clauses and Wh-questions. This study examines the nature of this deficit in 48 hearing-impaired children aged 9–12 years and 38 hearing controls. The task involved reading aloud and paraphrasing object relatives that include a noun-verb heterophonic homograph. The correct pronunciation of the homograph in these sentences depended upon the correct construction of the syntactic structure of the sentence. An analysis of the reading and paraphrasing of each participant revealed two different patterns of syntactic impairment. Some hearing-impaired children paraphrased the object relatives incorrectly but could still read the homograph, indicating impaired assignment of thematic roles alongside good syntactic structure building; other hearing-impaired children could neither read the homograph nor paraphrase the sentence, indicating a structural deficit in the syntactic tree. Further testing of these children confirmed the different impairments: some are impaired only in Wh-movement, whereas others have CP impairment. The syntactic impairment correlated with whether or not a hearing device was fitted by the age of one year, but not with the type of hearing device or the depth of hearing loss: children who had a hearing device fitted during the first year of life had better syntactic abilities than children whose hearing devices were fitted later.
- Published
- 2014
40. Hearing impairment and audiovisual speech integration ability: a case study report
- Author
-
Daniel Hudock and Nicholas Altieri
- Subjects
speech reading ,lip-reading ,audiovisual speech integration ,capacity ,processing speed ,Sentence recognition ,hearing impairment ,Speech recognition ,Perception ,Audiology ,Psychology ,General Psychology ,Original Research Article - Abstract
Research in audiovisual speech perception has demonstrated that sensory factors such as auditory and visual acuity are associated with a listener’s ability to extract and combine auditory and visual speech cues. This case study report examined audiovisual integration using a newly developed measure of capacity in a sample of hearing-impaired listeners. Capacity assessments are unique because they examine the contribution of reaction-time (RT) as well as accuracy to determine the extent to which a listener efficiently combines auditory and visual speech cues relative to independent race model predictions. Multisensory speech integration ability was examined in two experiments: an open-set sentence recognition study and a closed-set speeded-word recognition study that measured capacity. Most germane to our approach, capacity illustrated speed-accuracy tradeoffs that may be predicted by audiometric configuration. Results revealed that some listeners benefit from increased accuracy, but fail to benefit in terms of speed on audiovisual relative to unisensory trials. Conversely, other listeners may not benefit in the accuracy domain but instead show an audiovisual processing time benefit.
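Capacity measures of this kind are commonly computed, in the Townsend tradition, from the survivor functions of the RT distributions in audiovisual and unisensory trials. The following is a minimal sketch under that assumption, with made-up RT values; it is illustrative only and not the authors' code.

```python
import numpy as np

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t), estimated from correct-trial RTs."""
    rts = np.asarray(rts, dtype=float)
    return (rts[:, None] > t[None, :]).mean(axis=0)

def capacity_coefficient(rt_av, rt_a, rt_v, t):
    """C(t) = log S_AV(t) / [log S_A(t) + log S_V(t)].
    Values near 1 are consistent with an independent parallel race baseline;
    values above 1 suggest a gain from combining the auditory and visual channels."""
    s_av, s_a, s_v = survivor(rt_av, t), survivor(rt_a, t), survivor(rt_v, t)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.log(s_av) / (np.log(s_a) + np.log(s_v))

# Hypothetical RTs (ms) from audiovisual, auditory-only, and visual-only trials.
t = np.arange(300, 1501, 25, dtype=float)
c = capacity_coefficient(rt_av=[520, 610, 480, 555], rt_a=[700, 650, 720, 690],
                         rt_v=[800, 760, 690, 740], t=t)
```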
- Published
- 2014
41. The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant
- Author
-
Peter J. Blamey, Hamish Innes-Brown, and Jeremy Marozeau
- Subjects
timbre ,pitch ,loudness ,cochlear implant ,hearing impairment ,Hearing loss ,melody segregation ,auditory streaming ,music training ,Spectral envelope ,Sound sources ,Multidimensional scaling analysis ,Perception ,Psychology ,General Psychology ,Original Research Article - Abstract
Our ability to listen selectively to single sound sources in complex auditory environments is termed ‘auditory stream segregation.’ This ability is affected by peripheral disorders such as hearing loss, as well as plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from distractor notes. Four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference along the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device) influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
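The multidimensional scaling and regression steps described above can be sketched as follows. The file names, the number of dimensions, and the use of scikit-learn's non-metric MDS are assumptions made for illustration rather than a reproduction of the authors' analysis.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.linear_model import LinearRegression

# Hypothetical inputs: a symmetric matrix of averaged dissimilarity ratings between
# melody patterns, and the physical parameter values of each pattern (one row per
# pattern; e.g. columns for level, F0, temporal envelope, spectral envelope).
dissim = np.loadtxt("dissimilarity_ratings.csv", delimiter=",")
physical = np.loadtxt("physical_cues.csv", delimiter=",")

# Non-metric MDS on the precomputed dissimilarities yields perceptual coordinates.
mds = MDS(n_components=4, metric=False, dissimilarity="precomputed", random_state=0)
perceptual = mds.fit_transform(dissim)

# Regressing perceptual coordinates on the physical cue values links the two spaces,
# so a perceptual distance can be read off for a given physical difference.
reg = LinearRegression().fit(physical, perceptual)
print(reg.coef_)
```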
- Published
- 2013
42. Early ERP signature of hearing impairment in visual rhyme judgment
- Author
-
Elisabet Classon, Mary Rudner, Mikael Johansson, and Jerker Rönnberg
- Subjects
Medical and Health Sciences ,visual rhyme judgment ,event-related potentials ,inter-stimulus interval ,N400 ,N2 ,phonology ,Orthography ,Rhyme ,hearing impairment ,Speech recognition ,Audiology ,Psychology ,General Psychology ,Original Research - Abstract
Postlingually acquired hearing impairment is associated with changes in the representation of sound in semantic long-term memory. An indication of this is the lower performance on visual rhyme judgment tasks in conditions where phonological and orthographic cues mismatch, requiring high reliance on phonological representations. In this study, event-related potentials (ERPs) were used for the first time to investigate the neural correlates of phonological processing in visual rhyme judgments in participants with acquired hearing impairment (HI) and normal hearing (NH). Rhyme task word pairs rhymed or not and had matching or mismatching orthography. In addition, the inter-stimulus interval (ISI) was manipulated to be either long (800 ms) or short (50 ms). Long ISIs allow for engagement of explicit, top-down processes, while short ISIs limit the involvement of such mechanisms. We hypothesized lower behavioural performance and N400 and N2 deviations in HI in the mismatching rhyme judgment conditions, particularly in short ISI. However, the results showed a different pattern. As expected, behavioural performance in the mismatch conditions was lower in HI than in NH in short ISI, but ERPs did not differ across groups. In contrast, HI performed on a par with NH in long ISI. Further, HI, but not NH, showed an amplified N2-like response in the non-rhyming, orthographically mismatching condition in long ISI. This was also the rhyme condition in which participants in both groups benefited the most from the possibility to engage top-down processes afforded with the longer ISI. Taken together, these results indicate an early ERP signature of hearing impairment in this challenging phonological task, likely reflecting use of a compensatory strategy. This strategy is suggested to involve increased reliance on explicit mechanisms such as articulatory recoding and grapheme-to-phoneme conversion.
- Published
- 2012
43. Working Memory Capacity as a Factor Influencing the Relationship between Language Outcome and Rehabilitation in Mandarin-Speaking Preschoolers with Congenital Hearing Impairment.
- Author
-
Lo M and Chen PH
- Abstract
Memory processes could account for a significant part of the variance in the language performance of hearing-impaired children. However, the circumstances under which hearing-impaired children can perform nearly as well as hearing children remain relatively little studied. Thus, a group of pre-school children with congenital, bilateral hearing loss and a group of pre-school children with normal hearing were invited to participate in this study. In addition, the hearing-impaired participants were divided into two groups according to their working memory span. A language disorder assessment test for Mandarin-speaking preschoolers was used to measure the receptive and expressive language outcomes of the two groups of children. The results showed that the high-span group performed as well as the hearing group, while the low-span group showed lower accuracy than the hearing group. A linear mixed-effects analysis showed that not only length of rehabilitation but also memory span affected the language outcome measure. Furthermore, rehabilitation length correlated positively with the expressive language measure only among participants in the high-span group. The pattern of results indicates that working memory capacity is one of the factors that can help these children acquire age-equivalent language skills.
- Published
- 2017
- Full Text
- View/download PDF
44. Oral communication in individuals with hearing impairment-considerations regarding attentional, cognitive and social resources.
- Author
-
Lemke U and Scherpiet S
- Abstract
Traditionally, audiology research has focused primarily on hearing and related disorders. In recent years, however, growing interest and insight has developed into the interaction of hearing and cognition. This applies to a person's listening and speech comprehension ability and the neural realization thereof. The present perspective extends this view to oral communication, when two or more people interact in social context. Specifically, the impact of hearing impairment and cognitive changes with age is discussed. In focus are executive functions, a group of top-down processes that guide attention, thought and action according to goals and intentions. The strategic allocation of the limited cognitive processing capacity among concurrent tasks is often effortful, especially under adverse communication conditions and in old age. Working memory, a sub-function extensively discussed in cognitive hearing science, is here put into the context of other executive and cognitive functions required for oral communication and speech comprehension. Finally, taking an ecological view on hearing impairment, activity limitations and participation restrictions are discussed regarding their psycho-social impact and third-party disability.
- Published
- 2015
- Full Text
- View/download PDF
45. Hearing impairment and audiovisual speech integration ability: a case study report.
- Author
-
Altieri N and Hudock D
- Abstract
Research in audiovisual speech perception has demonstrated that sensory factors such as auditory and visual acuity are associated with a listener's ability to extract and combine auditory and visual speech cues. This case study report examined audiovisual integration using a newly developed measure of capacity in a sample of hearing-impaired listeners. Capacity assessments are unique because they examine the contribution of reaction-time (RT) as well as accuracy to determine the extent to which a listener efficiently combines auditory and visual speech cues relative to independent race model predictions. Multisensory speech integration ability was examined in two experiments: an open-set sentence recognition study and a closed-set speeded-word recognition study that measured capacity. Most germane to our approach, capacity illustrated speed-accuracy tradeoffs that may be predicted by audiometric configuration. Results revealed that some listeners benefit from increased accuracy, but fail to benefit in terms of speed on audiovisual relative to unisensory trials. Conversely, other listeners may not benefit in the accuracy domain but instead show an audiovisual processing time benefit.
- Published
- 2014
- Full Text
- View/download PDF
46. Erratum: Early ERP signature of hearing impairment in visual rhyme judgment.
- Author
-
Classon E
- Abstract
[This corrects the article on p. 241 in vol. 4, PMID: 23653613.].
- Published
- 2013
- Full Text
- View/download PDF
47. The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant.
- Author
-
Marozeau J, Innes-Brown H, and Blamey PJ
- Abstract
Our ability to listen selectively to single sound sources in complex auditory environments is termed "auditory stream segregation." This ability is affected by peripheral disorders such as hearing loss, as well as plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from distractor notes. Four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference along the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device) influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
- Published
- 2013
- Full Text
- View/download PDF
48. Early ERP Signature of Hearing Impairment in Visual Rhyme Judgment.
- Author
-
Classon E, Rudner M, Johansson M, and Rönnberg J
- Abstract
Postlingually acquired hearing impairment (HI) is associated with changes in the representation of sound in semantic long-term memory. An indication of this is the lower performance on visual rhyme judgment tasks in conditions where phonological and orthographic cues mismatch, requiring high reliance on phonological representations. In this study, event-related potentials (ERPs) were used for the first time to investigate the neural correlates of phonological processing in visual rhyme judgments in participants with acquired HI and normal hearing (NH). Rhyme task word pairs rhymed or not and had matching or mismatching orthography. In addition, the inter-stimulus interval (ISI) was manipulated to be either long (800 ms) or short (50 ms). Long ISIs allow for engagement of explicit, top-down processes, while short ISIs limit the involvement of such mechanisms. We hypothesized lower behavioral performance and N400 and N2 deviations in HI in the mismatching rhyme judgment conditions, particularly in short ISI. However, the results showed a different pattern. As expected, behavioral performance in the mismatch conditions was lower in HI than in NH in short ISI, but ERPs did not differ across groups. In contrast, HI performed on a par with NH in long ISI. Further, HI, but not NH, showed an amplified N2-like response in the non-rhyming, orthographically mismatching condition in long ISI. This was also the rhyme condition in which participants in both groups benefited the most from the possibility to engage top-down processes afforded with the longer ISI. Taken together, these results indicate an early ERP signature of HI in this challenging phonological task, likely reflecting use of a compensatory strategy. This strategy is suggested to involve increased reliance on explicit mechanisms such as articulatory recoding and grapheme-to-phoneme conversion.
- Published
- 2013
- Full Text
- View/download PDF