66 results for "Rönnberg J"
Search Results
2. Reducing the risk of invasive forest pests and pathogens: Combining legislation, targeted management and public awareness
- Author
-
Klapwijk, M.J., Hopkins, A.J.M., Eriksson, L., Pettersson, M., Schroeder, M., Lindelöw, Å., Rönnberg, J., Keskitalo, E.C.H., and Kenis, M.
- Abstract
Intensifying global trade will result in increased numbers of plant pest and pathogen species inadvertently being transported along with cargo. This paper examines current mechanisms for prevention and management of potential introductions of forest insect pests and pathogens in the European Union (EU). Current European legislation has not been found sufficient in preventing invasion, establishment and spread of pest and pathogen species within the EU. Costs associated with future invasions are difficult to estimate but past invasions have led to negative economic impacts in the invaded country. The challenge is combining free trade and free movement of products (within the EU) with protection against invasive pests and pathogens. Public awareness may mobilise the public for prevention and detection of potential invasions and, simultaneously, increase support for eradication and control measures. We recommend focus on commodities in addition to pathways, an approach within the EU using a centralised response unit and, critically, to engage the general public in the battle against establishment and spread of these harmful pests and pathogens.
- Published
- 2016
3. Does working memory training improve speech recognition in noise?
- Author
-
Rudner, M., Dahlström, Ö., Skagerstrand, Åsa, Alvinzi, L., Thunberg, Per, Sörqvist, P., Rönnberg, J., Lyxell, B., and Möller, Claes
- Abstract
Listening to speech in noise is often reported to be effortful, especially for individuals with hearing impairment, and many studies have shown that the ability to recognize speech in noise is positively associated with working memory capacity. We reasoned that if working memory capacity could be increased by training, this might improve the ability to recognize speech in noise and modulate the neural activation associated with it. Adults with normal (NH) and impaired hearing (HI) were randomized to five weeks of CogMed QM training followed by five weeks of no training, or vice versa, according to a crossover design. Auditory and cognitive abilities were tested on four occasions: pre-training, T1; after 5 weeks, T2; after 10 weeks, T3; and after a further six months, T4. During fMRI scanning at T1, T2 and T3, the participants listened to stereotyped matrix-type sentences in pink noise and competing talker noise at individually adapted 50% and 90% intelligibility levels as well as in quiet. Behavioural results show that although HI had worse auditory abilities than NH, there was no significant difference in cognitive ability, with the exception of phonological processing, which tended to be slower (cf. Classon et al. 2013). Performance on most of the cognitive tasks improved across sessions, although this could not be specifically attributed to training. We found no consistent pattern of correlations between working memory and the ability to understand speech in noise either before or after training. fMRI results did not reveal any significant effect of training and, furthermore, there was no significant effect of hearing status. However, there was a significant between-group difference in activation of the left temporal gyrus (-44 -23 10) for the contrast speech in pink noise (across intelligibility levels) vs clear speech. There was also an interaction (p < .001 uncorrected) between group and testing occasion in the right superior frontal gyrus (7 58 16) for the contrast
- Published
- 2016
4. Neuropsychological aspects of driving after brain lesion: Simulator study and on-road driving
- Author
-
Lundqvist, A., Alinder, J., Alm, H., Gerdle, B., Levander, S., and Rönnberg, J.
- Published
- 1997
5. Normal-hearing and hearing-impaired subjects' ability to just follow conversation in competing speech, reversed speech, and noise backgrounds
- Author
-
Hygge, Staffan, Rönnberg, J, Larsby, B, and Arlinger, S
- Abstract
The performance on a conversation-following task by 24 hearing-impaired persons was compared with that of 24 matched controls with normal hearing in the presence of three background noises: (a) speech-spectrum random noise, (b) a male voice, and (c) the male voice played in reverse. The subjects' task was to readjust the sound level of a female voice (signal), every time the signal voice was attenuated, to the subjective level at which it was just possible to understand what was being said. To assess the benefit of lipreading, half of the material was presented audiovisually and half auditorily only. It was predicted that background speech would have a greater masking effect than reversed speech, which would in turn have a lesser masking effect than random noise. It was predicted that hearing-impaired subjects would perform more poorly than the normal-hearing controls in a background of speech. The influence of lipreading was expected to be constant across groups and conditions. The results showed that the hearing-impaired subjects were equally affected by the three background noises and that normal-hearing persons were less affected by the background speech than by noise. The performance of the normal-hearing persons was superior to that of the hearing-impaired subjects. The prediction about lipreading was confirmed. The results were explained in terms of the reduced temporal resolution by the hearing-impaired subjects.
- Published
- 1992
6. Vibratory-coded directional analysis: evaluation of a three-microphone/four-vibrator DSP system.
- Author
-
Borg E, Rönnberg J, and Neovius L
- Abstract
A sound localization aid based on eyeglasses with three microphones and four vibrators was tested in a sound-treated acoustic test room and in an ordinary office. A digital signal-processing algorithm estimated the source angle, which was transformed into eight vibrator codes, each corresponding to a 45-degree sector. The instrument was tested on nine deaf and three deaf-blind individuals. The results show an average hit rate of about 80% in a sound-treated room, with 100% for the front 135-degree sector. The results in a realistic communication situation in an ordinary office room were 70% correct based on single presentations and 95% correct when more realistic criteria for an adequate reaction were used. Ten of the twelve subjects were interested in participating in field tests using a planned miniaturized version.
- Published
- 2001
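The record above (entry 6) describes a DSP algorithm that estimates the source angle and maps it onto eight vibrator codes, each covering a 45-degree sector. The Python sketch below illustrates that kind of angle-to-sector mapping only; the original system's angle convention and code assignment are not given in the abstract, so the function and its conventions are hypothetical.

def angle_to_vibrator_code(azimuth_deg: float) -> int:
    """Map an estimated source azimuth (degrees, 0 = straight ahead,
    increasing clockwise) to one of eight 45-degree sectors (codes 0-7).

    Hypothetical sketch; the actual DSP system's angle convention and
    code assignment are not specified in the abstract.
    """
    # Normalize to [0, 360) and centre sector 0 on straight ahead,
    # i.e. sector 0 spans -22.5 to +22.5 degrees.
    normalized = (azimuth_deg + 22.5) % 360.0
    return int(normalized // 45.0)


if __name__ == "__main__":
    for az in (0, 30, 90, 180, -45, -135):
        print(f"azimuth {az:6.1f} deg -> vibrator code {angle_to_vibrator_code(az)}")

With this convention a source straight ahead falls in code 0 and a source directly behind in code 4; any other eight-way assignment would work the same way.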
7. Prospective memory, working memory, retrospective memory and self-rated memory performance in individuals with and without intellectual disability
- Author
-
Levén, A., Lyxell, B., Andersson, J., Danielsson, H., and Rönnberg, J.
8. Use of the ‘patient journey’ model in the internet-based pre-fitting counseling of a person with hearing disability: study protocol for a randomized controlled trial
- Author
-
Manchaiah Vinaya KC, Stephens Dafydd, Andersson Gerhard, Rönnberg Jerker, and Lunner Thomas
- Subjects
Hearing loss, Hearing impairment, Hearing disability, Patient journey, Counseling, Audiological rehabilitation, Internet, Medicine (General), R5-920
- Abstract
Background: Hearing impairment is one of the most frequent chronic conditions. Persons with a hearing impairment (PHI) have various experiences during their ‘journey’ through hearing loss. In our previous studies we have developed a ‘patient journey’ model of PHI and their communication partners (CPs). We suggest this model could be useful in internet-based pre-fitting counseling of a person with hearing disability (PHD). Methods/Design: A randomized controlled trial (RCT) with waiting list control (WLC) design will be used in this study. One hundred and fifty-eight participants with self-reported hearing disability (that is, a score >20 on the Hearing Handicap Questionnaire (HHQ)) will be recruited to participate in this study. They will be assigned to one of two groups (79 participants in each group): (1) information and counseling provision using the ‘patient journey’ model; and (2) WLC. They will participate in a 30-day (4-week) internet-based counseling program based on the ‘patient journey’ model. Various outcome measures, which focus on hearing disability, depression and anxiety, readiness to change, and acceptance of hearing disability, will be administered pre (one week before) and post (one week and six months after) intervention to evaluate the effectiveness of counseling. Discussion: Internet-based counseling is being introduced as a viable option for audiological rehabilitation. We predict that the ‘patient journey’ model will have several advantages during counseling of a PHD. Such a program, if proven effective, could yield cost- and time-efficient ways of managing hearing disability. Trial registration: ClinicalTrials.gov Protocol Registration System NCT01611129.
- Published
- 2013
- Full Text
- View/download PDF
9. Facial mimicry interference reduces working memory accuracy for facial emotion expressions.
- Author
-
Holmer E, Rönnberg J, Asutay E, Tirado C, and Ekberg M
- Subjects
- Humans, Male, Female, Young Adult, Adult, Memory, Short-Term physiology, Facial Expression, Emotions physiology
- Abstract
Facial mimicry, the tendency to imitate facial expressions of other individuals, has been shown to play a critical role in the processing of emotion expressions. At the same time, there is evidence suggesting that its role might change when the cognitive demands of the situation increase. In such situations, understanding another person is dependent on working memory. However, whether facial mimicry influences working memory representations for facial emotion expressions is not fully understood. In the present study, we experimentally interfered with facial mimicry by using established behavioral procedures, and investigated how this interference influenced working memory recall for facial emotion expressions. Healthy, young adults (N = 36) performed an emotion expression n-back paradigm with two levels of working memory load, low (1-back) and high (2-back), and three levels of mimicry interference: high, low, and no interference. Results showed that, after controlling for block order and individual differences in the perceived valence and arousal of the stimuli, the high level of mimicry interference impaired accuracy when working memory load was low (1-back) but, unexpectedly, not when load was high (2-back). Working memory load had a detrimental effect on performance in all three mimicry conditions. We conclude that facial mimicry might support working memory for emotion expressions when task load is low, but that the supporting effect possibly is reduced when the task becomes more cognitively challenging., Competing Interests: The authors have declared that no competing interests exist., (Copyright: © 2024 Holmer et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
- Published
- 2024
- Full Text
- View/download PDF
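Entry 9 above manipulates working memory load with an n-back paradigm: on each trial the participant judges whether the current facial expression matches the one shown n trials earlier (1-back for low load, 2-back for high load). The Python sketch below is only a generic illustration of how such judgments can be scored; the stimulus codes and response format are hypothetical and not taken from the study.

from typing import Sequence


def nback_accuracy(stimuli: Sequence[str], responses: Sequence[bool], n: int) -> float:
    """Proportion of correct match/no-match judgments in an n-back task.

    `stimuli` are stimulus labels (e.g. emotion-expression codes); `responses`
    are the participant's "match" (True) / "no match" (False) judgments for
    trials n..end. Illustrative only; scoring details in the study may differ.
    """
    scorable = range(n, len(stimuli))          # first n trials have no n-back target
    correct = sum(
        (stimuli[i] == stimuli[i - n]) == responses[i - n]
        for i in scorable
    )
    return correct / len(scorable)


# Example: 2-back (high load) with hypothetical emotion-expression codes
stims = ["happy", "angry", "happy", "angry", "angry", "happy"]
resps = [True, True, False, True]              # judgments for trials 3..6
print(f"2-back accuracy: {nback_accuracy(stims, resps, n=2):.2f}")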
10. Perceptual Doping: A Hypothesis on How Early Audiovisual Speech Stimulation Enhances Subsequent Auditory Speech Processing.
- Author
-
Moradi S and Rönnberg J
- Abstract
Face-to-face communication is one of the most common means of communication in daily life. We benefit from both auditory and visual speech signals that lead to better language understanding. People prefer face-to-face communication when access to auditory speech cues is limited because of background noise in the surrounding environment or in the case of hearing impairment. We demonstrated that an early, short period of exposure to audiovisual speech stimuli facilitates subsequent auditory processing of speech stimuli for correct identification, but early auditory exposure does not. We called this effect "perceptual doping" as an early audiovisual speech stimulation dopes or recalibrates auditory phonological and lexical maps in the mental lexicon in a way that results in better processing of auditory speech signals for correct identification. This short opinion paper provides an overview of perceptual doping and how it differs from similar auditory perceptual aftereffects following exposure to audiovisual speech materials, its underlying cognitive mechanism, and its potential usefulness in the aural rehabilitation of people with hearing difficulties.
- Published
- 2023
- Full Text
- View/download PDF
11. A structural equation mediation model captures the predictions amongst the parameters of the ease of language understanding model.
- Author
-
Homman L, Danielsson H, and Rönnberg J
- Abstract
Objective: The aim of the present study was to assess the validity of the Ease of Language Understanding (ELU) model through a statistical assessment of the relationships among its main parameters: processing speed, phonology, working memory (WM), and dB Speech Noise Ratio (SNR) for a given Speech Recognition Threshold (SRT) in a sample of hearing aid users from the n200 database., Methods: Hearing aid users were assessed on several hearing and cognitive tests. Latent Structural Equation Models (SEMs) were applied to investigate the relationship between the main parameters of the ELU model while controlling for age and PTA. Several competing models were assessed., Results: Analyses indicated that a mediating SEM was the best fit for the data. The results showed that (i) phonology independently predicted speech recognition threshold in both easy and adverse listening conditions, (ii) WM was not predictive of dB SNR for a given SRT in the easier listening conditions, and (iii) processing speed was predictive of dB SNR for a given SRT mediated via WM in the more adverse conditions., Conclusion: The results were in line with the predictions of the ELU model: (i) phonology contributed to dB SNR for a given SRT in all listening conditions, (ii) WM is only invoked when listening conditions are adverse, (iii) better WM capacity aids the understanding of what has been said in adverse listening conditions, and finally (iv) the results highlight the importance and optimization of processing speed in conditions when listening conditions are adverse and WM is activated., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2023 Homman, Danielsson and Rönnberg.)
- Published
- 2023
- Full Text
- View/download PDF
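Entry 11 above reports a mediating structural equation model in which processing speed predicts dB SNR for a given SRT via working memory in adverse conditions, while phonology contributes directly. The study fitted latent SEMs on the n200 database; the sketch below instead shows the classical two-regression mediation logic on simulated data with statsmodels, so the variable names, the simulated effect sizes, and the single-indicator simplification are all assumptions for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; the real study fitted latent SEMs on the n200 database.
rng = np.random.default_rng(0)
n = 200
speed = rng.normal(size=n)                        # processing speed (standardized)
wm = 0.6 * speed + rng.normal(scale=0.8, size=n)  # WM partly driven by speed
phon = rng.normal(size=n)                         # phonological skill
snr = -0.5 * wm - 0.4 * phon + rng.normal(scale=0.7, size=n)  # dB SNR for a given SRT
df = pd.DataFrame({"speed": speed, "wm": wm, "phon": phon, "snr": snr})

# Step 1: mediator (WM) regressed on the predictor (processing speed).
a_path = smf.ols("wm ~ speed", data=df).fit()

# Step 2: outcome regressed on mediator and predictor, with phonology as a
# direct predictor, mirroring the reported direct phonology effect.
b_path = smf.ols("snr ~ wm + speed + phon", data=df).fit()

# Indirect (mediated) effect of speed on SNR via WM: a * b.
indirect = a_path.params["speed"] * b_path.params["wm"]
print(f"a (speed -> wm):     {a_path.params['speed']:.3f}")
print(f"b (wm -> snr):       {b_path.params['wm']:.3f}")
print(f"indirect effect a*b: {indirect:.3f}")
print(f"direct effect c':    {b_path.params['speed']:.3f}")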
12. Editorial: Cognitive hearing science: Investigating the relationship between selective attention and brain activity.
- Author
-
Rönnberg J, Sharma A, Signoret C, Campbell TA, and Sörqvist P
- Abstract
Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
- Published
- 2022
- Full Text
- View/download PDF
13. Aberrant resting-state connectivity of auditory, ventral attention/salience and default-mode networks in adults with attention deficit hyperactivity disorder.
- Author
-
Blomberg R, Signoret C, Danielsson H, Perini I, Rönnberg J, and Capusan AJ
- Abstract
Background: Numerous resting-state studies on attention deficit hyperactivity disorder (ADHD) have reported aberrant functional connectivity (FC) between the default-mode network (DMN) and the ventral attention/salience network (VA/SN). This finding has commonly been interpreted as an index of poorer DMN regulation associated with symptoms of mind wandering in ADHD literature. However, a competing perspective suggests that dysfunctional organization of the DMN and VA/SN may additionally index increased sensitivity to the external environment. The goal of the current study was to test this latter perspective in relation to auditory distraction by investigating whether ADHD-adults exhibit aberrant FC between DMN, VA/SN, and auditory networks., Methods: Twelve minutes of resting-state fMRI data was collected from two adult groups: ADHD ( n = 17) and controls ( n = 17); from which the FC between predefined regions comprising the DMN, VA/SN, and auditory networks were analyzed., Results: A weaker anticorrelation between the VA/SN and DMN was observed in ADHD. DMN and VA/SN hubs also exhibited aberrant FC with the auditory network in ADHD. Additionally, participants who displayed a stronger anticorrelation between the VA/SN and auditory network at rest, also performed better on a cognitively demanding behavioral task that involved ignoring a distracting auditory stimulus., Conclusion: Results are consistent with the hypothesis that auditory distraction in ADHD is linked to aberrant interactions between DMN, VA/SN, and auditory systems. Our findings support models that implicate dysfunctional organization of the DMN and VA/SN in the disorder and encourage more research into sensory interactions with these major networks., Competing Interests: AC had received speaker’s fees, and/or scientific advisory board compensation from Lundbeck, Indivior, DNE Pharma, and Camurus, all outside the scope of the current project. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2022 Blomberg, Signoret, Danielsson, Perini, Rönnberg and Capusan.)
- Published
- 2022
- Full Text
- View/download PDF
14. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model.
- Author
-
Rönnberg J, Signoret C, Andin J, and Holmer E
- Abstract
The review gives an introductory description of the successive development of data patterns based on comparisons between hearing-impaired and normal hearing participants' speech understanding skills, later prompting the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input - in interaction with SLTM and ELTM - taking seconds rather than milliseconds. The multimodal and multilevel nature of representations held in WM and LTM are at the center of the review, being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse of memory systems mechanism are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2022 Rönnberg, Signoret, Andin and Holmer.)
- Published
- 2022
- Full Text
- View/download PDF
15. A decrease in physiological arousal accompanied by stable behavioral performance reflects task habituation.
- Author
-
Micula A, Rönnberg J, Zhang Y, and Ng EHN
- Abstract
Despite the evidence of a positive relationship between task demands and listening effort, the Framework for Understanding Effortful Listening (FUEL) highlights the important role of arousal in an individual's choice to engage in challenging listening tasks. Previous studies have interpreted physiological responses in conjunction with behavioral responses as markers of task engagement. The aim of the current study was to investigate the effect of potential changes in physiological arousal, indexed by the pupil baseline, on task engagement over the course of an auditory recall test. Furthermore, the aim was to investigate whether working memory (WM) capacity and the signal-to-noise ratio (SNR) at which the test was conducted had an effect on changes in arousal. Twenty-one adult hearing aid users with mild to moderately severe symmetrical sensorineural hearing loss were included. The pupil baseline was measured during the Sentence-final Word Identification and Recall (SWIR) test, which was administered in a background noise composed of sixteen talkers. The Reading Span (RS) test was used as a measure of WM capacity. The findings showed that the pupil baseline decreased over the course of the SWIR test. However, recall performance remained stable, indicating that the participants maintained the necessary engagement level required to perform the task. These findings were interpreted as a decline in arousal as a result of task habituation. There was no effect of WM capacity or individual SNR level on the change in pupil baseline over time. A significant interaction was found between WM capacity and SNR level on the overall mean pupil baseline. Individuals with higher WM capacity exhibited an overall larger mean pupil baseline at low SNR levels compared to individuals with poorer WM capacity. This may be related to the ability of individuals with higher WM capacity to perform better than individuals with poorer WM capacity in challenging listening conditions., Competing Interests: EN and AM were employed by Oticon A/S. YZ was employed by Oticon Medical. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2022 Micula, Rönnberg, Zhang and Ng.)
- Published
- 2022
- Full Text
- View/download PDF
16. A Glimpse of Memory Through the Eyes: Pupillary Responses Measured During Encoding Reflect the Likelihood of Subsequent Memory Recall in an Auditory Free Recall Test.
- Author
-
Micula A, Rönnberg J, Książek P, Murmu Nielsen R, Wendt D, Fiedler L, and Ng EHN
- Subjects
- Humans, Mental Recall physiology, Memory, Short-Term, Speech Perception physiology, Hearing Loss, Sensorineural diagnosis, Hearing Aids
- Abstract
The aim of the current study was to investigate whether task-evoked pupillary responses measured during encoding, individual working memory capacity and noise reduction in hearing aids were associated with the likelihood of subsequently recalling an item in an auditory free recall test combined with pupillometry. Participants with mild to moderately severe symmetrical sensorineural hearing loss (n = 21) were included. The Sentence-final Word Identification and Recall (SWIR) test was administered in a background noise composed of sixteen talkers with noise reduction in hearing aids activated and deactivated. The task-evoked peak pupil dilation (PPD) was measured. The Reading Span (RS) test was used as a measure of individual working memory capacity. Larger PPD at a single trial level was significantly associated with higher likelihood of subsequently recalling a word, presumably reflecting the intensity of attention devoted during encoding. There was no clear evidence of a significant relationship between working memory capacity and subsequent memory recall, which may be attributed to the SWIR test and RS test being administered in different modalities, as well as differences in task characteristics. Noise reduction did not have a significant effect on subsequent memory recall. This may be due to the background noise not having a detrimental effect on attentional processing at the favorable signal-to-noise ratio levels at which the test was conducted.
- Published
- 2022
- Full Text
- View/download PDF
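Entries 15 and 16 above rely on two pupillometric measures, the pre-stimulus pupil baseline and the task-evoked peak pupil dilation (PPD). The abstracts do not specify the analysis windows, so the short Python sketch below only illustrates one common convention (baseline = mean pupil size in a pre-stimulus window; PPD = maximum of the baseline-corrected trace); the window length, sampling rate, and synthetic trace are hypothetical.

from typing import Tuple

import numpy as np


def peak_pupil_dilation(trace_mm: np.ndarray, fs_hz: float,
                        baseline_s: float = 1.0) -> Tuple[float, float]:
    """Return (baseline, peak dilation) for one trial's pupil trace.

    Baseline = mean pupil size over the first `baseline_s` seconds before
    stimulus onset; peak dilation = maximum of the baseline-corrected trace.
    Illustrative convention only; the studies' exact windows are not given
    in the abstracts.
    """
    n_base = int(round(baseline_s * fs_hz))
    baseline = float(np.mean(trace_mm[:n_base]))
    ppd = float(np.max(trace_mm[n_base:] - baseline))
    return baseline, ppd


# Example with a synthetic 4-second trace sampled at 60 Hz
fs = 60.0
t = np.arange(0, 4, 1 / fs)
trace = 3.0 + 0.2 * np.exp(-((t - 2.0) ** 2) / 0.3)  # 3 mm baseline plus a transient dilation
base, ppd = peak_pupil_dilation(trace, fs)
print(f"baseline = {base:.2f} mm, peak dilation = {ppd:.2f} mm")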
17. The Effects of Working Memory Load on Auditory Distraction in Adults With Attention Deficit Hyperactivity Disorder.
- Author
-
Blomberg R, Johansson Capusan A, Signoret C, Danielsson H, and Rönnberg J
- Abstract
Cognitive control provides us with the ability to, inter alia, regulate the locus of attention and ignore environmental distractions in accordance with our goals. Auditory distraction is a frequently cited symptom in adults with attention deficit hyperactivity disorder (aADHD)-yet few task-based fMRI studies have explored whether deficits in cognitive control (associated with the disorder) impede the ability to suppress/compensate for exogenously evoked cortical responses to noise in this population. In the current study, we explored the effects of auditory distraction as a function of working memory (WM) load. Participants completed two tasks: an auditory target detection (ATD) task in which the goal was to actively detect salient oddball tones amidst a stream of standard tones in noise, and a visual n-back task consisting of 0-, 1-, and 2-back WM conditions whilst concurrently ignoring the same tonal signal from the ATD task. Results indicated that our sample of young aADHD (n = 17), compared to typically developed controls (n = 17), had difficulty attenuating auditory cortical responses to the task-irrelevant sound when WM demands were high (2-back). Heightened auditory activity to task-irrelevant sound was associated with both poorer WM performance and symptomatic inattentiveness. In the ATD task, we observed a significant increase in functional communication between auditory and salience networks in aADHD. Because performance outcomes were on par with controls for this task, we suggest that this increased functional connectivity in aADHD was likely an adaptive mechanism for suboptimal listening conditions. Taken together, our results indicate that aADHD are more susceptible to noise interference when they are engaged in a primary task. The ability to cope with auditory distraction appears to be related to the WM demands of the task and thus the capacity to deploy cognitive control., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2021 Blomberg, Johansson Capusan, Signoret, Danielsson and Rönnberg.)
- Published
- 2021
- Full Text
- View/download PDF
18. The Influence of Form- and Meaning-Based Predictions on Cortical Speech Processing Under Challenging Listening Conditions: A MEG Study.
- Author
-
Signoret C, Andersen LM, Dahlström Ö, Blomberg R, Lundqvist D, Rudner M, and Rönnberg J
- Abstract
Under adverse listening conditions, prior linguistic knowledge about the form (i.e., phonology) and meaning (i.e., semantics) help us to predict what an interlocutor is about to say. Previous research has shown that accurate predictions of incoming speech increase speech intelligibility, and that semantic predictions enhance the perceptual clarity of degraded speech even when exact phonological predictions are possible. In addition, working memory (WM) is thought to have specific influence over anticipatory mechanisms by actively maintaining and updating the relevance of predicted vs. unpredicted speech inputs. However, the relative impact on speech processing of deviations from expectations related to form and meaning is incompletely understood. Here, we use MEG to investigate the cortical temporal processing of deviations from the expected form and meaning of final words during sentence processing. Our overall aim was to observe how deviations from the expected form and meaning modulate cortical speech processing under adverse listening conditions and investigate the degree to which this is associated with WM capacity. Results indicated that different types of deviations are processed differently in the auditory N400 and Mismatch Negativity (MMN) components. In particular, MMN was sensitive to the type of deviation (form or meaning) whereas the N400 was sensitive to the magnitude of the deviation rather than its type. WM capacity was associated with the ability to process phonological incoming information and semantic integration., (Copyright © 2020 Signoret, Andersen, Dahlström, Blomberg, Lundqvist, Rudner and Rönnberg.)
- Published
- 2020
- Full Text
- View/download PDF
19. The tree species matters: Biodiversity and ecosystem service implications of replacing Scots pine production stands with Norway spruce.
- Author
-
Felton A, Petersson L, Nilsson O, Witzell J, Cleary M, Felton AM, Björkman C, Sang ÅO, Jonsell M, Holmström E, Nilsson U, Rönnberg J, Kalén C, and Lindbladh M
- Subjects
- Biodiversity, Ecosystem, Forests, Norway, Sweden, Trees, Picea, Pinus sylvestris
- Abstract
The choice of tree species used in production forests matters for biodiversity and ecosystem services. In Sweden, damage to young production forests by large browsing herbivores is helping to drive a development where sites traditionally regenerated with Scots pine (Pinus sylvestris) are instead being regenerated with Norway spruce (Picea abies). We provide a condensed synthesis of the available evidence regarding the likely resultant implications for forest biodiversity and ecosystem services from this change in tree species. Apart from some benefits (e.g. reduced stand-level browsing damage), we identified a range of negative outcomes for biodiversity, production, esthetic and recreational values, as well as increased stand vulnerability to storm, frost, and drought damage, and potentially higher risks of pest and pathogen outbreak. Our results are directly relevant to forest owners and policy-makers seeking information regarding the uncertainties, risks, and trade-offs likely to result from changing the tree species in production forests.
- Published
- 2020
- Full Text
- View/download PDF
20. Neural Networks Supporting Phoneme Monitoring Are Modulated by Phonology but Not Lexicality or Iconicity: Evidence From British and Swedish Sign Language.
- Author
-
Rudner M, Orfanidou E, Kästner L, Cardin V, Woll B, Capek CM, and Rönnberg J
- Abstract
Sign languages are natural languages in the visual domain. Because they lack a written form, they provide a sharper tool than spoken languages for investigating lexicality effects which may be confounded by orthographic processing. In a previous study, we showed that the neural networks supporting phoneme monitoring in deaf British Sign Language (BSL) users are modulated by phonology but not lexicality or iconicity. In the present study, we investigated whether this pattern generalizes to deaf Swedish Sign Language (SSL) users. British and SSLs have a largely overlapping phoneme inventory but are mutually unintelligible because lexical overlap is small. This is important because it means that even when signs lexicalized in BSL are unintelligible to users of SSL they are usually still phonologically acceptable. During fMRI scanning, deaf users of the two different sign languages monitored signs that were lexicalized in either one or both of those languages for phonologically contrastive elements. Neural activation patterns relating to different linguistic levels of processing were similar across SLs; in particular, we found no effect of lexicality, supporting the notion that apparent lexicality effects on sublexical processing of speech may be driven by orthographic strategies. As expected, we found an effect of phonology but not iconicity. Further, there was a difference in neural activation between the two groups in a motion-processing region of the left occipital cortex, possibly driven by cultural differences, such as education. Importantly, this difference was not modulated by the linguistic characteristics of the material, underscoring the robustness of the neural activation patterns relating to different linguistic levels of processing., (Copyright © 2019 Rudner, Orfanidou, Kästner, Cardin, Woll, Capek and Rönnberg.)
- Published
- 2019
- Full Text
- View/download PDF
21. Speech Processing Difficulties in Attention Deficit Hyperactivity Disorder.
- Author
-
Blomberg R, Danielsson H, Rudner M, Söderlund GBW, and Rönnberg J
- Abstract
The large body of research that forms the ease of language understanding (ELU) model emphasizes the important contribution of cognitive processes when listening to speech in adverse conditions; however, speech-in-noise (SIN) processing is yet to be thoroughly tested in populations with cognitive deficits. The purpose of the current study was to contribute to the field in this regard by assessing SIN performance in a sample of adolescents with attention deficit hyperactivity disorder (ADHD) and comparing results with age-matched controls. This population was chosen because core symptoms of ADHD include developmental deficits in cognitive control and working memory capacity and because these top-down processes are thought to reach maturity during adolescence in individuals with typical development. The study utilized natural language sentence materials under experimental conditions that manipulated the dependency on cognitive mechanisms in varying degrees. In addition, participants were tested on cognitive capacity measures of complex working memory-span, selective attention, and lexical access. Primary findings were in support of the ELU-model. Age was shown to significantly covary with SIN performance, and after controlling for age, ADHD participants demonstrated greater difficulty than controls with the experimental manipulations. In addition, overall SIN performance was strongly predicted by individual differences in cognitive capacity. Taken together, the results highlight the general disadvantage persons with deficient cognitive capacity have when attending to speech in typically noisy listening environments. Furthermore, the consistently poorer performance observed in the ADHD group suggests that auditory processing tasks designed to tax attention and working memory capacity may prove to be beneficial clinical instruments when diagnosing ADHD.
- Published
- 2019
- Full Text
- View/download PDF
22. Visual Rhyme Judgment in Adults With Mild-to-Severe Hearing Loss.
- Author
-
Rudner M, Danielsson H, Lyxell B, Lunner T, and Rönnberg J
- Abstract
Adults with poorer peripheral hearing have slower phonological processing speed measured using visual rhyme tasks, and it has been suggested that this is due to fading of phonological representations stored in long-term memory. Representations of both vowels and consonants are likely to be important for determining whether or not two printed words rhyme. However, it is not known whether the relation between phonological processing speed and hearing loss is specific to the lower frequency ranges which characterize vowels or higher frequency ranges that characterize consonants. We tested the visual rhyme ability of 212 adults with hearing loss. As in previous studies, we found that rhyme judgments were slower and less accurate when there was a mismatch between phonological and orthographic information. A substantial portion of the variance in the speed of making correct rhyme judgment decisions was explained by lexical access speed. Reading span, a measure of working memory, explained further variance in match but not mismatch conditions, but no additional variance was explained by auditory variables. This pattern of findings suggests possible reliance on a lexico-semantic word-matching strategy for solving the rhyme judgment task. Future work should investigate the relation between adoption of a lexico-semantic strategy during phonological processing tasks and hearing aid outcome.
- Published
- 2019
- Full Text
- View/download PDF
23. fMRI Evidence of Magnitude Manipulation during Numerical Order Processing in Congenitally Deaf Signers.
- Author
-
Andin J, Fransson P, Rönnberg J, and Rudner M
- Subjects
- Adult, Brain physiopathology, Deafness congenital, Deafness physiopathology, Female, Functional Laterality physiology, Humans, Magnetic Resonance Imaging, Male, Nerve Net physiopathology, Sign Language, Young Adult, Brain diagnostic imaging, Deafness diagnostic imaging, Nerve Net diagnostic imaging
- Abstract
Congenital deafness is often compensated by early sign language use leading to typical language development with corresponding neural underpinnings. However, deaf individuals are frequently reported to have poorer numerical abilities than hearing individuals and it is not known whether the underlying neuronal networks differ between groups. In the present study, adult deaf signers and hearing nonsigners performed digit and letter order tasks during functional magnetic resonance imaging. We found the neuronal networks recruited in the two tasks to be generally similar across groups, with significant activation in the dorsal visual stream for the letter order task, suggesting letter identification and position encoding. For the digit order task, no significant activation was found for either of the two groups. Region of interest analyses on parietal numerical processing regions revealed different patterns of activation across groups. Importantly, deaf signers showed significant activation in the right horizontal portion of the intraparietal sulcus for the digit order task, suggesting engagement of magnitude manipulation during numerical order processing in this group.
- Published
- 2018
- Full Text
- View/download PDF
24. The Organization of Working Memory Networks is Shaped by Early Sensory Experience.
- Author
-
Cardin V, Rudner M, De Oliveira RF, Andin J, Su MT, Beese L, Woll B, and Rönnberg J
- Subjects
- Adult, Deafness diagnostic imaging, Female, Humans, Language Development, Magnetic Resonance Imaging, Male, Middle Aged, Nerve Net diagnostic imaging, Neuronal Plasticity physiology, Psycholinguistics, Reaction Time physiology, Sign Language, Young Adult, Deafness physiopathology, Hearing physiology, Memory, Short-Term physiology, Nerve Net physiopathology
- Abstract
Early deafness results in crossmodal reorganization of the superior temporal cortex (STC). Here, we investigated the effect of deafness on cognitive processing. Specifically, we studied the reorganization, due to deafness and sign language (SL) knowledge, of linguistic and nonlinguistic visual working memory (WM). We conducted an fMRI experiment in groups that differed in their hearing status and SL knowledge: deaf native signers, hearing native signers, and hearing nonsigners. Participants performed a 2-back WM task and a control task. Stimuli were signs from British Sign Language (BSL) or moving nonsense objects in the form of point-light displays. We found characteristic WM activations in fronto-parietal regions in all groups. However, deaf participants also recruited bilateral posterior STC during the WM task, independently of the linguistic content of the stimuli, and showed less activation in fronto-parietal regions. Resting-state connectivity analysis showed increased connectivity between frontal regions and STC in deaf compared to hearing individuals. WM for signs did not elicit differential activations, suggesting that SL WM does not rely on modality-specific linguistic processing. These findings suggest that WM networks are reorganized due to early deafness, and that the organization of cognitive networks is shaped by the nature of the sensory inputs available during development.
- Published
- 2018
- Full Text
- View/download PDF
25. Editorial: Cognitive Hearing Mechanisms of Language Understanding: Short- and Long-Term Perspectives.
- Author
-
Ellis RJ, Sörqvist P, Zekveld AA, and Rönnberg J
- Published
- 2017
- Full Text
- View/download PDF
26. The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users.
- Author
-
Moradi S, Wahlin A, Hällgren M, Rönnberg J, and Lidestam B
- Abstract
This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, neither promptly after the training nor at the one-month follow-up. However, no significant between-groups difference nor an interaction between groups and session was observed., Conclusion: Audiovisual training may be considered in aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research.
- Published
- 2017
- Full Text
- View/download PDF
27. Spectrotemporal Modulation Sensitivity as a Predictor of Speech-Reception Performance in Noise With Hearing Aids.
- Author
-
Bernstein JG, Danielsson H, Hällgren M, Stenfelt S, Rönnberg J, and Lunner T
- Subjects
- Hearing Loss, Sensorineural, Humans, Speech, Auditory Threshold, Hearing Aids, Noise, Speech Perception
- Abstract
The audiogram predicts <30% of the variance in speech-reception thresholds (SRTs) for hearing-impaired (HI) listeners fitted with individualized frequency-dependent gain. The remaining variance could reflect suprathreshold distortion in the auditory pathways or nonauditory factors such as cognitive processing. The relationship between a measure of suprathreshold auditory function-spectrotemporal modulation (STM) sensitivity-and SRTs in noise was examined for 154 HI listeners fitted with individualized frequency-specific gain. SRTs were measured for 65-dB SPL sentences presented in speech-weighted noise or four-talker babble to an individually programmed master hearing aid, with the output of an ear-simulating coupler played through insert earphones. Modulation-depth detection thresholds were measured over headphones for STM (2 cycles/octave density, 4-Hz rate) applied to an 85-dB SPL, 2-kHz lowpass-filtered pink-noise carrier. SRTs were correlated with both the high-frequency (2-6 kHz) pure-tone average (HFA; R² = .31) and STM sensitivity (R² = .28). Combined with the HFA, STM sensitivity significantly improved the SRT prediction (ΔR² = .13; total R² = .44). The remaining unaccounted variance might be attributable to variability in cognitive function and other dimensions of suprathreshold distortion. STM sensitivity was most critical in predicting SRTs for listeners <65 years old or with HFA <53 dB HL. Results are discussed in the context of previous work suggesting that STM sensitivity for low rates and low-frequency carriers is impaired by a reduced ability to use temporal fine-structure information to detect dynamic spectra. STM detection is a fast test of suprathreshold auditory function for frequencies <2 kHz that complements the HFA to predict variability in hearing-aid outcomes for speech perception in noise., (© The Author(s) 2016.)
- Published
- 2016
- Full Text
- View/download PDF
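The variance figures in entry 27 above fit together as a standard hierarchical-regression decomposition; writing it out (the numbers are those reported in the abstract, and the decomposition itself is just the usual definition of an R-squared increment):

\[
\Delta R^2_{\mathrm{STM}\,|\,\mathrm{HFA}} = R^2_{\mathrm{HFA+STM}} - R^2_{\mathrm{HFA}} = .44 - .31 = .13,
\]
\[
\text{with } R^2_{\mathrm{HFA}} = .31 \text{ and } R^2_{\mathrm{STM}} = .28 \text{ when each predictor is used alone.}
\]

So STM sensitivity accounts for an additional 13% of the SRT variance beyond the high-frequency audiogram, even though on its own it explains a share (28%) similar to that of the HFA (31%).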
28. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.
- Author
-
Moradi S, Lidestam B, and Rönnberg J
- Subjects
- Aged, Hearing Tests, Humans, Speech, Cues, Hearing Aids, Speech Perception
- Abstract
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context., (© The Author(s) 2016.)
- Published
- 2016
- Full Text
- View/download PDF
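Entry 28 above defines the isolation point (IP) as the shortest presentation time at which a gated speech stimulus is correctly identified. The Python sketch below shows one hypothetical way to operationalize that definition from a sequence of gated responses (requiring the identification to stay correct at all longer gates); the study's exact scoring rule is not given in the abstract.

from typing import Optional, Sequence


def isolation_point(gate_durations_ms: Sequence[float],
                    correct: Sequence[bool]) -> Optional[float]:
    """Shortest presented duration from which identification is correct and
    remains correct for all longer gates; None if never reached.

    Hypothetical operationalization of the isolation point (IP); the study's
    exact scoring rule may differ.
    """
    ip = None
    for duration, ok in zip(gate_durations_ms, correct):
        if ok and ip is None:
            ip = duration          # candidate IP: first correct gate
        elif not ok:
            ip = None              # identification changed at a longer gate: reset
    return ip


# Example: gates in 40 ms increments; identification correct from the 3rd gate onward
print(isolation_point([40, 80, 120, 160, 200], [False, False, True, True, True]))  # -> 120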
29. Student's Second-Language Grade May Depend on Classroom Listening Position.
- Author
-
Hurtig A, Sörqvist P, Ljung R, Hygge S, and Rönnberg J
- Subjects
- Adolescent, Distance Perception physiology, Female, Humans, Language Tests, Male, Students, Sweden, Aptitude physiology, Auditory Perception physiology, Comprehension physiology, Multilingualism
- Abstract
The purpose of this experiment was to explore whether listening positions (close or distant location from the sound source) in the classroom, and classroom reverberation, influence students' score on a test for second-language (L2) listening comprehension (i.e., comprehension of English in Swedish speaking participants). The listening comprehension test administered was part of a standardized national test of English used in the Swedish school system. A total of 125 high school pupils, 15 years old, participated. Listening position was manipulated within subjects, classroom reverberation between subjects. The results showed that L2 listening comprehension decreased as distance from the sound source increased. The effect of reverberation was qualified by the participants' baseline L2 proficiency. A shorter reverberation was beneficial to participants with high L2 proficiency, while the opposite pattern was found among the participants with low L2 proficiency. The results indicate that listening comprehension scores-and hence students' grade in English-may depend on students' classroom listening position.
- Published
- 2016
- Full Text
- View/download PDF
30. Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction.
- Author
-
Sörqvist P, Dahlström Ö, Karlsson T, and Rönnberg J
- Abstract
Whether cognitive load-and other aspects of task difficulty-increases or decreases distractibility is subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which otherwise could be used for attentional control, and therefore cognitive load increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal-task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed an functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually-presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and switching the locus-of-attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information-which decreases distractibility-as a side effect of the increased activity in a focused-attention network.
- Published
- 2016
- Full Text
- View/download PDF
31. Reducing the risk of invasive forest pests and pathogens: Combining legislation, targeted management and public awareness.
- Author
-
Klapwijk MJ, Hopkins AJ, Eriksson L, Pettersson M, Schroeder M, Lindelöw Å, Rönnberg J, Keskitalo EC, and Kenis M
- Subjects
- Commerce legislation & jurisprudence, Ecosystem, European Union, Forestry methods, Forests, Risk, Forestry legislation & jurisprudence, Introduced Species legislation & jurisprudence
- Abstract
Intensifying global trade will result in increased numbers of plant pest and pathogen species inadvertently being transported along with cargo. This paper examines current mechanisms for prevention and management of potential introductions of forest insect pests and pathogens in the European Union (EU). Current European legislation has not been found sufficient in preventing invasion, establishment and spread of pest and pathogen species within the EU. Costs associated with future invasions are difficult to estimate but past invasions have led to negative economic impacts in the invaded country. The challenge is combining free trade and free movement of products (within the EU) with protection against invasive pests and pathogens. Public awareness may mobilise the public for prevention and detection of potential invasions and, simultaneously, increase support for eradication and control measures. We recommend focus on commodities in addition to pathways, an approach within the EU using a centralised response unit and, critically, to engage the general public in the battle against establishment and spread of these harmful pests and pathogens.
- Published
- 2016
- Full Text
- View/download PDF
32. Monitoring Different Phonological Parameters of Sign Language Engages the Same Cortical Language Network but Distinctive Perceptual Ones.
- Author
-
Cardin V, Orfanidou E, Kästner L, Rönnberg J, Woll B, Capek CM, and Rudner M
- Subjects
- Adult, Analysis of Variance, Cerebral Cortex blood supply, Cues, Deafness pathology, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Middle Aged, Oxygen blood, Photic Stimulation, Psychoacoustics, Reaction Time physiology, Semantics, Brain Mapping, Cerebral Cortex physiopathology, Perception physiology, Phonetics
- Abstract
The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.
- Published
- 2016
- Full Text
- View/download PDF
33. Risk of False Positives during Sampling for Heterobasidion annosum s.l.
- Author
-
Gunulf Åberg A, Witzell J, and Rönnberg J
- Abstract
A standard method to detect infection by Heterobasidion annosum sensu lato (s.l.) in stumps or stems is to cut a disc and examine it under a microscope. Concerns have been raised that spores can be transferred from the bark to the cut surface, thus contaminating the sample. The aims of this study were to test whether viable basidiospores of H. annosum s.l. can be transferred from the bark onto disc surfaces by a chainsaw and to investigate the impacts of different sampling procedures on the extent of contaminations. Logs were cut with or without adding basidiospores to the bark prior to the cut. Infection measurements were significantly greater for discs with treated bark (100% infected, infection coverage 40 cm² dm⁻² of disc area) compared with control discs (47% infected, infection coverage 0.2 to 0.3 cm² dm⁻²). In addition, trees were sampled under authentic field conditions using different procedures. The infection measurements differed significantly depending on the procedure; sampling involving debarking or disinfection of the bark with 70% ethanol prior to cutting had lower measurements (6 to 19% and 13% infected, respectively) compared with leaving the bark on untreated (63 to 75% infected). Consideration of the contamination risk is warranted when evaluating the results of earlier studies and when planning new experiments.
- Published
- 2016
- Full Text
- View/download PDF
34. Subjective ratings of masker disturbance during the perception of native and non-native speech.
- Author
-
Kilman L, Zekveld AA, Hällgren M, and Rönnberg J
- Abstract
The aim of the present study was to address how 43 normal-hearing (NH) and hearing-impaired (HI) listeners subjectively experienced the disturbance generated by four masker conditions (i.e., stationary noise, fluctuating noise, Swedish two-talker babble and English two-talker babble) while listening to speech in two target languages, i.e., Swedish (native) or English (non-native). The participants were asked to evaluate their noise-disturbance experience on a continuous scale from 0 to 10 immediately after having performed each listening condition. The data demonstrated a three-way interaction effect between target language, masker condition, and group (HI versus NH). The HI listeners experienced the Swedish-babble masker as significantly more disturbing for the native target language (Swedish) than for the non-native language (English). Additionally, this masker was significantly more disturbing than each of the other masker types during the perception of Swedish target speech. The NH listeners, on the other hand, indicated that the Swedish speech-masker was more disturbing than the stationary and the fluctuating noise-maskers for the perception of English target speech. The NH listeners perceived more disturbance from the speech maskers than the noise maskers. The HI listeners did not perceive the speech maskers as generally more disturbing than the noise maskers. However, they had particular difficulty with the perception of native speech masked by native babble, a common condition in daily-life listening conditions. These results suggest that the characteristics of the different maskers applied in the current study seem to affect the perceived disturbance differently in HI and NH listeners. There was no general difference in the perceived disturbance across conditions between the HI listeners and the NH listeners.
- Published
- 2015
- Full Text
- View/download PDF
35. How does susceptibility to proactive interference relate to speech recognition in aided and unaided conditions?
- Author
-
Ellis RJ and Rönnberg J
- Abstract
Proactive interference (PI) refers to the disruption of new memory acquisition by information already stored in long-term memory. Previous research has shown that susceptibility to PI correlates significantly with the speech-in-noise recognition scores of younger adults with normal hearing. In this study, we report the results of an experiment designed to investigate the extent to which tests of visual PI relate to the speech-in-noise recognition scores of older adults with hearing loss, in aided and unaided conditions. The results suggest that measures of PI correlate significantly with speech-in-noise recognition only in the unaided condition. Furthermore, the relation between PI and speech-in-noise recognition differs from that observed in younger listeners without hearing loss. The findings suggest that the relation between PI tests and the speech-in-noise recognition scores of older adults with hearing loss reflects the capability of the test to index cognitive flexibility.
- Published
- 2015
- Full Text
- View/download PDF
36. Stages of Change Profiles among Adults Experiencing Hearing Difficulties Who Have Not Taken Any Action: A Cross-Sectional Study.
- Author
-
Manchaiah V, Rönnberg J, Andersson G, and Lunner T
- Subjects
- Adult, Cross-Sectional Studies, Decision Making, Female, Humans, Male, Middle Aged, Self Report, United Kingdom epidemiology, Hearing Loss epidemiology
- Abstract
The aim of the current study was to test the hypothesis that adults who experience hearing difficulties, are aware of their difficulties, but have not taken any action would fall under the contemplation and preparation stages of the transtheoretical stages-of-change model. The study employed a cross-sectional design. It was conducted in the United Kingdom, and 90 participants completed the University of Rhode Island Change Assessment (URICA) scale, as well as measures of self-reported hearing disability, self-reported anxiety and depression, and self-reported hearing disability acceptance, and provided additional demographic details online. As predicted, the results indicate that a high percentage of participants (over 90%) were in the contemplation and preparation stages. No statistically significant differences were observed between the stage groups (defined by the highest URICA scores) in factors such as years since hearing disability, self-reported hearing disability, self-reported anxiety and depression, and self-reported hearing disability acceptance. Cluster analysis identified three stages-of-change clusters, which were named decision making (53% of the sample), participation (28% of the sample), and disinterest (19% of the sample). The study results support the stages-of-change model. In addition, implications of the current study and areas for future research are discussed.
- Published
- 2015
- Full Text
- View/download PDF
37. Native and Non-native Speech Perception by Hearing-Impaired Listeners in Noise- and Speech Maskers.
- Author
-
Kilman L, Zekveld A, Hällgren M, and Rönnberg J
- Subjects
- Acoustic Stimulation, Adult, Aged, Audiometry, Pure-Tone, Auditory Threshold, Cues, Female, Humans, Male, Middle Aged, Phonetics, Semantics, Speech Reception Threshold Test, Sweden, Language, Noise adverse effects, Perceptual Masking, Persons With Hearing Impairments psychology, Speech Perception
- Abstract
This study evaluated how hearing-impaired listeners perceive native (Swedish) and non-native (English) speech in the presence of noise and speech maskers. Speech reception thresholds were measured for four different masker types for each target language. The maskers consisted of stationary and fluctuating noise and two-talker babble in Swedish and English. Twenty-three hearing-impaired native Swedish listeners participated, aged between 28 and 65 years. The participants also performed cognitive tests of working memory capacity in Swedish and English, nonverbal reasoning, and an English proficiency test. Results indicated that the speech maskers were more interfering than the noise maskers in both target languages. The larger need for phonetic and semantic cues in a non-native language makes a stationary masker relatively more challenging than a fluctuating-noise masker. Better hearing acuity (pure tone average) was associated with better perception of the target speech in Swedish, and better English proficiency was associated with better speech perception in English. Larger working memory capacity and better pure tone averages were related to better perception of speech masked with fluctuating noise in the non-native language. This suggests that both are relevant in highly taxing conditions. A large variance in performance between the listeners was observed, especially for speech perception in the non-native language., (© The Author(s) 2015.)
- Published
- 2015
- Full Text
- View/download PDF
38. The effect of functional hearing loss and age on long- and short-term visuospatial memory: evidence from the UK biobank resource.
- Author
-
Rönnberg J, Hygge S, Keidser G, and Rudner M
- Abstract
The UK Biobank offers cross-sectional epidemiological data collected on more than 500,000 individuals in the UK aged between 40 and 70 years. The aim of this study was to use the UK Biobank data to investigate the effects of functional hearing loss and hearing aid usage on visuospatial memory function. The selection of relevant variables, after discarding extreme values, resulted in a sub-sample of 138,098 participants. A digit triplets functional hearing test was used to divide the participants into three groups: poor, insufficient and normal hearers. We found negative relationships between functional hearing loss and both visuospatial working memory (i.e., a card pair matching task) and visuospatial episodic long-term memory (i.e., a prospective memory task), with the strongest association for episodic long-term memory. The use of hearing aids showed a small positive effect on working memory performance for the poor hearers, but had no influence on episodic long-term memory. Age also showed strong main effects for both memory tasks and interacted with gender and education for the long-term memory task. Broader theoretical implications based on a memory systems approach are discussed and compared with theoretical alternatives.
- Published
- 2014
- Full Text
- View/download PDF
39. Dynamic relation between working memory capacity and speech recognition in noise during the first 6 months of hearing aid use.
- Author
-
Ng EH, Classon E, Larsby B, Arlinger S, Lunner T, Rudner M, and Rönnberg J
- Subjects
- Acoustic Stimulation, Aged, Aged, 80 and over, Audiometry, Pure-Tone, Auditory Threshold, Cognition, Equipment Design, Female, Hearing Loss, Sensorineural diagnosis, Hearing Loss, Sensorineural psychology, Humans, Male, Middle Aged, Neuropsychological Tests, Persons With Hearing Impairments psychology, Severity of Illness Index, Speech Reception Threshold Test, Time Factors, Correction of Hearing Impairment instrumentation, Hearing Aids, Hearing Loss, Sensorineural rehabilitation, Memory, Short-Term, Noise adverse effects, Perceptual Masking, Persons With Hearing Impairments rehabilitation, Recognition, Psychology, Speech Perception
- Abstract
The present study aimed to investigate the changing relationship between aided speech recognition and cognitive function during the first 6 months of hearing aid use. Twenty-seven first-time hearing aid users with symmetrical mild to moderate sensorineural hearing loss were recruited. Aided speech recognition thresholds in noise were obtained in the hearing aid fitting session as well as at 3 and 6 months postfitting. Cognitive abilities were assessed using a reading span test, which is a measure of working memory capacity, and a cognitive test battery. Results showed a significant correlation between reading span and speech reception threshold during the hearing aid fitting session. This relation weakened significantly over the first 6 months of hearing aid use. Multiple regression analysis showed that reading span was the main predictor of speech recognition thresholds in noise when hearing aids were first fitted, but that the pure-tone average hearing threshold was the main predictor 6 months later. One way of explaining the results is that working memory capacity plays a more important role in speech recognition in noise initially than after 6 months of use. We propose that new hearing aid users engage working memory capacity to recognize unfamiliar processed speech signals because the phonological form of these signals cannot be automatically matched to phonological representations in long-term memory. As familiarization proceeds, the mismatch effect is alleviated and the engagement of working memory capacity is reduced., (© The Author(s) 2014.)
- Published
- 2014
- Full Text
- View/download PDF
40. Gated auditory speech perception in elderly hearing aid users and elderly normal-hearing individuals: effects of hearing impairment and cognitive capacity.
- Author
-
Moradi S, Lidestam B, Hällgren M, and Rönnberg J
- Subjects
- Acoustic Stimulation, Age Factors, Aged, Audiometry, Pure-Tone, Audiometry, Speech, Auditory Threshold, Female, Hearing Loss, Bilateral diagnosis, Hearing Loss, Bilateral psychology, Hearing Loss, Sensorineural diagnosis, Hearing Loss, Sensorineural psychology, Humans, Male, Memory, Middle Aged, Persons With Hearing Impairments psychology, Recognition, Psychology, Speech Acoustics, Aging psychology, Cognition, Correction of Hearing Impairment instrumentation, Hearing Aids, Hearing Loss, Bilateral rehabilitation, Hearing Loss, Sensorineural rehabilitation, Persons With Hearing Impairments rehabilitation, Speech Perception
- Abstract
This study compared elderly hearing aid (EHA) users and elderly normal-hearing (ENH) individuals on the identification of auditory speech stimuli (consonants, words, and final words in sentences) that differed in their linguistic properties. We measured the accuracy with which the target speech stimuli were identified, as well as the isolation points (IPs: the shortest duration, from onset, required to correctly identify the speech target). The relationships between working memory capacity, the IPs, and speech accuracy were also measured. Twenty-four EHA users (with mild to moderate hearing impairment) and 24 ENH individuals participated in the present study. Despite the use of their regular hearing aids, the EHA users had delayed IPs and were less accurate in identifying consonants and words compared with the ENH individuals. The EHA users also had delayed IPs for final word identification in sentences with lower predictability; however, no significant between-group difference in accuracy was observed. Finally, there were no significant between-group differences in terms of IPs or accuracy for final word identification in highly predictable sentences. Our results also showed that, among EHA users, greater working memory capacity was associated with earlier IPs and improved accuracy in consonant and word identification. Together, our findings demonstrate that the gated speech perception ability of EHA users was not at the level of ENH individuals, in terms of IPs and accuracy. In addition, gated speech perception was more cognitively demanding for EHA users than for ENH individuals in the absence of semantic context., (© The Author(s) 2014.)
- Published
- 2014
- Full Text
- View/download PDF
41. The influence of non-native language proficiency on speech perception performance.
- Author
-
Kilman L, Zekveld A, Hällgren M, and Rönnberg J
- Abstract
The present study examined to what extent proficiency in a non-native language influences speech perception in noise. We explored how English proficiency affected native (Swedish) and non-native (English) speech perception in four speech reception threshold (SRT) conditions, comprising two energetic maskers (stationary and fluctuating noise) and two informational maskers (Swedish two-talker babble and English two-talker babble). Twenty-three normal-hearing native Swedish listeners participated, aged between 28 and 64 years. The participants also performed standardized tests of English proficiency, non-verbal reasoning and working memory capacity. Our approach, with its focus on proficiency and the assessment of external as well as internal, listener-related factors, allowed us to examine which variables explained intra- and interindividual differences in native and non-native speech perception performance. The main result was that, for the non-native target, the level of English proficiency was a decisive factor for speech intelligibility in noise. High English proficiency improved performance in all four conditions when the target language was English. The informational maskers interfered more with perception than the energetic maskers, specifically for the non-native target. The study also confirmed that SRTs were better when the target language was native rather than non-native.
- Published
- 2014
- Full Text
- View/download PDF
42. Gated auditory speech perception: effects of listening conditions and cognitive capacity.
- Author
-
Moradi S, Lidestam B, Saremi A, and Rönnberg J
- Abstract
This study aimed to measure the initial portion of the signal required for the correct identification of auditory speech stimuli (or isolation points, IPs) in silence and in noise, and to investigate the relationships between auditory and cognitive functions in silence and in noise. Twenty-one university students were presented with auditory stimuli in a gating paradigm for the identification of consonants, words, and final words in sentences with high and low predictability. The Hearing in Noise Test (HINT), the reading span test, and the Paced Auditory Serial Addition Test were also administered to measure the speech-in-noise ability, working memory capacity, and attentional capacity of the participants, respectively. The results showed that noise delayed the identification of consonants, words, and final words in both types of sentences. HINT performance correlated with working memory and attentional capacities. In the noise condition, there were correlations between HINT performance, cognitive task performance, and the IPs of consonants and words. In the silent condition, there were no correlations between auditory and cognitive tasks. In conclusion, a combination of hearing-in-noise ability, working memory capacity, and attentional capacity is needed for the early identification of consonants and words in noise.
- Published
- 2014
- Full Text
- View/download PDF
43. Cognitive spare capacity in older adults with hearing loss.
- Author
-
Mishra S, Stenfelt S, Lunner T, Rönnberg J, and Rudner M
- Abstract
Individual differences in working memory capacity (WMC) are associated with speech recognition in adverse conditions, reflecting the need to maintain and process speech fragments until lexical access can be achieved. When working memory resources are engaged in unlocking the lexicon, there is less Cognitive Spare Capacity (CSC) available for higher level processing of speech. CSC is essential for interpreting the linguistic content of speech input and preparing an appropriate response, that is, engaging in conversation. Previously, using a Cognitive Spare Capacity Test (CSCT), we showed that in young adults with normal hearing, CSC was not generally related to WMC and that when CSC decreased in noise it could be restored by visual cues. In the present study, we investigated CSC in 24 older adults with age-related hearing loss by administering the CSCT and a battery of cognitive tests. We found generally reduced CSC in older adults with hearing loss compared to the younger group in our previous study, probably because they had poorer cognitive skills and deployed them differently. Importantly, CSC was not reduced in the older group when listening conditions were optimal. Visual cues improved CSC more for this group than for the younger group in our previous study. CSC of older adults with hearing loss was not generally related to WMC, but it was consistently related to episodic long-term memory, suggesting that the efficiency of this processing bottleneck is important for executive processing of speech in this group.
- Published
- 2014
- Full Text
- View/download PDF
44. Cognitive processing load during listening is reduced more by decreasing voice similarity than by increasing spatial separation between target and masker speech.
- Author
-
Zekveld AA, Rudner M, Kramer SE, Lyzenga J, and Rönnberg J
- Abstract
We investigated changes in speech recognition and cognitive processing load due to the masking release attributable to decreasing similarity between target and masker speech. This was achieved by using masker voices of either the same (female) gender as the target speech or a different (male) gender, and/or by spatially separating the target and masker speech using head-related transfer functions (HRTFs). We assessed the relation between the signal-to-noise ratio required for 50% sentence intelligibility, the pupil response and cognitive abilities. We hypothesized that the pupil response, a measure of cognitive processing load, would be larger for co-located maskers and for same-gender compared to different-gender maskers. We further expected that better cognitive abilities would be associated with better speech perception and larger pupil responses, as the allocation of larger capacity may result in more intense mental processing. In line with previous studies, the performance benefit from different-gender compared to same-gender maskers was larger for co-located masker signals. The performance benefit of spatially separated maskers was larger for same-gender maskers. The pupil response was larger for same-gender than for different-gender maskers, but was not reduced by spatial separation. We observed associations between better perception performance and better working memory, better information updating, and better executive abilities when no corrections for multiple comparisons were applied. The pupil response was not associated with cognitive abilities. Thus, although both gender and location differences between target and masker facilitate speech perception, only gender differences lower cognitive processing load. Presenting a more dissimilar masker may facilitate target-masker separation at a later (cognitive) processing stage than increasing the spatial separation between the target and masker. The pupil response provides information about speech perception that complements intelligibility data.
- Published
- 2014
- Full Text
- View/download PDF
45. Use of the 'patient journey' model in the internet-based pre-fitting counseling of a person with hearing disability: lessons from a failed clinical trial.
- Author
-
Manchaiah V, Rönnberg J, Andersson G, and Lunner T
- Abstract
Background: Persons with a hearing impairment have various experiences during their 'journey' through hearing loss. In our previous studies we developed 'patient journey' models of persons with hearing impairment and their communication partners (CPs). The present study aimed to evaluate the effectiveness of using the patient journey model in the internet-based pre-fitting counseling of a person with hearing disability (ClinicalTrials.gov Protocol Registration System: NCT01611129, registered 2012 May 14). Method: The study employed a randomized controlled trial (RCT) with a waiting list control (WLC) design. Although we had intended to recruit 158 participants, we managed to recruit only 80, who were assigned to one of two groups: (1) intervention group; and (2) WLC. Participants from both groups completed a 30-day internet-based counseling program (group 2 waited for a month before the intervention) based on the 'patient journey' model. Various outcome measures focusing on self-reported hearing disability, self-reported depression and anxiety, readiness to change, and self-reported hearing disability acceptance were administered pre- and post-intervention. Results: The trial results suggest that the intervention was not feasible. Treatment compliance was one of the main problems, with a high number of dropouts. Only 18 participants completed both pre- and post-intervention outcome measures; their results were included in the analysis. The results suggest no statistically significant differences between the groups over time on any of the four measures. Conclusions: Due to the limited sample size, no firm conclusions can be drawn about the hypotheses from the current study. Furthermore, possible reasons for the failure of this trial and directions for future research are discussed.
- Published
- 2014
- Full Text
- View/download PDF
46. The acceptance of hearing disability among adults experiencing hearing difficulties: a cross-sectional study.
- Author
-
C Manchaiah VK, Molander P, Rönnberg J, Andersson G, and Lunner T
- Subjects
- Anxiety etiology, Cross-Sectional Studies, Depression etiology, Depressive Disorder, Female, Hearing Loss complications, Humans, Male, Middle Aged, Surveys and Questionnaires, Adaptation, Psychological, Hearing Loss psychology
- Abstract
Objective: This study developed the Hearing Disability Acceptance Questionnaire (HDAQ) and tested its construct and concurrent validities. Design: Cross-sectional. Participants: A total of 90 participants who were experiencing hearing difficulties were recruited in the UK. Outcome Measures: The HDAQ was developed based on the Tinnitus Acceptance Questionnaire (TAQ). Participants completed self-report measures regarding hearing disability acceptance, hearing disability, symptoms of anxiety and depression, and a measure of stages of change. Results: The HDAQ has a two-factor structure that explains 75.69% of its variance. The factors identified were 'activity engagement' and 'avoidance and suppression'. The scale showed sufficient internal consistency (Cronbach's α = 0.86). The HDAQ also had acceptable concurrent validity with regard to self-reported hearing disability, self-reported anxiety and depression, and readiness-to-change measures. Conclusions: Acceptance is likely an important aspect of coping with chronic health conditions. To our knowledge, no previously published and validated scale measures the acceptance of hearing disability; therefore, the HDAQ might be useful in future research. However, the role of acceptance in adjusting to hearing disability must be further investigated.
- Published
- 2014
- Full Text
- View/download PDF
47. Importance of "process evaluation" in audiological rehabilitation: examples from studies on hearing impairment.
- Author
-
Manchaiah V, Danermark B, Rönnberg J, and Lunner T
- Abstract
The main focus of this paper is to discuss the importance of "evaluating the process of change" (i.e., process evaluation) in people with disability by studying their lived experiences. Detailed discussion is devoted to why and how to investigate the process of change in people with disability, and specific examples are provided from studies on the patient journey of persons with hearing impairment (PHI) and their communication partners (CPs). In addition, methodological aspects of process evaluation are discussed in relation to various metatheoretical perspectives, and the discussion is supplemented with relevant literature. Healthcare practice and disability research in general are dominated by the use of outcome measures. Even though the value of outcome measures is not questioned, there seems to be little focus on understanding the process of change over time in relation to health and disability. We suggest that process evaluation adds a temporal dimension and has applications in both clinical practice and research in relation to health and disability.
- Published
- 2014
- Full Text
- View/download PDF
48. Similar digit-based working memory in deaf signers and hearing non-signers despite digit span differences.
- Author
-
Andin J, Orfanidou E, Cardin V, Holmer E, Capek CM, Woll B, Rönnberg J, and Rudner M
- Abstract
Similar working memory (WM) for lexical items has been demonstrated for signers and non-signers while short-term memory (STM) is regularly poorer in deaf than hearing individuals. In the present study, we investigated digit-based WM and STM in Swedish and British deaf signers and hearing non-signers. To maintain good experimental control we used printed stimuli throughout and held response mode constant across groups. We showed that deaf signers have similar digit-based WM performance, despite shorter digit spans, compared to well-matched hearing non-signers. We found no difference between signers and non-signers on STM span for letters chosen to minimize phonological similarity or in the effects of recall direction. This set of findings indicates that similar WM for signers and non-signers can be generalized from lexical items to digits and suggests that poorer STM in deaf signers compared to hearing non-signers may be due to differences in phonological similarity across the language modalities of sign and speech.
- Published
- 2013
- Full Text
- View/download PDF
49. Seeing the talker's face supports executive processing of speech in steady state noise.
- Author
-
Mishra S, Lunner T, Stenfelt S, Rönnberg J, and Rudner M
- Abstract
Listening to speech in noise depletes cognitive resources, affecting speech processing. The present study investigated how remaining resources or cognitive spare capacity (CSC) can be deployed by young adults with normal hearing. We administered a test of CSC (CSCT; Mishra et al., 2013) along with a battery of established cognitive tests to 20 participants with normal hearing. In the CSCT, lists of two-digit numbers were presented with and without visual cues in quiet, as well as in steady-state and speech-like noise at a high intelligibility level. In low-load conditions, two numbers were recalled according to instructions inducing executive processing (updating, inhibition), and in high-load conditions the participants were additionally instructed to recall one extra number, which was always the first item in the list. In line with previous findings, results showed that CSC was sensitive to memory load and executive function but generally not related to working memory capacity (WMC). Furthermore, CSCT scores in quiet were lowered by visual cues, probably due to distraction. In steady-state noise, the presence of visual cues improved CSCT scores, probably by enabling better encoding. Contrary to our expectation, CSCT performance was disrupted more in steady-state than speech-like noise, although only without visual cues, possibly because selective attention could be used to ignore the speech-like background and provide an enriched representation of target items in working memory similar to that obtained in quiet. This interpretation is supported by a consistent association between CSCT scores and updating skills.
- Published
- 2013
- Full Text
- View/download PDF
50. The effects of working memory capacity and semantic cues on the intelligibility of speech in noise.
- Author
-
Zekveld AA, Rudner M, Johnsrude IS, and Rönnberg J
- Subjects
- Acoustic Stimulation, Adult, Auditory Threshold, Cues, Female, Humans, Male, Recognition, Psychology, Speech Reception Threshold Test, Task Performance and Analysis, Young Adult, Memory, Short-Term, Noise adverse effects, Perceptual Masking, Phonetics, Speech Intelligibility, Speech Perception
- Abstract
This study examined how semantically related information facilitates the intelligibility of spoken sentences in the presence of masking sound, and how this facilitation is influenced by masker type and by individual differences in cognitive functioning. Dutch sentences were masked by stationary noise, fluctuating noise, or an interfering talker. Each sentence was preceded by a text cue; cues were either three words that were semantically related to the sentence or three unpronounceable nonwords. Speech reception thresholds were adaptively measured. Additional measures included working memory capacity (reading span and size comparison span), linguistic closure ability (text reception threshold), and delayed sentence recognition. Word cues facilitated speech perception in noise similarly for all masker types. Cue benefit was related to reading span performance when the masker was interfering speech, but not when other maskers were used, and it did not correlate with text reception threshold or size comparison span. Better reading span performance was furthermore associated with enhanced delayed recognition of sentences preceded by word relative to nonword cues, across masker types. The results suggest that working memory capacity is associated with release from informational masking by semantically related information, and additionally with the encoding, storage, or retrieval of speech content in memory.
- Published
- 2013
- Full Text
- View/download PDF