56 results on '"Einat Liebenthal"'
Search Results
2. Sex Effect on Presurgical Language Mapping in Patients With a Brain Tumor
- Author
-
Shun Yao, Einat Liebenthal, Parikshit Juvekar, Adomas Bunevicius, Matthew Vera, Laura Rigolo, Alexandra J. Golby, and Yanmei Tie
- Subjects
sex effect, presurgical language mapping, brain tumor, functional MRI (fMRI), functional connectivity, supplementary motor area (SMA), Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Differences between males and females in brain development and in the organization and hemispheric lateralization of brain functions have been described, including in language. Sex differences in language organization may have important implications for language mapping performed to assess, and minimize neurosurgical risk to, language function. This study examined the effect of sex on the activation and functional connectivity of the brain, measured with presurgical functional magnetic resonance imaging (fMRI) language mapping in patients with a brain tumor. We carried out a retrospective analysis of data from neurosurgical patients treated at our institution who met the criteria of pathological diagnosis (malignant brain tumor), tumor location (left hemisphere), and fMRI paradigms [sentence completion (SC); antonym generation (AG); and resting-state fMRI (rs-fMRI)]. Forty-seven patients (22 females, mean age = 56.0 years) were included in the study. Across the SC and AG tasks, females relative to males showed greater activation in limited areas, including the left inferior frontal gyrus classically associated with language. In contrast, males relative to females showed greater activation in extended areas beyond the classic language network, including the supplementary motor area (SMA) and precentral gyrus. In rs-fMRI, functional connectivity of the left SMA was stronger with inferior temporal pole (TP) areas in females, and with several midline areas in males. The findings are overall consistent with theories that females rely more on specialized language areas, and males more on generalized brain areas, for language function. Importantly, the findings suggest that sex could affect fMRI language mapping. Thus, considering sex as a variable in presurgical language mapping merits further investigation.
- Published
- 2020
- Full Text
- View/download PDF
3. Optimizing Within-Subject Experimental Designs for jICA of Multi-Channel ERP and fMRI
- Author
-
Jain Mangalathu-Arumana, Einat Liebenthal, and Scott A. Beardsley
- Subjects
fMRI, ERP, EEG, independent component analysis (ICA), multimodal neuroimaging, modeling, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Joint independent component analysis (jICA) can be applied within subject for fusion of multi-channel event-related potentials (ERP) and functional magnetic resonance imaging (fMRI), to measure brain function at high spatiotemporal resolution (Mangalathu-Arumana et al., 2012). However, the impact of experimental design choices on jICA performance has not been systematically studied. Here, the sensitivity of jICA for recovering neural sources in individual data was evaluated as a function of imaging SNR, number of independent representations of the ERP/fMRI data, relationship between instantiations of the joint ERP/fMRI activity (linear, non-linear, uncoupled), and type of sources (varying parametrically and non-parametrically across representations of the data), using computer simulations. Neural sources were simulated with spatiotemporal and noise attributes derived from experimental data. The best performance, maximizing both cross-modal data fusion and the separation of brain sources, occurred with a moderate number of representations of the ERP/fMRI data (10–30), as in a mixed block/event related experimental design. Importantly, the type of relationship between instantiations of the ERP/fMRI activity, whether linear, non-linear or uncoupled, did not in itself impact jICA performance, and was accurately recovered in the common profiles (i.e., mixing coefficients). Thus, jICA provides an unbiased way to characterize the relationship between ERP and fMRI activity across brain regions, in individual data, rendering it potentially useful for characterizing pathological conditions in which neurovascular coupling is adversely affected.
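The within-subject jICA scheme evaluated in this abstract can be sketched numerically: each representation of the data contributes one concatenated ERP/fMRI feature vector, and ICA recovers joint components together with their shared cross-representation profiles (the mixing coefficients). A minimal sketch on simulated data, assuming illustrative dimensions and a FastICA decomposition rather than the paper's actual implementation:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Assumed dimensions: 40 representations of the data (e.g., task blocks),
# 500 ERP features (channels x time points), 2000 fMRI features (voxels).
n_rep, n_erp, n_fmri = 40, 500, 2000

# Two simulated neural sources; their expression varies across
# representations via shared (joint) profiles, i.e., mixing coefficients.
profiles = rng.laplace(size=(n_rep, 2))        # non-Gaussian, as ICA requires
src_erp = rng.normal(size=(2, n_erp))
src_fmri = rng.normal(size=(2, n_fmri))
erp = profiles @ src_erp + 0.1 * rng.normal(size=(n_rep, n_erp))
fmri = profiles @ src_fmri + 0.1 * rng.normal(size=(n_rep, n_fmri))

def zscore(x):
    return (x - x.mean(0)) / x.std(0)

# Joint ICA: z-score each modality, concatenate features, decompose.
joint = np.hstack([zscore(erp), zscore(fmri)])  # (n_rep, n_erp + n_fmri)
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(joint)            # cross-representation profiles
erp_maps = ica.mixing_[:n_erp]                  # ERP part of each component
fmri_maps = ica.mixing_[n_erp:]                 # fMRI part of each component
```

Because both modalities share one set of mixing coefficients, coupled ERP/fMRI activity loads onto the same joint component, which is what makes the recovered profiles interpretable as the cross-modal relationship.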
- Published
- 2018
- Full Text
- View/download PDF
4. The Language, Tone and Prosody of Emotions: Neural Dynamics of Spoken-Word Valence Perception
- Author
-
Einat Liebenthal, David A. Silbersweig, and Emily Stern
- Subjects
Amygdala, Emotions, Neural Pathways, Speech Perception, Word Processing, fMRI, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala – a subcortical center for emotion perception – are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody which evolves on longer time scales and is conveyed by fine-grained spectral cues appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, appears more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.
- Published
- 2016
- Full Text
- View/download PDF
5. DynAMoS: The Dynamic Affective Movie Clip Database for Subjectivity Analysis.
- Author
-
Jeffrey M. Girard, Yanmei Tie, and Einat Liebenthal
- Published
- 2023
- Full Text
- View/download PDF
6. The ILHBN: challenges, opportunities, and solutions from harmonizing data under heterogeneous study designs, target populations, and measurement protocols
- Author
-
Sy-Miin Chow, Inbal Nahum-Shani, Justin T. Baker, Donna Spruijt-Metz, Nicholas B. Allen, Ryan P. Auerbach, Genevieve F. Dunton, Naomi P. Friedman, Stephen S. Intille, Predrag Klasnja, Benjamin Marlin, Matthew K. Nock, Scott L. Rauch, Misha Pavel, Scott Vrieze, David W. Wetter, Evan M. Kleiman, Timothy R. Brick, Heather Perry, Dana L. Wolff-Hughes, and Einat Liebenthal
- Subjects
Behavioral Neuroscience, Applied Psychology
- Abstract
The ILHBN is funded by the National Institutes of Health to collaboratively study the interactive dynamics of behavior, health, and the environment using Intensive Longitudinal Data (ILD) to (a) understand and intervene on behavior and health and (b) develop new analytic methods to innovate behavioral theories and interventions. The heterogeneous study designs, populations, and measurement protocols adopted by the seven studies within the ILHBN created practical challenges, but also unprecedented opportunities to capitalize on data harmonization to provide comparable views of data from different studies, enhance the quality and utility of expensive and hard-won ILD, and amplify scientific yield. The purpose of this article is to provide a brief report of the challenges, opportunities, and solutions from some of the ILHBN's cross-study data harmonization efforts. We review the process through which harmonization challenges and opportunities motivated the development of tools and collection of metadata within the ILHBN. A variety of strategies have been adopted within the ILHBN to facilitate harmonization of ecological momentary assessment, location, accelerometer, and participant engagement data while preserving theory-driven heterogeneity and data privacy considerations. Several tools have been developed by the ILHBN to resolve challenges in integrating ILD across multiple data streams and time scales both within and across studies. Harmonization of distinct longitudinal measures, measurement tools, and sampling rates across studies is challenging, but also opens up new opportunities to address cross-cutting scientific themes of interest. Health behavior changes, such as the prevention of suicidal thoughts and behaviors, smoking, drug use, and alcohol use, and the promotion of mental health, sleep, physical activity, and reduced sedentary behavior, are difficult to sustain.
The ILHBN is a cooperative agreement network funded jointly by seven participating units within the National Institutes of Health to collaboratively study how factors that occur in individuals’ everyday life and in their natural environment influence the success of positive health behavior changes. This article discusses how information collected using smartphones, wearables, and other devices can provide helpful active and passive reflections of the participants’ extent of risk and resources at the moment for an extended period of time. However, successful engagement and retention of participants also require tailored adaptations of study designs, measurement tools, measurement intervals, study span, and device choices that create hurdles in integrating (harmonizing) data from multiple studies. We describe some of the challenges, opportunities, and solutions that emerged from harmonizing intensive longitudinal data under heterogeneous study and participant characteristics within the ILHBN, and share some tools and recommendations to facilitate future data harmonization efforts.
- Published
- 2022
- Full Text
- View/download PDF
7. 370. Linguistic and Non-Linguistic Digital Markers of Conceptual Disorganization in Psychotic Illness
- Author
-
Einat Liebenthal, Michaela Ennis, Habiballah Rahimi Eichi, Eric Lin, Yoonho Chung, and Justin Baker
- Subjects
Biological Psychiatry
- Published
- 2023
- Full Text
- View/download PDF
8. Abnormal semantic processing of threat words associated with excitement and hostility symptoms in schizophrenia
- Author
-
Hong Pan, Emily Stern, Sara Dar, Adam Savitz, Thomas E. Smith, David Silbersweig, Einat Liebenthal, and Yulia Landa
- Subjects
Emotions, Word processing, Hostility, Angular gyrus, Inferior temporal gyrus, Humans, Valence (psychology), Brain, Cognition, Magnetic Resonance Imaging, Semantics, Psychiatry and Mental health, Schizophrenia, Anxiety, Biological Psychiatry
- Abstract
Background Schizophrenia (SZ) is associated with devastating emotional, cognitive and language impairments. Understanding the deficits in each domain and their interactions is important for developing novel, targeted psychotherapies. This study tested whether negative-threat word processing is altered in individuals with SZ compared to healthy controls (HC), in relation to SZ symptom severity across domains. Methods Thirty-one SZ and seventeen HC subjects were scanned with functional magnetic resonance imaging while silently reading negative-threat and neutral words. Post-scan, subjects rated the valence of each word. The effects of group (SZ, HC), word type (negative, neutral), task period (early, late), and severity of clinical symptoms (positive, negative, excitement/hostility, cognitive, depression/anxiety), on word valence ratings and brain activation, were analyzed. Results SZ and HC subjects rated negative versus neutral words as more negative. The SZ subgroup with severe versus mild excitement/hostility symptoms rated the negative words as more negative. SZ versus HC subjects hyperactivated left language areas (angular gyrus, middle/inferior temporal gyrus (early period)) and the amygdala (early period) to negative words, and the amygdala (late period) to neutral words. In SZ, activation to negative versus neutral words in left dorsal temporal pole and dorsal anterior cingulate was positively correlated with excitement/hostility scores. Conclusions A negatively-biased behavioral response to negative-threat words was seen in SZ with severe versus mild excitement/hostility symptoms. The biased behavioral response was mediated by hyperactivation of brain networks associated with semantic processing of emotion concepts. Thus, word-level semantic processing may be a relevant psychotherapeutic target in SZ.
- Published
- 2021
- Full Text
- View/download PDF
9. EEG and fMRI coupling and decoupling based on joint independent component analysis (jICA)
- Author
-
Nicholas Heugel, Scott A. Beardsley, and Einat Liebenthal
- Subjects
Brain Mapping ,General Neuroscience ,Brain ,Neurovascular Coupling ,Electroencephalography ,Magnetic Resonance Imaging ,Article - Abstract
BACKGROUND: Meaningful integration of functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) requires knowing whether these measurements reflect the activity of the same neural sources, i.e., estimating the degree of coupling and decoupling between the neuroimaging modalities. NEW METHOD: This paper proposes a method to quantify the coupling and decoupling of fMRI and EEG signals based on the mixing matrix produced by joint independent component analysis (jICA). The method is termed fMRI/EEG-jICA. RESULTS: fMRI and EEG acquired during a syllable detection task with variable syllable presentation rates (0.25-3 Hz) were separated with jICA into two spatiotemporally distinct components: a primary component that increased nonlinearly in amplitude with syllable presentation rate, putatively reflecting an obligatory auditory response, and a secondary component that declined nonlinearly with syllable presentation rate, putatively reflecting an auditory attention orienting response. The two EEG subcomponents were of similar amplitude, but the secondary fMRI subcomponent was tenfold smaller than the primary one. COMPARISON TO EXISTING METHOD: fMRI multiple regression analysis yielded a map more consistent with the primary than the secondary fMRI subcomponent of jICA, as determined by a greater area under the curve (0.5 versus 0.38) in a sensitivity and specificity analysis of spatial overlap. CONCLUSION: fMRI/EEG-jICA revealed spatiotemporally distinct brain networks with greater sensitivity than fMRI multiple regression analysis, demonstrating how this method can be used to leverage EEG signals to inform the detection and functional characterization of fMRI signals. fMRI/EEG-jICA may be useful for studying neurovascular coupling at a macro level, e.g., in neurovascular disorders.
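A simple way to read coupling and decoupling off a jICA mixing matrix, in the spirit of the method described above, is to compare each joint component's loading magnitude within the EEG block of features versus the fMRI block. A hedged sketch (the feature counts, the synthetic matrix, and the RMS-ratio summary are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical jICA mixing matrix over concatenated [EEG | fMRI] features:
# rows are features, columns are joint components.
n_eeg, n_fmri = 64, 1000
A = rng.normal(size=(n_eeg + n_fmri, 2))
A[n_eeg:, 1] *= 0.1   # component 2: strong EEG loading, weak fMRI loading

def modality_balance(A, n_eeg):
    """RMS loading of each joint component in EEG relative to fMRI.

    A ratio near 1 suggests coupled EEG/fMRI expression of a component;
    a ratio far from 1 suggests decoupling, as with the secondary fMRI
    subcomponent reported here (roughly tenfold smaller than the primary).
    """
    eeg_rms = np.sqrt((A[:n_eeg] ** 2).mean(axis=0))
    fmri_rms = np.sqrt((A[n_eeg:] ** 2).mean(axis=0))
    return eeg_rms / fmri_rms

print(modality_balance(A, n_eeg))  # component 2's ratio far exceeds component 1's
```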
- Published
- 2022
10. Movie-watching fMRI for presurgical language mapping in patients with brain tumour
- Author
-
Mark Vangel, Haijun Wang, Alexandra J. Golby, Laura Rigolo, Einat Liebenthal, Yanmei Tie, Shun Yao, and Fuxing Yang
- Subjects
Motion Pictures, Brain Mapping, Aphasia, Mind-wandering, Humans, Language, General linear model, Brain Neoplasms, Cognition, Magnetic Resonance Imaging, Functional imaging, Psychiatry and Mental health, Surgery, Neurology (clinical), Functional magnetic resonance imaging
- Abstract
The primary goal of presurgical language mapping is localising critical language areas with high sensitivity (ie, capturing areas in which resection could lead to language deficits) and specificity (ie, excluding non-language areas) and reliably determining language hemispheric dominance, on an individual basis. Language mapping is challenging due to the widely distributed functional organisation of language in the frontal, temporal and parietal lobes, and in neurosurgical patients the possibility of tumour-induced functional reorganisation. A major drawback of conventional task-based functional magnetic resonance imaging (tb-fMRI) recommended for presurgical language mapping1 is the contingency on patient performance of precisely timed tasks (eg, antonym generation—AntGen). Drawbacks of task-free resting-state fMRI (rs-fMRI) include confounding effects of ‘mind wandering’ and sensitivity to motion artefacts. In contrast, movie watching is a rich, stimulating and naturalistic activity, predicted to constrain cognitive processes and engage the distributed, multimodal neural networks supporting language function in real life.2 Our previous study demonstrated individual language mapping using movie-watching fMRI (mw-fMRI) in neurologically healthy subjects.3 Here, we examine mw-fMRI language mapping in presurgical patients with a brain tumour encroaching on putative language cortex, and varying levels of language disruption. We hypothesise that mw-fMRI versus AntGen tb-fMRI, and rs-fMRI, will provide comprehensive language mapping at reduced burden, as determined by metrics of in-scanner head motion, and mapping specificity, sensitivity and lateralisation. Mw-fMRI was compared with clinically indicated AntGen tb-fMRI in 34 patients with brain tumour undergoing presurgical language mapping, and with rs-fMRI in 22 of these patients. See online supplemental methods for exclusion criteria, and online supplemental table S1 for demographic and clinical information. 
Language maps were generated from tb-fMRI using a general linear model, and from mw-fMRI and rs-fMRI using independent …
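For the lateralisation metric mentioned above, a common formulation is the laterality index LI = (L − R) / (L + R), computed over suprathreshold voxels in homologous left/right regions of interest. A minimal sketch; the voxel counts, ROI definitions, and the 0.2 dominance cutoff are common conventions for illustration, not values from this study:

```python
def lateralization_index(left_count, right_count):
    """Laterality index: +1 = fully left-lateralized, -1 = fully right.

    Counts are suprathreshold voxels in homologous left/right language
    ROIs (ROI definitions and statistical thresholds are assumptions).
    """
    return (left_count - right_count) / (left_count + right_count)

# e.g., 840 suprathreshold voxels in a left frontal ROI vs 260 on the right
li = lateralization_index(840, 260)
print(round(li, 2))  # 0.53 -> left-dominant (LI > 0.2 by a common convention)
```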
- Published
- 2021
11. Sex Difference in Language Processing in Patients With Malignant Brain Tumors
- Author
-
Parikshit Juvekar, Alexandra J. Golby, Adomas Bunevicius, Einat Liebenthal, Yanmei Tie, Matthew Vera, Laura Rigolo, and Shun Yao
- Subjects
Oncology, Internal medicine, Surgery, Neurology (clinical)
- Published
- 2019
- Full Text
- View/download PDF
12. The Effect of COVID-19 Shelter in Place Orders on Loneliness of Schizophrenia and Bipolar Disorder Patients
- Author
-
Justin T. Baker, Einat Liebenthal, Linda Valeri, Jukka-Pekka Onnela, Scott L. Rauch, Zixu Wang, Aijin Wang, Habiballah Rahimi Eichi, Lisa B. Dixon, Dost Öngür, and Russell Schutt
- Subjects
Shelter in place, Coronavirus disease 2019 (COVID-19), Schizophrenia, Loneliness, Bipolar disorder, Psychiatry, Biological Psychiatry
- 2021
- Full Text
- View/download PDF
13. Smartphone-Based Markers of Social Activity in Schizophrenia and Bipolar Disorder
- Author
-
Russell Schutt, Lisa B. Dixon, Jukka-Pekka Onnela, Aijin Wang, Scott L. Rauch, Justin T. Baker, Habiballah Rahimi Eichi, Zixu Wang, Linda Valeri, Einat Liebenthal, and Dost Öngür
- Subjects
Schizophrenia, Social activity, Bipolar disorder, Psychiatry, Biological Psychiatry
- 2021
- Full Text
- View/download PDF
14. Differential activation of the visual word form area during auditory phoneme perception in youth with dyslexia
- Author
-
Lisa L. Conant, Jeffrey R. Binder, Einat Liebenthal, Mark S. Seidenberg, and Anjali Desai
- Subjects
Speech perception, Adolescent, Cognitive Neuroscience, Experimental and Cognitive Psychology, Dyslexia, Phonetics, Reading, Perception, Humans, Visual word form area, Child, Categorical perception, Speech processing, Magnetic Resonance Imaging, Temporal Lobe, Learning disability, Speech Perception, Occipital Lobe, Psychology
- Abstract
Developmental dyslexia is a learning disorder characterized by difficulties reading words accurately and/or fluently. Several behavioral studies have suggested the presence of anomalies at an early stage of phoneme processing, when the complex spectrotemporal patterns in the speech signal are analyzed and assigned to phonemic categories. In this study, fMRI was used to compare brain responses associated with categorical discrimination of speech syllables (P) and acoustically matched nonphonemic stimuli (N) in children and adolescents with dyslexia and in typically developing (TD) controls, aged 8-17 years. The TD group showed significantly greater activation during the P condition relative to N in an area of the left ventral occipitotemporal cortex that corresponds well with the region referred to as the "visual word form area" (VWFA). Regression analyses using reading performance as a continuous variable across the full group of participants yielded similar results. Overall, the findings are consistent with those of previous neuroimaging studies using print stimuli in individuals with dyslexia that found reduced activation in left occipitotemporal regions; however, the current study shows that these activation differences seen during reading are apparent during auditory phoneme discrimination in youth with dyslexia, suggesting that the primary deficit in at least a subset of children may lie early in the speech processing stream and that categorical perception may be an important target of early intervention in children at risk for dyslexia.
- Published
- 2019
15. Loneliness of Schizophrenia and Bipolar Disorder Patients in a Multi-Year mHealth Study
- Author
-
Russell Schutt, Lisa B. Dixon, Zixu Wang, Einat Liebenthal, Justin T. Baker, Dost Öngür, Jukka-Pekka Onnela, Scott L. Rauch, Habiballah Rahimi Eichi, Linda Valeri, and Aijin Wang
- Subjects
Schizophrenia, Loneliness, Bipolar disorder, Psychiatry, mHealth, Biological Psychiatry
- 2021
- Full Text
- View/download PDF
16. Towards Phenotyping Treatment Response in Borderline Personality Disorder: Quantitative Language Analysis of Clinical Interviews
- Author
-
Justin T. Baker, Katie Fairbank-Haynes, Nathaniel Shogren, Eric Lin, Blaise Aguirre, and Einat Liebenthal
- Subjects
Treatment response, Language analysis, Borderline personality disorder, Biological Psychiatry, Clinical psychology
- Published
- 2021
- Full Text
- View/download PDF
17. Method for spatial overlap estimation of electroencephalography and functional magnetic resonance imaging responses
- Author
-
Scott A. Beardsley, N. Heugel, and Einat Liebenthal
- Subjects
Adult, Auditory oddball, Posterior parietal cortex, Electroencephalography, Humans, Evoked Potentials, Source model, Cerebral Cortex, Brain Mapping, General Neuroscience, Neurosciences, Signal Processing, Computer-Assisted, Event-Related Potentials, P300, Magnetic Resonance Imaging, Auditory Perception, Functional magnetic resonance imaging
- Abstract
Background: Simultaneous functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) measurements may represent activity from partially divergent neural sources, but this factor is seldom modeled in fMRI-EEG data integration. New method: This paper proposes an approach to estimate the spatial overlap between sources of activity measured simultaneously with fMRI and EEG. Following the extraction of task-related activity, the key steps include: 1) distributed source reconstruction of the task-related ERP activity (ERP source model); 2) transformation of fMRI activity to the ERP spatial scale by forward modelling of the scalp potential field distribution and backward source reconstruction (fMRI source simulation); and 3) optimization of fMRI and ERP thresholds to maximize spatial overlap without a priori constraints of coupling (overlap calculation). Results: fMRI and ERP responses were recorded simultaneously in 15 subjects performing an auditory oddball task. A high degree of spatial overlap between sources of fMRI and ERP responses (in 9 or more of 15 subjects) was found specifically within temporoparietal areas associated with the task. Areas of non-overlap in fMRI and ERP sources were relatively small and inconsistent across subjects. Comparison with existing method: The ERP and fMRI sources estimated with jICA alone overlapped in just 4 of 15 subjects, and strictly in the parietal cortex. Conclusion: The study demonstrates that the new fMRI-ERP spatial overlap estimation method provides greater spatiotemporal detail of the cortical dynamics than jICA alone. As such, we propose that it is a superior method for the integration of fMRI and EEG to study brain function.
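Step 3 of the method above, threshold optimization to maximize spatial overlap without coupling priors, can be illustrated as a Dice-coefficient grid search over per-map thresholds. A sketch on synthetic source-space maps; the vertex count, noise level, and percentile grid are assumptions for illustration, not the paper's parameters:

```python
from itertools import product

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical source-space maps (one value per cortical vertex) for the
# reconstructed ERP activity and the simulated fMRI activity.
n_vertices = 5000
truth = np.zeros(n_vertices)
truth[:400] = 1.0                              # shared active patch
erp_map = truth + 0.3 * rng.normal(size=n_vertices)
fmri_map = truth + 0.3 * rng.normal(size=n_vertices)

def dice(a, b):
    """Dice coefficient between two binary (boolean) maps."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

# Grid-search per-map percentile thresholds to maximize spatial overlap,
# with no a priori constraint tying the two thresholds together.
best = max(
    (dice(erp_map > np.percentile(erp_map, p1),
          fmri_map > np.percentile(fmri_map, p2)), p1, p2)
    for p1, p2 in product(range(80, 99, 2), repeat=2)
)
print(best)  # (maximum Dice, ERP percentile, fMRI percentile)
```

In the full method the maps being thresholded come from distributed source reconstruction, so both live on the same cortical surface and the overlap is directly comparable.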
- Published
- 2019
- Full Text
- View/download PDF
18. Differential Rates of Perinatal Maturation of Human Primary and Nonprimary Auditory Cortex
- Author
-
Jeffrey J. Neil, Kush Kapur, Brian B. Monson, Einat Liebenthal, Terrie E. Inder, Cynthia E. Rogers, Christopher D. Smyser, Abraham Brownell, Zach Eaton-Rosen, and Simon K. Warfield
- Subjects
Physiology, audition, Development, Auditory cortex, Cohort Studies, Neuroimaging, Gyrus, cortical development, preterm infants, Gray Matter, General Neuroscience, Postmenstrual Age, Infant, Newborn, Magnetic resonance imaging, diffusion tensor imaging, White Matter, Diffusion Magnetic Resonance Imaging, Cerebral cortex, Child Language, Infant, Premature
- Abstract
Primary and nonprimary cerebral cortex mature along different timescales; however, the differences between the rates of maturation of primary and nonprimary cortex are unclear. Cortical maturation can be measured through changes in tissue microstructure detectable by diffusion magnetic resonance imaging (MRI). In this study, diffusion tensor imaging (DTI) was used to characterize the maturation of Heschl’s gyrus (HG), which contains both primary auditory cortex (pAC) and nonprimary auditory cortex (nAC), in 90 preterm infants between 26 and 42 weeks postmenstrual age (PMA). The preterm infants were in different acoustical environments during their hospitalization: 46 in open ward beds and 44 in single rooms. A control group consisted of 15 term-born infants. Diffusion parameters revealed that (1) changes in cortical microstructure that accompany cortical maturation had largely already occurred in pAC by 28 weeks PMA, and (2) rapid changes were taking place in nAC between 26 and 42 weeks PMA. At term equivalent PMA, diffusion parameters for auditory cortex were different between preterm infants and term control infants, reflecting either delayed maturation or injury. No effect of room type was observed. For the preterm group, disturbed maturation of nonprimary (but not primary) auditory cortex was associated with poorer language performance at age two years.
- Published
- 2018
19. An interactive model of auditory-motor speech perception
- Author
-
Einat Liebenthal and Riikka Möttönen
- Subjects
Linguistics and Language, Speech perception, Cognitive Neuroscience, Experimental and Cognitive Psychology, Electroencephalography, Somatosensory system, Language and Linguistics, Speech and Hearing, Perception, Motor speech, Connectome, Humans, Auditory Cortex, Motor Cortex, Speech processing, Transcranial magnetic stimulation, Speech Perception, Psychomotor Performance, Cognitive psychology
- Abstract
Mounting evidence indicates a role in perceptual decoding of speech for the dorsal auditory stream connecting between temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast between three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 msec. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research.
- Published
- 2017
20. Editorial: Neural Mechanisms of Perceptual Categorization as Precursors to Speech Perception
- Author
-
Einat Liebenthal and Lynne E. Bernstein
- Subjects
Categorical perception, neuroimaging, Speech perception, General Neuroscience, neural mechanism, audiovisual processing, categorization, Editorial, Concept learning, phonemic perception, category learning, Neurocomputational speech processing, Perceptual categorization, auditory processing, Neuroscience, Cognitive psychology
- Published
- 2017
- Full Text
- View/download PDF
21. Neural Mechanisms of Perceptual Categorization as Precursors to Speech Perception
- Author
-
Einat Liebenthal and Lynne E. Bernstein
- Subjects
Categorical perception, Speech perception, Neuroimaging, Categorization, Perceptual learning, Concept learning, Neurocomputational speech processing, Perceptual categorization, Cognitive psychology
- Published
- 2017
- Full Text
- View/download PDF
22. Neural effects of cognitive control load on auditory selective attention
- Author
-
Colin Humphries, Anjali Desai, Jain Mangalathu, Merav Sabri, Jeffrey R. Binder, Einat Liebenthal, and Matthew D. Verber
- Subjects
Adult ,Male ,Auditory perception ,medicine.medical_specialty ,Cognitive Neuroscience ,Experimental and Cognitive Psychology ,Neuropsychological Tests ,Audiology ,Auditory cortex ,Multimodal Imaging ,behavioral disciplines and activities ,Spatial memory ,Article ,Behavioral Neuroscience ,Superior temporal gyrus ,Reaction Time ,medicine ,Humans ,Attention ,Prefrontal cortex ,Evoked Potentials ,medicine.diagnostic_test ,Working memory ,Brain ,Electroencephalography ,Signal Processing, Computer-Assisted ,Cognition ,Magnetic Resonance Imaging ,Memory, Short-Term ,Auditory Perception ,Female ,Psychology ,Functional magnetic resonance imaging ,Cognitive psychology - Abstract
Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potential (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130–210 msec, with higher load resulting in greater irrelevant-syllable-related activation in localizer-defined regions of auditory cortex. The interaction between memory load and the presence of irrelevant information revealed stronger activations, primarily in frontal and parietal areas, when irrelevant information was present under the higher memory load. Joint independent component analysis of the ERP and fMRI data revealed that the ERP component in the N1 time range is associated with activity in the superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention.
- Published
- 2014
- Full Text
- View/download PDF
23. Perceptual Demand Modulates Activation of Human Auditory Cortex in Response to Task-irrelevant Sounds
- Author
-
Colin Humphries, Anjali Desai, Jeffrey R. Binder, Merav Sabri, Matthew D. Verber, Jain Mangalathu, and Einat Liebenthal
- Subjects
Adult ,Male ,Auditory perception ,medicine.medical_specialty ,Cognitive Neuroscience ,media_common.quotation_subject ,Stimulus (physiology) ,Audiology ,Electroencephalography ,computer.software_genre ,Auditory cortex ,Article ,Functional Laterality ,Dichotic Listening Tests ,Young Adult ,InformationSystems_MODELSANDPRINCIPLES ,Perception ,Image Processing, Computer-Assisted ,Reaction Time ,medicine ,Humans ,Psychoacoustics ,Audio signal processing ,media_common ,Auditory Cortex ,Analysis of Variance ,Brain Mapping ,Communication ,medicine.diagnostic_test ,business.industry ,Dichotic listening ,Magnetic Resonance Imaging ,Oxygen ,Sound ,Acoustic Stimulation ,Auditory Perception ,Evoked Potentials, Auditory ,Female ,Psychology ,business ,computer - Abstract
In the visual modality, perceptual demand on a goal-directed task has been shown to modulate the extent to which irrelevant information can be disregarded at a sensory-perceptual stage of processing. In the auditory modality, the effect of perceptual demand on neural representations of task-irrelevant sounds is unclear. We compared simultaneous event-related potential (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across parametrically modulated perceptual task demands in a dichotic-listening paradigm. Participants performed a signal detection task in one ear (Attend ear) while ignoring task-irrelevant syllable sounds in the other ear (Ignore ear). Results revealed modulation of syllable processing by auditory perceptual demand in a region of interest in the middle left superior temporal gyrus and in negative ERP activity 130–230 ms post-stimulus onset. Increasing the perceptual demand in the Attend ear was associated with a reduced neural response, in both fMRI and ERP, to task-irrelevant sounds. These findings support a selection model whereby ongoing perceptual demands modulate task-irrelevant sound processing in auditory cortex.
- Published
- 2013
- Full Text
- View/download PDF
24. The Language, Tone and Prosody of Emotions: Neural Substrates and Dynamics of Spoken-Word Emotion Perception
- Author
-
Emily Stern, David Silbersweig, and Einat Liebenthal
- Subjects
Speech perception ,media_common.quotation_subject ,Word processing ,ERPs (event-related potentials) ,Review ,emotions ,speech perception ,050105 experimental psychology ,Lateralization of brain function ,03 medical and health sciences ,0302 clinical medicine ,Emotion perception ,Perception ,word processing ,0501 psychology and cognitive sciences ,Prosody ,semantics ,media_common ,General Neuroscience ,05 social sciences ,Subliminal stimuli ,fMRI ,amygdala ,Emotional prosody ,voice perception ,Psychology ,030217 neurology & neurosurgery ,Cognitive psychology ,Neuroscience - Abstract
Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala – a subcortical center for emotion perception – are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves over longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, appears more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role in prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.
- Published
- 2016
- Full Text
- View/download PDF
25. The relationship between maternal education and the neural substrates of phoneme perception in children: Interactions between socioeconomic status and proficiency level
- Author
-
Lisa L. Conant, Jeffrey R. Binder, Einat Liebenthal, and Anjali Desai
- Subjects
Male ,Linguistics and Language ,Speech perception ,Cognitive Neuroscience ,Experimental and Cognitive Psychology ,Affect (psychology) ,Vocabulary ,050105 experimental psychology ,Language and Linguistics ,Article ,Developmental psychology ,03 medical and health sciences ,Speech and Hearing ,Nonverbal communication ,0302 clinical medicine ,Borderline intellectual functioning ,Child Development ,Phonological awareness ,Phonetics ,Humans ,0501 psychology and cognitive sciences ,Child ,Language ,05 social sciences ,Neuropsychology ,Linguistics ,Awareness ,Child development ,Magnetic Resonance Imaging ,Sound ,Categorization ,Social Class ,Speech Perception ,Educational Status ,Female ,Psychology ,030217 neurology & neurosurgery ,Cognitive psychology - Abstract
Relationships between maternal education (ME) and both behavioral performance and brain activation during the discrimination of phonemic and nonphonemic sounds were examined using fMRI in children with different levels of phoneme categorization proficiency (CP). Significant relationships were found between ME and intellectual functioning and vocabulary, with a trend for phonological awareness. A significant interaction between CP and ME was seen for nonverbal reasoning abilities. In addition, fMRI analyses revealed a significant interaction between CP and ME for phonemic discrimination in left prefrontal cortex. Thus, ME was associated with differential patterns of both neuropsychological performance and brain activation contingent on the level of CP. These results highlight the importance of examining socioeconomic status (SES) effects at different proficiency levels. The pattern of results may suggest the presence of neurobiological differences in the children with low CP that affect the nature of relationships with ME.
- Published
- 2016
26. Within-subject joint independent component analysis of simultaneous fMRI/ERP in an auditory oddball paradigm
- Author
-
Einat Liebenthal, Scott A. Beardsley, and Jain Mangalathu-Arumana
- Subjects
Adult ,Male ,medicine.medical_specialty ,Adolescent ,genetic structures ,Cognitive Neuroscience ,Speech recognition ,Electroencephalography ,Audiology ,behavioral disciplines and activities ,Brain mapping ,Article ,Young Adult ,Event-related potential ,Image Interpretation, Computer-Assisted ,medicine ,Humans ,Oddball paradigm ,Brain Mapping ,Principal Component Analysis ,medicine.diagnostic_test ,Brain ,Magnetic resonance imaging ,Event-Related Potentials, P300 ,Magnetic Resonance Imaging ,Independent component analysis ,Acoustic Stimulation ,Neurology ,Principal component analysis ,Female ,Functional magnetic resonance imaging ,Psychology ,psychological phenomena and processes - Abstract
The integration of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) can contribute to characterizing neural networks with high temporal and spatial resolution. This research aimed to determine the sensitivity and limitations of applying joint independent component analysis (jICA) within subjects, for ERP and fMRI data collected simultaneously in a parametric auditory frequency oddball paradigm. In a group of 20 subjects, an increase in ERP peak amplitude ranging from 1 to 8 μV in the time window of the P300 (350–700 ms), and a correlated increase in fMRI signal in a network of regions including the right superior temporal and supramarginal gyri, was observed with the increase in deviant frequency difference. JICA of the same ERP and fMRI group data revealed activity in a similar network, albeit with stronger amplitude and larger extent. In addition, activity in the left pre- and postcentral gyri, likely associated with the right-hand somatomotor response, was observed only with the jICA approach. Within subjects, the jICA approach revealed significantly stronger and more extensive activity in the brain regions associated with the auditory P300 than did the P300 linear regression analysis. The results suggest that, by incorporating spatial and temporal information from both imaging modalities, jICA may be a more sensitive method for extracting common sources of activity between ERP and fMRI.
- Published
- 2012
- Full Text
- View/download PDF
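The joint ICA approach described in the abstract above can be illustrated with a minimal sketch: each modality is z-scored, the feature vectors are concatenated side by side, and the joint matrix is decomposed so that every component carries a linked ERP part and fMRI part sharing one loading per observation. The data shapes, component count, and random data below are hypothetical, not those of the study.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Hypothetical dimensions: 20 observations (subjects, or within-subject
# parametric conditions), 300 ERP time points, 500 fMRI voxels.
n_obs, n_erp, n_vox = 20, 300, 500
erp = rng.standard_normal((n_obs, n_erp))
fmri = rng.standard_normal((n_obs, n_vox))

def jica(erp, fmri, n_components=5, seed=0):
    """Joint ICA sketch: z-score each modality, concatenate, decompose."""
    z = lambda x: (x - x.mean(0)) / x.std(0)
    joint = np.hstack([z(erp), z(fmri)])            # (n_obs, n_erp + n_vox)
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    # Decompose joint.T so observations act as mixtures of joint sources:
    # joint = loadings @ components, one loading per observation.
    sources = ica.fit_transform(joint.T)            # (n_erp + n_vox, k)
    loadings = ica.mixing_                          # (n_obs, k)
    components = sources.T                          # (k, n_erp + n_vox)
    return loadings, components[:, :erp.shape[1]], components[:, erp.shape[1]:]

loadings, erp_parts, fmri_parts = jica(erp, fmri)
```

Components whose loadings covary with the experimental parameter (here, deviant frequency difference) would be the candidates of interest; real pipelines typically operate on preprocessed ERP waveforms and fMRI contrast maps rather than raw data.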
27. Specialization along the Left Superior Temporal Sulcus for Auditory Categorization
- Author
-
Brinda Ramachandran, Einat Liebenthal, Michael M. Ellingson, Jeffrey R. Binder, Anjali Desai, and Rutvik H. Desai
- Subjects
Auditory perception ,Adult ,Male ,medicine.medical_specialty ,Cognitive Neuroscience ,speech ,Audiology ,Auditory cortex ,050105 experimental psychology ,Temporal lobe ,electroencephalograph ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,Young Adult ,0302 clinical medicine ,Event-related potential ,medicine ,auditory cortex ,Humans ,0501 psychology and cognitive sciences ,Evoked Potentials ,Temporal cortex ,Cerebral Cortex ,Brain Mapping ,training ,medicine.diagnostic_test ,05 social sciences ,fMRI ,Superior temporal sulcus ,Articles ,Middle Aged ,Magnetic Resonance Imaging ,Categorization ,Acoustic Stimulation ,Auditory Perception ,Female ,Functional magnetic resonance imaging ,Psychology ,030217 neurology & neurosurgery ,Cognitive psychology - Abstract
The affinity and temporal course of functional fields in middle and posterior superior temporal cortex for the categorization of complex sounds were examined using functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs) recorded simultaneously. Data were compared before and after subjects were trained to categorize a continuum of unfamiliar nonphonemic auditory patterns with speech-like properties (NP) and a continuum of familiar phonemic patterns (P). fMRI activation for NP increased after training in left posterior superior temporal sulcus (pSTS). The ERP P2 response to NP also increased with training, and its scalp topography was consistent with left posterior superior temporal generators. In contrast, the left middle superior temporal sulcus (mSTS) showed fMRI activation only for P, and this response was not affected by training. The P2 response to P was also independent of training, and its estimated source was more anterior in left superior temporal cortex. Results are consistent with a role for left pSTS in short-term representation of relevant sound features that provide the basis for identifying newly acquired sound categories. Categorization of highly familiar phonemic patterns is mediated by long-term representations in left mSTS. Results provide new insight regarding the function of ventral and dorsal auditory streams.
- Published
- 2010
28. Left Posterior Temporal Regions are Sensitive to Auditory Categorization
- Author
-
Rutvik H. Desai, Eric J. Waldron, Einat Liebenthal, and Jeffrey R. Binder
- Subjects
Adult ,Male ,Auditory perception ,medicine.medical_specialty ,Sound Spectrography ,Speech perception ,Adolescent ,Cognitive Neuroscience ,media_common.quotation_subject ,Audiology ,Brain mapping ,Article ,Functional Laterality ,Temporal lobe ,Discrimination, Psychological ,Phonetics ,Perception ,Image Processing, Computer-Assisted ,medicine ,Humans ,media_common ,Brain Mapping ,medicine.diagnostic_test ,Magnetic Resonance Imaging ,Temporal Lobe ,Oxygen ,Logistic Models ,Acoustic Stimulation ,Categorization ,Auditory Perception ,Female ,Functional magnetic resonance imaging ,Psychology ,Cognitive psychology - Abstract
Recent studies suggest that the left superior temporal gyrus and sulcus (LSTG/S) play a role in speech perception, although the precise function of these areas remains unclear. Here, we test the hypothesis that regions in the LSTG/S play a role in the categorization of speech phonemes, irrespective of the acoustic properties of the sounds and prior experience of the listener with them. We examined changes in functional magnetic resonance imaging brain activation related to a perceptual shift from nonphonetic to phonetic analysis of sine-wave speech analogs. Subjects performed an identification task before scanning and a discrimination task during scanning with phonetic (P) and nonphonetic (N) sine-wave sounds, both before (Pre) and after (Post) being exposed to the phonetic properties of the P sounds. Behaviorally, experience with the P sounds induced categorical identification of these sounds. In the PostP > PreP and PostP > PostN contrasts, an area in the posterior LSTG/S was activated. For both P and N sounds, the activation in this region was correlated with the degree of categorical identification in individual subjects. The results suggest that these areas in the posterior LSTG/S are sensitive neither to the acoustic properties of speech nor merely to the presence of phonetic information, but rather to the listener's awareness of category representations for auditory inputs.
- Published
- 2008
- Full Text
- View/download PDF
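Categorical identification of the kind measured in the study above is commonly quantified by fitting a logistic function to identification proportions along the stimulus continuum, with the fitted slope indexing how categorical the labeling is. A minimal sketch with invented, illustrative data (not data from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 8-step sine-wave continuum and proportion of "category A"
# responses at each step (illustrative numbers only).
steps = np.arange(1, 9, dtype=float)
p_a = np.array([0.02, 0.05, 0.10, 0.30, 0.75, 0.92, 0.97, 0.99])

def logistic(x, x0, k):
    """Psychometric function: x0 = category boundary, k = slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Fit boundary and slope to the identification proportions.
(x0, k), _ = curve_fit(logistic, steps, p_a, p0=[4.5, 1.0])
```

A steeper fitted slope k after exposure to the phonetic properties of the sounds would indicate more categorical identification, the kind of per-subject measure that can then be correlated with activation.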
29. Time course of semantic processes during sentence comprehension: An fMRI study
- Author
-
Jeffrey R. Binder, Colin Humphries, Einat Liebenthal, and David A. Medler
- Subjects
Adult ,Male ,Cognitive Neuroscience ,Stimulus (physiology) ,computer.software_genre ,Brain mapping ,Article ,Temporal lobe ,Mental Processes ,Humans ,Semantic memory ,Language ,Brain Mapping ,Neural correlates of consciousness ,business.industry ,Brain ,Middle Aged ,Magnetic Resonance Imaging ,Semantics ,Oxygen ,Comprehension ,Pseudoword ,Reading ,Neurology ,Regression Analysis ,Female ,Artificial intelligence ,business ,Psychology ,computer ,Sentence ,Natural language processing ,Cognitive psychology - Abstract
The ability to create new meanings from combinations of words is one important function of the language system. We investigated the neural correlates of combinatorial semantic processing using fMRI. During scanning, participants performed a rating task on auditory word or pseudoword strings that differed in the presence of combinatorial and word-level semantic information. Stimuli included normal sentences composed of thematically related words that could be readily combined to produce a more complex meaning, semantically incongruent sentences in which content words were randomly replaced with other content words, pseudoword sentences, and versions of these three sentence types in which syntactic structure was removed by randomly re-ordering the words. Several regions showed greater BOLD signal for stimuli with words than for those with pseudowords, including the left angular gyrus, left superior temporal sulcus, and left inferior frontal gyrus, suggesting that these areas are involved in semantic access at the single-word level. In the angular and inferior frontal gyri, these differences emerged early in the course of the hemodynamic response. An effect of combinatorial semantic structure was observed in the left angular gyrus and left lateral temporal lobe, which showed greater activation for normal compared to semantically incongruent sentences. These effects appeared later in the time course of the hemodynamic response, beginning after the entire stimulus had been presented. The data indicate a complex spatiotemporal pattern of activity associated with computation of word- and sentence-level semantic information, and suggest a particular role for the left angular gyrus in processing overall sentence meaning.
- Published
- 2007
- Full Text
- View/download PDF
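Time-course effects like those in the study above are typically assessed by regressing voxel time series on stimulus regressors built from a hemodynamic response model. A minimal sketch, assuming a canonical double-gamma HRF and an invented design (the TR, onsets, and amplitude are hypothetical, not the study's):

```python
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 120
t = np.arange(0, 30, TR)

def double_gamma_hrf(t, peak=6.0, under=16.0, ratio=6.0):
    """Canonical double-gamma HRF: positive peak minus a late undershoot."""
    return gamma.pdf(t, peak) - gamma.pdf(t, under) / ratio

# A stimulus every 40 s, convolved with the HRF to form a BOLD regressor.
onsets = np.zeros(n_scans)
onsets[::20] = 1.0
regressor = np.convolve(onsets, double_gamma_hrf(t))[:n_scans]

# Simulated voxel: true amplitude 2.0 plus noise; least squares recovers it.
rng = np.random.default_rng(1)
voxel = 2.0 * regressor + 0.05 * rng.standard_normal(n_scans)
X = np.column_stack([regressor, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
```

Shifting the regressor earlier or later in time (or adding temporal-derivative basis functions) probes early versus late components of the hemodynamic response, the kind of distinction drawn for the angular gyrus effects above.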
30. Syntactic and Semantic Modulation of Neural Activity during Auditory Sentence Comprehension
- Author
-
Einat Liebenthal, Jeffrey R. Binder, Colin Humphries, and David A. Medler
- Subjects
Adult ,Male ,Phrase ,Cognitive Neuroscience ,Functional Laterality ,Article ,Angular gyrus ,Image Processing, Computer-Assisted ,Humans ,Semantic memory ,Brain Mapping ,Psycholinguistics ,Brain ,Superior temporal sulcus ,Middle Aged ,Magnetic Resonance Imaging ,Syntax ,Linguistics ,Semantics ,Pseudoword ,Female ,Comprehension ,Psychology ,Sentence ,Word order ,Cognitive psychology - Abstract
In previous functional neuroimaging studies, left anterior temporal and temporal-parietal areas responded more strongly to sentences than to randomly ordered lists of words. The smaller response for word lists could be explained by either (1) less activation of syntactic processes due to the absence of syntactic structure in the random word lists or (2) less activation of semantic processes resulting from failure to combine the content words into a global meaning. To test these two explanations, we conducted a functional magnetic resonance imaging study in which word order and combinatorial word meaning were independently manipulated during auditory comprehension. Subjects heard six different stimuli: normal sentences, semantically incongruent sentences in which content words were randomly replaced with other content words, pseudoword sentences, and versions of these three sentence types in which word order was randomized to remove syntactic structure. Effects of syntactic structure (greater activation to sentences than to word lists) were observed in the left anterior superior temporal sulcus and left angular gyrus. Semantic effects (greater activation to semantically congruent stimuli than either incongruent or pseudoword stimuli) were seen in widespread, bilateral temporal lobe areas and the angular gyrus. Of the two regions that responded to syntactic structure, the angular gyrus showed a greater response to semantic structure, suggesting that reduced activation for word lists in this area is related to a disruption in semantic processing. The anterior temporal lobe, on the other hand, was relatively insensitive to manipulations of semantic structure, suggesting that syntactic information plays a greater role in driving activation in this area.
- Published
- 2006
- Full Text
- View/download PDF
31. Neural pathways for visual speech perception
- Author
-
Einat Liebenthal and Lynne E. Bernstein
- Subjects
Speech production ,Speech perception ,genetic structures ,Speech recognition ,Lipreading ,functional organization ,Review Article ,050105 experimental psychology ,Speech shadowing ,lcsh:RC321-571 ,Visual processing ,03 medical and health sciences ,0302 clinical medicine ,otorhinolaryngologic diseases ,Psychology ,0501 psychology and cognitive sciences ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Cued speech ,Motor theory of speech perception ,Visual Processing ,General Neuroscience ,05 social sciences ,Speech processing ,eye diseases ,Audiovisual processing ,Speech Perception ,Neurocomputational speech processing ,030217 neurology & neurosurgery - Abstract
This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.
- Published
- 2014
- Full Text
- View/download PDF
32. Some neurophysiological constraints on models of word naming
- Author
-
Lisa L. Conant, Rutvik H. Desai, David A. Medler, Jeffrey R. Binder, and Einat Liebenthal
- Subjects
Adult ,Male ,Adolescent ,Cognitive Neuroscience ,media_common.quotation_subject ,Decision Making ,Models, Neurological ,Fixation, Ocular ,Pronunciation ,Psycholinguistics ,Angular gyrus ,Reading (process) ,Image Processing, Computer-Assisted ,Reaction Time ,Humans ,Speech ,Semantic memory ,Attention ,Prefrontal cortex ,media_common ,Brain Mapping ,Phonology ,Middle Aged ,Magnetic Resonance Imaging ,Memory, Short-Term ,Reading ,Neurology ,Female ,Nerve Net ,Psychology ,Psychomotor Performance ,Word (group theory) ,Cognitive psychology - Abstract
The pronunciation of irregular words in deep orthographies like English cannot be specified by simple rules. On the other hand, the fact that novel letter strings can be pronounced seems to imply the existence of such rules. These facts motivate dual-route models of word naming, which postulate separate lexical (whole-word) and non-lexical (rule-based) mechanisms for accessing phonology. We used fMRI during oral naming of irregular words, regular words, and nonwords, to test this theory against a competing single-mechanism account known as the triangle model, which proposes that all words are handled by a single system containing distributed orthographic, phonological, and semantic codes rather than word codes. Two versions of the dual-route model were distinguished: an 'exclusive' version in which activation of one processing route predominates over the other, and a 'parallel' version in which both routes are equally activated by all words. The fMRI results provide no support for the exclusive dual-route model. Several frontal, insular, anterior cingulate, and parietal regions showed responses that increased with naming difficulty (nonword > irregular word > regular word) and were correlated with response time, but there was no activation consistent with the predicted response of a non-lexical, rule-based mechanism (i.e., nonword > regular word > irregular word). Several regions, including the angular gyrus and dorsal prefrontal cortex bilaterally, left ventromedial temporal lobe, and posterior cingulate gyrus, were activated more by words than nonwords, but these 'lexical route' regions were equally active for irregular and regular words. The results are compatible with both the parallel dual-route model and the triangle model. 'Lexical route' regions also showed effects of word imageability. 
Together with previous imaging studies using semantic task contrasts, the imageability effects are consistent with semantic processing in these brain regions, suggesting that word naming is partly semantically-mediated.
- Published
- 2005
- Full Text
- View/download PDF
33. Neural Substrates of Phonemic Perception
- Author
-
Jeffrey R. Binder, David A. Medler, Einat Liebenthal, E. T. Possing, and Stephanie M. Spitzer
- Subjects
Adult ,Male ,medicine.medical_specialty ,Speech perception ,Cognitive Neuroscience ,Middle temporal gyrus ,Audiology ,Auditory cortex ,Functional Laterality ,Lateralization of brain function ,Temporal lobe ,Cellular and Molecular Neuroscience ,Superior temporal gyrus ,Discrimination, Psychological ,Image Processing, Computer-Assisted ,medicine ,Humans ,Speech ,Brain Mapping ,Categorical perception ,Superior temporal sulcus ,Middle Aged ,Magnetic Resonance Imaging ,Temporal Lobe ,Oxygen ,Acoustic Stimulation ,Speech Perception ,Female ,Psychology - Abstract
The temporal lobe in the left hemisphere has long been implicated in the perception of speech sounds. Little is known, however, regarding the specific function of different temporal regions in the analysis of the speech signal. Here we show that an area extending along the left middle and anterior superior temporal sulcus (STS) is more responsive to familiar consonant-vowel syllables during an auditory discrimination task than to comparably complex auditory patterns that cannot be associated with learned phonemic categories. In contrast, areas in the dorsal superior temporal gyrus bilaterally, closer to primary auditory cortex, are activated to the same extent by the phonemic and nonphonemic sounds. Thus, the left middle/anterior STS appears to play a role in phonemic perception. It may represent an intermediate stage of processing in a functional pathway linking areas in the bilateral dorsal superior temporal gyrus, presumably involved in the analysis of physical features of speech and other complex non-speech sounds, to areas in the left anterior STS and middle temporal gyrus that are engaged in higher-level linguistic processes.
- Published
- 2005
- Full Text
- View/download PDF
34. Simultaneous ERP and fMRI of the auditory cortex in a passive oddball paradigm
- Author
-
Marianna V. Spanaki, Michael L. Ellingson, Jeffrey R. Binder, Thomas E. Prieto, Kristina M. Ropella, and Einat Liebenthal
- Subjects
Adult ,Male ,medicine.medical_specialty ,Cognitive Neuroscience ,Speech recognition ,Mismatch negativity ,Contingent Negative Variation ,Electroencephalography ,Audiology ,Auditory cortex ,Temporal lobe ,Pitch Discrimination ,Oxygen Consumption ,Image Processing, Computer-Assisted ,medicine ,Humans ,Attention ,Oddball paradigm ,Auditory Cortex ,Temporal cortex ,Brain Mapping ,medicine.diagnostic_test ,Middle Aged ,Magnetic Resonance Imaging ,Temporal Lobe ,Neurology ,Evoked Potentials, Auditory ,Female ,Nerve Net ,Arousal ,Functional magnetic resonance imaging ,Psychology ,Brodmann area - Abstract
Infrequent occurrences of a deviant sound within a sequence of repetitive standard sounds elicit the automatic mismatch negativity (MMN) event-related potential (ERP). The main MMN generators are located in the superior temporal cortex, but their number, precise location, and temporal sequence of activation remain unclear. In this study, ERP and functional magnetic resonance imaging (fMRI) data were obtained simultaneously during a passive frequency oddball paradigm. There were three conditions: a STANDARD, a SMALL deviant, and a LARGE deviant. A clustered image acquisition technique was applied to prevent contamination of the fMRI data by the acoustic noise of the scanner and to limit contamination of the electroencephalogram (EEG) by the gradient-switching artifact. The ERP data were used to identify areas in which the blood oxygenation level dependent (BOLD) signal varied with the magnitude of the negativity in each condition. A significant ERP MMN was obtained, with larger peaks to LARGE deviants and with a frontocentral scalp distribution, consistent with the MMN reported outside the magnetic field. This result validates the experimental procedures for simultaneous ERP/fMRI of the auditory cortex. Main foci of increased BOLD signal were observed in the right superior temporal gyrus [STG; Brodmann area (BA) 22] and right superior temporal plane (STP; BA 41 and 42). The imaging results provide new information supporting the idea that generators in the right lateral aspect of the STG are implicated in processes of frequency deviant detection, in addition to generators in the right and left STP.
- Published
- 2003
- Full Text
- View/download PDF
35. The functional organization of the left STS: a large scale meta-analysis of PET and fMRI studies of healthy adults
- Author
-
Colin Humphries, Anjali Desai, Einat Liebenthal, Rutvik H. Desai, and Merav Sabri
- Subjects
Speech perception ,superior temporal sulcus – STS ,functional organization ,semantic processing ,speech perception ,Lateralization of brain function ,lcsh:RC321-571 ,Angular gyrus ,Supramarginal gyrus ,medicine ,Semantic memory ,Psychology ,Original Research Article ,positron emission tomography (PET) ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,medicine.diagnostic_test ,General Neuroscience ,functional magnetic resonance imaging (fMRI) ,Human brain ,Superior temporal sulcus ,superior temporal sulcus (STS) ,meta-analysis ,medicine.anatomical_structure ,Functional magnetic resonance imaging ,Neuroscience ,left hemisphere - Abstract
The superior temporal sulcus (STS) in the left hemisphere is functionally diverse, with sub-areas implicated in both linguistic and non-linguistic functions. However, the number and boundaries of distinct functional regions remain to be determined. Here, we present new evidence, from meta-analysis of a large number of positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) studies, of different functional specificity in the left STS supporting a division of its middle to terminal extent into at least three functional areas. The middle portion of the left STS stem (fmSTS) is highly specialized for speech perception and the processing of language material. The posterior portion of the left STS stem (fpSTS) is highly versatile and involved in multiple functions supporting semantic memory and associative thinking. The fpSTS responds to both language and non-language stimuli but the sensitivity to non-language material is greater. The horizontal portion of the left STS stem and terminal ascending branches (ftSTS) display intermediate functional specificity, with the anterior ascending branch adjoining the supramarginal gyrus (fatSTS) supporting executive functions and motor planning and showing greater sensitivity to language material, and the horizontal stem and posterior ascending branch adjoining the angular gyrus (fptSTS) supporting primarily semantic processing and displaying greater sensitivity to non-language material. We suggest that the high functional specificity of the left fmSTS for speech is an important means by which the human brain achieves exquisite affinity and efficiency for native speech perception. In contrast, the extreme multi-functionality of the left fpSTS reflects the role of this area as a cortical hub for semantic processing and the extraction of meaning from multiple sources of information. Finally, in the left ftSTS, further functional differentiation between the dorsal and ventral aspect is warranted.
- Published
- 2014
36. Active and Passive fMRI for Presurgical Mapping of Motor and Language Cortex
- Author
-
Einat Liebenthal, Bradley G. Goodyear, and Victoria Mosher
- Subjects
medicine.anatomical_structure ,Cortex (anatomy) ,medicine ,Psychology ,Neuroscience
- Published
- 2014
- Full Text
- View/download PDF
37. Human auditory cortex electrophysiological correlates of the precedence effect: Binaural echo lateralization suppression
- Author
-
Einat Liebenthal and Hillel Pratt
- Subjects
Physics ,medicine.medical_specialty ,Acoustics and Ultrasonics ,Binaural processing ,Acoustics ,Echo (computing) ,Monaural ,Audiology ,Auditory cortex ,Lateralization of brain function ,Electrophysiology ,Arts and Humanities (miscellaneous) ,Precedence effect ,medicine ,Binaural recording - Abstract
Echoes lagging shortly after a sound and originating from a different location blend with the sound source perceptually. The location of the fused “source+echoes” is dominated by the source, suggesting suppression of echo localization. This effect is diminished monaurally, implying involvement of binaural processing. The neural substrates underlying echo localization suppression remain unclear. Here, electrophysiological indications of primary auditory cortex involvement in binaural suppression of echo lateralization are presented. Position judgments and auditory evoked potentials (AEPs) were recorded in response to single clicks and to pairs of binaural and monaural clicks. The pairs simulated a source and its echo. The binaural source+echo position judgment was dominated by the source at small echo lags and shifted toward the echo as the lag increased. The AEPs were examined for binaural processes specific to a “real” echo, as opposed to an identical single sound (a “virtual” echo). A reduction in binaural peak amplitude a...
- Published
- 1999
- Full Text
- View/download PDF
38. Neural dynamics of phonological processing in the dorsal auditory stream
- Author
-
Einat Liebenthal, Jain Mangalathu-Arumana, Anjali Desai, Scott A. Beardsley, and Merav Sabri
- Subjects
Auditory perception ,Adult ,Male ,Auditory Pathways ,media_common.quotation_subject ,Brain mapping ,Lateralization of brain function ,Functional Laterality ,Phonetics ,Perception ,Biological neural network ,Humans ,Evoked Potentials ,media_common ,Communication ,Brain Mapping ,business.industry ,General Neuroscience ,Functional specialization ,Brain ,Inferior parietal lobule ,Articles ,Central sulcus ,Auditory Perception ,Female ,Nerve Net ,business ,Psychology ,Neuroscience - Abstract
Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80–100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.
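The joint independent component analysis (jICA) fusion step can be sketched in a few lines. The toy data sizes, the synthetic Laplacian sources, and the use of scikit-learn's FastICA are illustrative assumptions, not the study's actual pipeline; the structural point is that both modalities share one per-subject mixing matrix, so independence is estimated over the concatenated feature dimension.

```python
import numpy as np
from sklearn.decomposition import FastICA  # assumed available; any ICA solver would do

rng = np.random.default_rng(0)
n_sub, n_erp, n_vox, n_comp = 20, 150, 300, 5   # toy sizes, not the study's

# Synthetic joint sources: each row pairs an ERP time course (first n_erp
# columns) with an fMRI spatial map (remaining n_vox columns).
S_true = rng.laplace(size=(n_comp, n_erp + n_vox))
A_true = rng.standard_normal((n_sub, n_comp))    # per-subject loadings
joint = A_true @ S_true + 0.1 * rng.standard_normal((n_sub, n_erp + n_vox))

# jICA: concatenate modalities along the feature axis and unmix, treating
# features (time points + voxels) as the sample dimension.
ica = FastICA(n_components=n_comp, random_state=0, max_iter=2000)
sources = ica.fit_transform(joint.T)   # (n_erp + n_vox, n_comp) joint maps
loadings = ica.mixing_                 # (n_sub, n_comp) subject loadings

erp_maps = sources[:n_erp].T           # (n_comp, n_erp) linked ERP time courses
fmri_maps = sources[n_erp:].T          # (n_comp, n_vox) linked fMRI maps
```

Because each component couples an ERP time course and an fMRI map through a single column of `loadings`, millisecond-scale dynamics can be attributed to specific spatial foci, which is what gives the fused analysis its high spatiotemporal resolution.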
- Published
- 2013
39. Mapping phonemic processing zones along human perisylvian cortex: an electro-corticographic investigation
- Author
-
John J. Foxe, Manuel R. Mercier, Sophie Molholm, Einat Liebenthal, Theodore H. Schwartz, Walter Ritter, and Pierfilippo De Sanctis
- Subjects
Male ,Auditory perception ,Auditory Pathways ,Histology ,Adolescent ,Mismatch negativity ,Auditory cortex ,Brain mapping ,Functional Laterality ,Article ,Temporal lobe ,Young Adult ,Superior temporal gyrus ,medicine ,Humans ,Speech ,Auditory system ,Language ,Auditory Cortex ,Brain Mapping ,General Neuroscience ,Electroencephalography ,Speech processing ,Temporal Lobe ,medicine.anatomical_structure ,Acoustic Stimulation ,Auditory Perception ,Female ,Anatomy ,Psychology ,Neuroscience ,Cognitive psychology - Abstract
The auditory system is organized such that progressively more complex features are represented across successive cortical hierarchical stages. Just when and where the processing of phonemes, fundamental elements of the speech signal, is achieved in this hierarchy remains a matter of vigorous debate. Non-invasive measures of phonemic representation have been somewhat equivocal. While some studies point to a primary role for middle/anterior regions of the superior temporal gyrus (STG), others implicate the posterior STG. Differences in stimulation, task and inter-individual anatomical/functional variability may account for these discrepant findings. Here, we sought to clarify this issue by mapping phonemic representation across left perisylvian cortex, taking advantage of the excellent sampling density afforded by intracranial recordings in humans. We asked whether one or both major divisions of the STG were sensitive to phonemic transitions. The high signal-to-noise characteristics of direct intracranial recordings allowed for analysis at the individual participant level, circumventing issues of inter-individual anatomic and functional variability that may have obscured previous findings at the group level of analysis. The mismatch negativity (MMN), an electrophysiological response elicited by changes in repetitive streams of stimulation, served as our primary dependent measure. Oddball configurations of pairs of phonemes, spectro-temporally matched non-phonemes, and simple tones were presented. The loci of the MMN clearly differed as a function of stimulus type. Phoneme representation was most robust over middle/anterior STG/STS, but was also observed over posterior STG/SMG. These data point to multiple phonemic processing zones along perisylvian cortex, both anterior and posterior to primary auditory cortex. This finding is considered within the context of a dual-stream model of auditory processing in which functionally distinct ventral and dorsal auditory processing pathways may be engaged by speech stimuli.
- Published
- 2013
- Full Text
- View/download PDF
40. Introduction to Brain Imaging
- Author
-
Einat Liebenthal
- Subjects
Calcarine sulcus ,medicine.anatomical_structure ,Visual cortex ,Gyrus ,business.industry ,Functional neuroimaging ,Cortex (anatomy) ,medicine ,Human brain ,business ,Neuroscience ,Brain mapping ,Neuroanatomy - Abstract
Anatomical landmarks are useful in describing the location of different functional brain regions, for example, the primary sensorimotor cortex in the pre- and post-central sulci, the primary auditory cortex on Heschl’s gyrus, and the primary visual cortex in the calcarine sulcus [1]. However, early studies examining the effects of different brain lesions on function, and the advent of neuroimaging, have shown that there is tremendous interindividual variability in both the structure and functional organization of the human brain. Like fingerprints, each brain has a unique configuration of gyri and sulci (crests and troughs, respectively, in the surface of the brain) [2, 3]. In addition, brain function may not be specifically localized with respect to sulcal neuroanatomy, prompting the conclusion that sulci are not generally valid landmarks of the microstructural organization of the cortex [4]. In patients with brain pathology, the use of anatomical landmarks can be further compromised by edema or mass effects that obliterate the structure of gyri and sulci and can induce plasticity in functional organization. These findings highlight the important role of personalized structural and functional neuroimaging, especially for clinical applications such as presurgical brain mapping. In this chapter, the chief neuroimaging methods relevant to the diagnosis and management of patients with brain tumor or epilepsy are reviewed.
- Published
- 2011
- Full Text
- View/download PDF
41. Corrigendum to 'FMRI of phonemic perception and its relationship to reading development in elementary- to middle-school-age children' [Neuroimage 89 (2014) 192–202]
- Author
-
Anjali Desai, Einat Liebenthal, Jeffrey R. Binder, and Lisa L. Conant
- Subjects
School age child ,Neurology ,Cognitive Neuroscience ,Reading (process) ,media_common.quotation_subject ,Perception ,Psychology ,Cognitive psychology ,Developmental psychology ,media_common - Published
- 2014
- Full Text
- View/download PDF
42. Tonotopic organization of human auditory cortex
- Author
-
Jeffrey R. Binder, Einat Liebenthal, and Colin Humphries
- Subjects
Auditory perception ,Adult ,Male ,Cognitive Neuroscience ,Auditory cortex ,Functional Laterality ,Article ,Superior temporal gyrus ,Young Adult ,Gyrus ,medicine ,Humans ,Auditory Cortex ,Stochastic Processes ,medicine.diagnostic_test ,Magnetic resonance imaging ,Signal Processing, Computer-Assisted ,Magnetic Resonance Imaging ,Nonhuman primate ,medicine.anatomical_structure ,Neurology ,Acoustic Stimulation ,Auditory Perception ,Female ,Tonotopy ,Functional magnetic resonance imaging ,Psychology ,Neuroscience - Abstract
The organization of tonotopic fields in human auditory cortex was investigated using functional magnetic resonance imaging. Subjects were presented with stochastically alternating multi-tone sequences in six different frequency bands, centered at 200, 400, 800, 1600, 3200, and 6400 Hz. Two mirror-symmetric frequency gradients were found extending along an anterior-posterior axis from a zone on the lateral aspect of Heschl's gyrus (HG), which responds preferentially to lower frequencies, toward zones posterior and anterior to HG that are sensitive to higher frequencies. The orientation of these two principal gradients is thus roughly perpendicular to HG, rather than parallel as previously assumed. A third, smaller gradient was observed in the lateral posterior aspect of the superior temporal gyrus. The results suggest close homologies between the tonotopic organization of human and nonhuman primate auditory cortex.
- Published
- 2009
43. Attentional Modulation in the Detection of Irrelevant Deviance: A Simultaneous ERP/fMRI Study
- Author
-
Merav Sabri, Jeffrey R. Binder, Einat Liebenthal, Eric J. Waldron, and David A. Medler
- Subjects
Auditory perception ,Adult ,Male ,medicine.medical_specialty ,Signal Detection, Psychological ,genetic structures ,Cognitive Neuroscience ,Mismatch negativity ,Contingent Negative Variation ,Audiology ,Electroencephalography ,Brain mapping ,behavioral disciplines and activities ,Article ,Developmental psychology ,P3a ,medicine ,Image Processing, Computer-Assisted ,Reaction Time ,Humans ,Attention ,Analysis of Variance ,Brain Mapping ,medicine.diagnostic_test ,Brain ,Superior temporal sulcus ,Magnetic Resonance Imaging ,Contingent negative variation ,Oxygen ,Acoustic Stimulation ,Auditory Perception ,Evoked Potentials, Auditory ,Female ,Functional magnetic resonance imaging ,Psychology ,psychological phenomena and processes - Abstract
Little is known about the neural mechanisms that control attentional modulation of deviance detection in the auditory modality. In this study, we manipulated the difficulty of a primary task to test the relation between task difficulty and the detection of infrequent, task-irrelevant deviant (D) tones (1300 Hz) presented among repetitive standard (S) tones (1000 Hz). Simultaneous functional magnetic resonance imaging (fMRI)/event-related potentials (ERPs) were recorded from 21 subjects performing a two-alternative forced-choice duration discrimination task (short and long tones of equal probability). The duration of the short tone was always 50 msec. The duration of the long tone was 100 msec in the easy task and 60 msec in the difficult task. As expected, response accuracy decreased and response time (RT) increased in the difficult compared with the easy task. Performance was also poorer for D than for S tones, indicating distraction by task-irrelevant frequency information on trials involving D tones. In the difficult task, an amplitude increase was observed in the difference waves for N1 and P3a, ERP components associated with increased attention to deviant sounds. The mismatch negativity (MMN) response, associated with passive deviant detection, was larger in the easy task, demonstrating the susceptibility of this component to attentional manipulations. The fMRI contrast D > S in the difficult task revealed activation on the right superior temporal gyrus (STG) extending ventrally into the superior temporal sulcus, suggesting this region's involvement in involuntary attention shifting toward unattended, infrequent sounds. Conversely, passive deviance detection, as reflected by the MMN, was associated with more dorsal activation on the STG. These results are consistent with the view that the dorsal STG region is responsive to mismatches between the memory trace of the standard and the incoming deviant sound, whereas the ventral STG region is activated by involuntary shifts of attention to task-irrelevant auditory features.
- Published
- 2006
44. Tuning of the human left fusiform gyrus to sublexical orthographic structure
- Author
-
Einat Liebenthal, Jeffrey R. Binder, Lori Buchanan, Chris Westbury, and David A. Medler
- Subjects
Adult ,Male ,genetic structures ,Cognitive Neuroscience ,Brain mapping ,Gyrus Cinguli ,Functional Laterality ,Article ,Reference Values ,medicine ,Image Processing, Computer-Assisted ,Humans ,Visual word form area ,Brain Mapping ,Fusiform gyrus ,medicine.diagnostic_test ,Word superiority effect ,English orthography ,Neuropsychology ,Fusiform face area ,Middle Aged ,Magnetic Resonance Imaging ,Oxygen ,Neurology ,Cerebrovascular Circulation ,Educational Status ,Female ,Psychology ,Functional magnetic resonance imaging ,Cognitive psychology - Abstract
Neuropsychological and neurophysiological evidence points to a role for the left fusiform gyrus in visual word recognition, but the specific nature of this role remains a topic of debate. The aim of this study was to measure the sensitivity of this region to sublexical orthographic structure. We measured blood oxygenation level-dependent (BOLD) signal changes in the brain with functional magnetic resonance imaging while fluent readers of English viewed meaningless letter strings. The stimuli varied systematically in their approximation to English orthography, as measured by the probability of occurrence of letters and sequential letter pairs (bigrams) comprising the string. A whole-brain analysis showed a single region in the lateral left fusiform gyrus where BOLD signal increased with letter sequence probability; no other brain region showed this response pattern. The results suggest tuning of this cortical area to letter probabilities as a result of perceptual experience and provide a possible neural correlate for the ‘word superiority effect’ observed in letter perception research.
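The stimulus manipulation (grading letter strings by the probability of their letters and bigrams) can be illustrated with a toy scorer. The mini corpus and the equal weighting of letter and bigram probabilities are illustrative assumptions, not the study's actual norms.

```python
from collections import Counter

# Toy corpus standing in for English frequency norms (illustrative only).
corpus = ["the", "there", "then", "than", "that", "this", "string", "letter",
          "order", "other", "form", "from", "word", "work"]

letters = Counter(c for w in corpus for c in w)
bigrams = Counter(w[i:i + 2] for w in corpus for i in range(len(w) - 1))
n_letters, n_bigrams = sum(letters.values()), sum(bigrams.values())

def orthographic_score(s):
    """Mean per-unit probability of a string's letters and bigrams.

    Higher scores mean the string better approximates the corpus's
    orthography; units absent from the corpus contribute zero.
    """
    lp = sum(letters[c] / n_letters for c in s) / len(s)
    bp = (sum(bigrams[s[i:i + 2]] / n_bigrams for i in range(len(s) - 1))
          / max(len(s) - 1, 1))
    return (lp + bp) / 2
```

With norms like these, meaningless strings can be constructed along a continuum from very word-like (e.g. high-probability bigrams such as "th" and "er") down to strings with no attested letter pairs at all.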
- Published
- 2006
45. Volumetric vs. surface-based alignment for localization of auditory cortex activation
- Author
-
Eric J. Waldron, Rutvik H. Desai, E. T. Possing, Jeffrey R. Binder, and Einat Liebenthal
- Subjects
Auditory perception ,Adult ,Male ,Cognitive Neuroscience ,Models, Neurological ,Auditory cortex ,Lateralization of brain function ,Superior temporal gyrus ,Gyrus ,Functional neuroimaging ,Image Interpretation, Computer-Assisted ,medicine ,Humans ,Computer vision ,Auditory Cortex ,Brain Mapping ,business.industry ,Superior temporal sulcus ,Human brain ,Middle Aged ,Magnetic Resonance Imaging ,medicine.anatomical_structure ,Neurology ,Acoustic Stimulation ,Auditory Perception ,Female ,Artificial intelligence ,Psychology ,business ,Psychomotor Performance - Abstract
The high degree of intersubject structural variability in the human brain is an obstacle in combining data across subjects in functional neuroimaging experiments. A common method for aligning individual data is normalization into standard 3D stereotaxic space. Since the inherent geometry of the cortex is that of a 2D sheet, higher precision can potentially be achieved if the intersubject alignment is based on landmarks in this 2D space. To examine the potential advantage of surface-based alignment for localization of auditory cortex activation, and to obtain high-resolution maps of areas activated by speech sounds, fMRI data were analyzed from the left hemisphere of subjects tested with phoneme and tone discrimination tasks. We compared Talairach stereotaxic normalization with two surface-based methods: Landmark Based Warping, in which landmarks in the auditory cortex were chosen manually, and Automated Spherical Warping, in which hemispheres were aligned automatically based on spherical representations of individual and average brains. Examination of group maps generated with these alignment methods revealed superiority of the surface-based alignment in providing precise localization of functional foci and in avoiding mis-registration due to intersubject anatomical variability. Human left hemisphere cortical areas engaged in complex auditory perception appear to lie on the superior temporal gyrus, the dorsal bank of the superior temporal sulcus, and the lateral third of Heschl's gyrus.
- Published
- 2004
46. Ballistocardiogram artifact reduction in the simultaneous acquisition of auditory ERPS and fMRI
- Author
-
Thomas E. Prieto, Kristina M. Ropella, Einat Liebenthal, Marianna V. Spanaki, Jeffrey R. Binder, and Michael L. Ellingson
- Subjects
Artifact (error) ,genetic structures ,medicine.diagnostic_test ,Computer science ,Cognitive Neuroscience ,Speech recognition ,Electroencephalography ,Signal Processing, Computer-Assisted ,EEG-fMRI ,Signal ,Magnetic Resonance Imaging ,Ballistocardiography ,Electrocardiography ,Neurology ,Event-related potential ,medicine ,Evoked Potentials, Auditory ,Image Processing, Computer-Assisted ,Humans ,Signal averaging ,Functional magnetic resonance imaging ,Artifacts ,Electrodes ,Algorithms - Abstract
Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) are now being combined to analyze brain function. Confounding the EEG signal acquired in the MR environment is a ballistocardiogram artifact (BA), which is predominantly caused by cardiac-related body movement. The objective of this study was to develop and evaluate a method for reducing these MR-induced artifacts to retrieve small auditory event-related potentials (ERPs) from EEG recorded during fMRI. An algorithm for BA reduction was developed that relies on timing information obtained from simultaneous electrocardiogram (ECG) recordings and subsequent creation of an adaptive BA template. The BA template is formed by median-filtering 10 consecutive BA events in the EEG signal. The continuously updated template is then subtracted from each BA in the EEG. The auditory ERPs are obtained through signal averaging of the remaining EEG signal. Experimental and simulated ERP data were used to assess the effectiveness of the BA reduction. Simulation showed that the algorithm reduced the BA without significantly altering the morphology of a signal periodically inserted in the EEG. Auditory ERP data, obtained in a 1.5-T scanner during a passive auditory oddball paradigm and processed with the BA reduction algorithm, were comparable to data recorded in a mock scanner outside the magnetic field with the same experimental paradigm. It is concluded that, with adequate reduction of the BA, relatively small auditory ERPs can be acquired in the MR environment.
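The adaptive-template algorithm described in the abstract (a running median of the previous 10 artifact epochs, subtracted from each new artifact) can be sketched as follows. Epoch alignment to the ECG R peaks and the edge handling are simplified assumptions here.

```python
import numpy as np

def reduce_ba(eeg, r_peaks, width, n_template=10):
    """Subtract an adaptive ballistocardiogram (BA) template from EEG.

    eeg      : 1-D EEG signal recorded inside the scanner
    r_peaks  : sample indices of ECG R waves marking each BA event
    width    : number of samples per BA epoch
    The template for each artifact is the pointwise median of the
    previous `n_template` epochs; the first `n_template` epochs only
    seed the template and are left uncorrected (a simplification).
    """
    clean = eeg.copy()
    history = []
    for p in r_peaks:
        if p + width > len(eeg):
            break
        epoch = eeg[p:p + width]
        if len(history) == n_template:
            template = np.median(history, axis=0)  # continuously updated
            clean[p:p + width] = epoch - template
        history.append(epoch)
        if len(history) > n_template:
            history.pop(0)                          # keep last n_template epochs
    return clean

# Toy demonstration: a stereotyped artifact riding on low-amplitude EEG.
rng = np.random.default_rng(1)
artifact = np.sin(np.linspace(0, np.pi, 50))        # stereotyped BA shape
eeg = rng.standard_normal(5000) * 0.05
peaks = list(range(0, 4950, 100))
for p in peaks:
    eeg[p:p + 50] += artifact
cleaned = reduce_ba(eeg, peaks, 50)
```

Because the template is a median rather than a mean, occasional atypical epochs (movement, a missed R peak) do not corrupt it, which is what makes the subtraction robust enough to recover small ERPs by subsequent signal averaging.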
- Published
- 2003
47. Reduction of ballistocardiogram artifact in the simultaneous acquisition of auditory event-related potentials and functional magnetic resonance images
- Author
-
Thomas E. Prieto, Michael L. Ellingson, Jeffrey R. Binder, Kristina M. Ropella, Einat Liebenthal, and Marianna V. Spanaki
- Subjects
Artifact (error) ,genetic structures ,medicine.diagnostic_test ,Computer science ,Auditory event ,Speech recognition ,Mismatch negativity ,Magnetic resonance imaging ,Electroencephalography ,EEG-fMRI ,behavioral disciplines and activities ,Signal ,Adaptive filter ,medicine ,Median filter ,Functional magnetic resonance imaging - Abstract
The combination of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) is now being used to analyze brain function. However, the ballistocardiogram artifact (BA), which is caused by pulsatile motion related to the heart beat, dominates the EEG signal in the MR environment. The objective of this research was to reduce these MR-induced artifacts in order to extract useful information from the EEG recordings, specifically auditory event-related potentials (ERPs). The BA was reduced with an algorithm that creates an adaptive BA template by median-filtering the previous 10 artifacts in the EEG signal. The continuously updated template is then subtracted from each BA in the EEG signal. Results are reported for five subjects; the mismatch negativities (MMNs) were comparable to those previously reported in studies using auditory frequency deviants. The methods used to reduce the BA in this study demonstrate that auditory ERPs, and the subsequent MMN component, can validly be acquired in the MR environment.
- Published
- 2003
- Full Text
- View/download PDF
48. Short-term reorganization of auditory analysis induced by phonetic experience
- Author
-
Jeffrey R. Binder, Einat Liebenthal, Robert E. Remez, and Rebecca L. Piorkowski
- Subjects
Adult ,Male ,medicine.medical_specialty ,Adolescent ,Cognitive Neuroscience ,media_common.quotation_subject ,Audiology ,Auditory cortex ,behavioral disciplines and activities ,Tone (musical instrument) ,Sine wave ,Phonetics ,Perception ,otorhinolaryngologic diseases ,medicine ,Confidence Intervals ,Natural (music) ,Humans ,media_common ,Communication ,Analysis of Variance ,Brain Mapping ,business.industry ,Cognition ,Middle Aged ,Magnetic Resonance Imaging ,Formant ,Auditory Perception ,Speech Perception ,Female ,Psychology ,business ,psychological phenomena and processes - Abstract
Sine wave replicas of spoken words can be perceived both as nonphonetic auditory forms and as words, depending on a listener's experience. In this study, brain areas activated by sine wave words were studied with fMRI in two conditions: when subjects perceived the sounds spontaneously as nonphonetic auditory forms (“naïve condition”) and after instruction and brief practice attending to their phonetic attributes (“informed condition”). The test items were composed such that half replicated natural words (“phonetic items”) and the other half did not, because the tone analogs of the first and third formants had been temporally reversed (“nonphonetic items”). Subjects were asked to decide whether an isolated tone analog of the second formant (T2) presented before the sine wave word (T1234) was included in it. Experience in attending to the phonetic properties of the sinusoids interfered with this auditory matching task and was accompanied by a decrease in auditory cortex activation with word replicas but not with the acoustically matched nonphonetic items. Because the activation patterns elicited by equivalent acoustic test items depended on a listener's awareness of their phonetic potential, this indicates that the analysis of speech sounds in the auditory cortex is distinct from the simple resolution of auditory form, and is not a mere consequence of acoustic complexity. Because arbitrary acoustic patterns did not evoke the response observed for phonetic patterns, these findings suggest that the perception of speech is contingent on the presence of familiar patterns of spectral variation. The results are consistent with a short-term functional reorganization of auditory analysis induced by phonetic experience with sine wave replicas and contingent on the dynamic acoustic structure of speech.
- Published
- 2003
49. Evidence for primary auditory cortex involvement in the echo suppression precedence effect: a 3CLT study
- Author
-
Einat Liebenthal and Hillel Pratt
- Subjects
Pharmacology ,Physics ,Auditory Cortex ,medicine.medical_specialty ,Physiology ,Echo (computing) ,Auditory Threshold ,General Medicine ,Audiology ,Auditory cortex ,Lateralization of brain function ,Temporal Lobe ,Acoustic Stimulation ,Precedence effect ,Drug Discovery ,medicine ,Evoked Potentials, Auditory, Brain Stem ,Humans ,Sound Localization ,Binaural recording - Abstract
An echo lagging shortly after a source and arising from another direction perceptually blends with the source, and the location of the fused 'source-echo' is dominated by the source location (the precedence effect). The neural substrates underlying echo localization suppression remain unclear. We recently suggested an auditory evoked potentials correlate of binaural echo lateralization suppression: a significant and specific reduction in the binaural peak amplitude and area of the echo-evoked middle-latency component Pa. The binaural echo-Pa suppression depended on echo lag and correlated with the psychophysical echo lateralization suppression. In this study, the echo-Pa generators were analyzed with three-channel Lissajous trajectory (3CLT) spatio-temporal analysis, in order to identify the neural substrates involved in echo lateralization suppression. 3CLT enables reliable identification of components based on rigid geometrical properties. The results suggest that the Pa1 subcomponent of Pa, associated with primary auditory cortex activity, fully accounts for the echo-Pa suppression. This is the first physiological indication in humans of primary auditory cortex involvement in the precedence effect.
- Published
- 1997
50. Sinewave speech/ nonspeech perception: An fMRI study
- Author
-
Rebecca L. Piorkowski, Robert E. Remez, Jeffrey R. Binder, and Einat Liebenthal
- Subjects
medicine.medical_specialty ,Acoustics and Ultrasonics ,medicine.diagnostic_test ,Acoustics ,media_common.quotation_subject ,Stimulus (physiology) ,Audiology ,Auditory cortex ,Sine wave ,medicine.anatomical_structure ,Formant ,Arts and Humanities (miscellaneous) ,Gyrus ,Perception ,medicine ,Functional magnetic resonance imaging ,Psychology ,Vocal tract ,media_common - Abstract
Sinewave speech replicas follow the center frequency and amplitude of vocal tract resonances of natural speech but lack the attributes of vocal cord vibrations. They can be perceived as speech or nonspeech depending on expectation (Remez et al., 1981). Brain areas activated by sinewave replicas when perceived as nonspeech or as speech were compared with functional magnetic resonance imaging. In an auditory task, 30 naïve subjects determined whether an isolated tone analog of the second formant was included in a following three-tone sinewave complex. The second-formant tone was either aligned (phonetic stimulus) or temporally reversed relative to the other tones in the complex (auditory stimulus). Halfway through the auditory task, subjects were informed about the phonetic derivation of the stimuli and trained to recognize the original speech. In a subsequent phonetic task, subjects identified sinewave stimuli containing the phoneme /p/. Following training, an increase in behavioral response time, concurrent with a decrease in left Heschl’s gyrus activation, was observed specifically with the phonetic stimuli. This suggests that training induced automatic recognition of speech structure in the auditory cortex, which interfered with auditory processing of the phonetic stimuli. In the phonetic task, the left inferior frontal gyrus was specifically activated, implicating this area in phonetic processing.
- Published
- 2001
- Full Text
- View/download PDF