279 results for "R. Leek"
Search Results
2. Additive effects of aging and blast induced mild traumatic brain injury within white matter tracts: A novel DTI analysis approach
- Author
-
Oren Poliva, Christian Herrera, Kelli Sugai, Nicole Whittle, Marjorie R. Leek, Alex C. Yi, Sam Barnes, Barbara Holshouser, and Jonathan Henry Venezia
- Abstract
Veterans of recent military conflicts have experienced a high rate of mild traumatic brain injuries from exposure to blasts (bTBI). Difficulty detecting the neuroanatomical effects of bTBI using standard imaging protocols, including diffusion tensor imaging (DTI), has hindered the development of evidence-based treatments. A possible reason for this challenge is that many past DTI studies attempting to identify neuroanatomical markers of bTBI have ignored the broad range of cumulative blast exposure among Veterans, and therefore potentially reduced sensitivity to associations between bTBI and DTI metrics. Here, we compare commonly used DTI metrics: fractional anisotropy and mean, axial, and radial diffusivity (FA, MD, AD, RD) in U.S. Military Veterans with and without a history of blast exposure using both the traditional method of dividing participants into two equally weighted groups, and an alternative method, wherein each participant is weighted by their blast exposure quantity, severity, and recency. While no differences in FA, MD, and AD (and minimal in RD) were detected using the traditional method, the alternative method revealed diffuse and extensive changes in all four DTI metrics associated with bTBI. These effects were quantified within 80 anatomically-defined white matter tracts as the percentage of voxels with significant changes, which identified the acoustic and optic radiations, fornix, uncinate fasciculus, inferior occipito-frontal fasciculus, cingulum, and the anterior commissure as the pathways most affected by bTBI. Moreover, additive effects of aging were present in many of the same tracts suggesting that the neuroanatomical effects of bTBI may compound with age.
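The contrast between the two analysis strategies can be sketched in a few lines of Python. This is a toy illustration with synthetic data; the variable names, effect sizes, and noise levels are ours, not the authors' pipeline:

```python
import random

random.seed(1)

# Synthetic per-subject data: a continuous blast-exposure score (0 = none)
# and a fractional anisotropy (FA) value for one white-matter tract.
exposure = [random.uniform(0, 10) for _ in range(60)]
fa = [0.50 - 0.008 * e + random.gauss(0, 0.02) for e in exposure]

def mean(xs):
    return sum(xs) / len(xs)

# Traditional approach: collapse exposure to two equally weighted groups.
blast = [f for e, f in zip(exposure, fa) if e > 0.5]
control = [f for e, f in zip(exposure, fa) if e <= 0.5]
group_difference = mean(control) - mean(blast) if control else float("nan")

# Alternative approach: regress FA on the continuous exposure score, so
# each participant is effectively weighted by exposure quantity.
mx, my = mean(exposure), mean(fa)
slope = sum((x - mx) * (y - my) for x, y in zip(exposure, fa)) / \
        sum((x - mx) ** 2 for x in exposure)

print(round(slope, 4))  # negative: FA declines with cumulative exposure
```

The regression retains the graded exposure information that the binary split discards, which is the sensitivity argument the abstract makes.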
- Published
- 2023
3. Effects of combined gentamicin and furosemide treatment on cochlear ribbon synapses
- Author
-
Jessica G. Gonzalez, Alisa P. Hetrick, Glen K. Martin, Liana Sargsyan, Hongzhe Li, and Marjorie R. Leek
- Subjects
Ribbon synapse, Toxicology, Mice (Inbred C57BL), Furosemide, Cochlea, Neuronal Plasticity, Dose-Response Relationship (Drug), Aminoglycoside, Anti-Bacterial Agents, Drug Combinations, Synapses, Synaptic plasticity, Toxicity, Synaptopathy, Hair cell, Gentamicins
- Abstract
It is well-established that aminoglycoside antibiotics are ototoxic, and the toxicity can be drastically enhanced by the addition of loop diuretics, resulting in rapid irreversible hair cell damage. Using both electrophysiologic and morphological approaches, we investigated whether this combined treatment affected the cochlea at the region of ribbon synapses, consequently resulting in auditory synaptopathy. A series of varied gentamicin and furosemide doses were applied to C57BL/6 mice, and auditory brainstem responses (ABR) and distortion product otoacoustic emissions (DPOAE) were measured to assess ototoxic damage within the cochlea. In brief, the treatment effectively induced cochlear damage and promoted a certain reorganization of synaptic ribbons, while a reduction of ribbon density only occurred after a substantial loss of outer hair cells. In addition, both the ABR wave I amplitude and the ribbon density were elevated in low-dose treatment conditions, but a correlation between the two events was not significant for individual cochleae. In sum, combined gentamicin and furosemide treatment, at titrated doses below those that produce hair cell damage, typically triggers synaptic plasticity rather than a permanent synaptic loss.
- Published
- 2021
4. High-level speaker verification with support vector machines.
- Author
-
William M. Campbell, Joseph P. Campbell, Douglas A. Reynolds, Douglas A. Jones, and Timothy R. Leek
- Published
- 2004
5. Phonetic Speaker Recognition with Support Vector Machines.
- Author
-
William M. Campbell, Joseph P. Campbell, Douglas A. Reynolds, Douglas A. Jones, and Timothy R. Leek
- Published
- 2003
6. Suprathreshold Differences in Competing Speech Perception in Older Listeners With Normal and Impaired Hearing
- Author
-
Marjorie R. Leek, Michael P. Lindeman, and Jonathan H. Venezia
- Subjects
Auditory perception, Speech perception, Hearing Loss (Sensorineural), Hearing Tests, Auditory Threshold, Phonetics, Cognition, Audiology, Speech and Hearing, Listening comprehension, Perceptual Masking, Aged
- Abstract
Purpose Age-related declines in auditory temporal processing and cognition make older listeners vulnerable to interference from competing speech. This vulnerability may be increased in older listeners with sensorineural hearing loss due to additional effects of spectral distortion and accelerated cognitive decline. The goal of this study was to uncover differences between older hearing-impaired (OHI) listeners and older normal-hearing (ONH) listeners in the perceptual encoding of competing speech signals. Method Age-matched groups of 10 OHI and 10 ONH listeners performed the coordinate response measure task with a synthetic female target talker and a male competing talker at a target-to-masker ratio of +3 dB. Individualized gain was provided to OHI listeners. Each listener completed 50 baseline and 800 “bubbles” trials in which randomly selected segments of the speech modulation power spectrum (MPS) were retained on each trial while the remainder was filtered out. Average performance was fixed at 50% correct by adapting the number of segments retained. Multinomial regression was used to estimate weights showing the regions of the MPS associated with performance (a “classification image” or CImg). Results The CImg weights were significantly different between the groups in two MPS regions: a region encoding the shared phonetic content of the two talkers and a region encoding the competing (male) talker's voice. The OHI listeners demonstrated poorer encoding of the phonetic content and increased vulnerability to interference from the competing talker. Individual differences in CImg weights explained over 75% of the variance in baseline performance in the OHI listeners, whereas differences in high-frequency pure-tone thresholds explained only 10%. 
Conclusion Suprathreshold deficits in the encoding of low- to mid-frequency (~5–10 Hz) temporal modulations—which may reflect poorer “dip listening”—and auditory grouping at a perceptual and/or cognitive level are responsible for the relatively poor performance of OHI versus ONH listeners on a different-gender competing speech task. Supplemental Material https://doi.org/10.23641/asha.12568472
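The "bubbles" logic (estimating which stimulus regions drive performance from trial-wise random masks) can be illustrated with a toy reverse-correlation sketch. This uses a simple difference-of-proportions classification image rather than the study's multinomial regression, and all segment indices and probabilities are invented:

```python
import random

random.seed(0)

N_SEG, N_TRIALS = 20, 2000
RELEVANT = {3, 7}  # segments that actually drive performance (toy ground truth)

trials, correct = [], []
for _ in range(N_TRIALS):
    retained = [random.random() < 0.5 for _ in range(N_SEG)]
    # Toy observer: mostly correct only when both relevant segments survive.
    p = 0.8 if all(retained[j] for j in RELEVANT) else 0.35
    trials.append(retained)
    correct.append(random.random() < p)

# Classification image: P(segment retained | correct) - P(retained | incorrect)
def p_retained(j, keep):
    sel = [t[j] for t, c in zip(trials, correct) if c == keep]
    return sum(sel) / len(sel)

cimg = [p_retained(j, True) - p_retained(j, False) for j in range(N_SEG)]
top = sorted(sorted(range(N_SEG), key=lambda j: cimg[j], reverse=True)[:2])
print(top)  # the relevant segments carry the largest weights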
- Published
- 2020
7. Encoding of vocal pitch in the dorsal premotor cortex during multi-talker speech recognition
- Author
-
Jonathan Henry Venezia, Christian Herrera Ortiz, Nicole Whittle, Marjorie R. Leek, Samuel Barnes, Barbara Holshouser, and Alex C. Yi
- Abstract
In a recent study (Venezia et al., 2021), left dorsal premotor cortex (dPM) responded to vocal pitch during a degraded speech recognition task, but only when speech was rated as unintelligible. Crucially, vocal pitch was not relevant to the task. The present fMRI study (N = 25) tests the hypothesis that left dPM will respond to vocal pitch for increasingly intelligible speech in a multi-talker speech recognition task that emphasizes pitch for talker segregation. We applied spectrotemporal modulation distortion to independently modulate vocal pitch and phonetic content in two-talker (male/female) utterances across two conditions (Competing, Unison), only one of which required pitch-based segregation (Competing). A Bayesian hierarchical drift-diffusion model (HDDM) was used to predict speech recognition performance (3-AFC response times, accuracy coded) from the pattern of spectrotemporal distortion imposed on each trial. The model’s drift rate parameter, a d’-like measure of speech recognition performance, was strongly associated with vocal pitch for Competing but not Unison. In a second, Bayesian hierarchical brain-behavior model, we then regressed the HDDM’s posterior predictions of trial-wise drift rate against trial-wise fMRI activation amplitude. A significant positive association with overall drift rate, reflecting contributions from vocal pitch and/or phonetic content, was observed in left dPM in both conditions. A significant positive association with ‘pitch-restricted’ drift rate, reflecting only contributions from vocal pitch, was observed in left dPM but only in the Competing condition. These findings suggest that left dPM: (i) responds to vocal pitch; and (ii) can operate in an auditory-pitch mode and a phonetic-speech mode.
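A minimal sketch of the drift-diffusion idea underlying the HDDM, as a plain trial simulator rather than the Bayesian hierarchical model used in the study (boundary, step size, and drift values are illustrative):

```python
import random

random.seed(2)

def ddm_trial(drift, boundary=1.0, dt=0.001, noise_sd=1.0):
    """Simulate one drift-diffusion trial; return (correct, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + random.gauss(0, noise_sd) * dt ** 0.5
        t += dt
    return x > 0, t

def accuracy(drift, n=500):
    return sum(ddm_trial(drift)[0] for _ in range(n)) / n

# Drift rate behaves like a d'-style sensitivity measure: higher drift,
# more accurate (and faster) decisions.
easy, hard = accuracy(2.0), accuracy(0.2)
print(easy > hard)
```

In the abstract's terms, a spectrotemporal distortion pattern that spares vocal pitch would map onto a higher trial-wise drift rate in the Competing condition.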
- Published
- 2021
8. Estimating the contribution of central noise from composite performance across multiple tasks
- Author
-
Jonathan H. Venezia, Nicole Whittle, Marjorie R. Leek, and Christian Herrera Ortiz
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous)
- Abstract
According to signal detection theory, the ability to detect a signal is limited only by internal noise, which comprises peripheral and central sources. Here, we develop a statistical approach to parse central from peripheral noise. Fifty-two Veterans (mean age = 47.8, range = 30–60) with normal or near-normal hearing performed AXB discrimination for several temporal processing tasks: gap duration discrimination, forward masking, frequency modulation detection, and interaural phase modulation detection. After training, a single adaptive run (40 reversals) was completed for each task. Subjects also completed speech-in-noise testing (“Theo-Victor-Michael”) with four masker types (48 trials ea.): speech-shaped noise, speech-envelope modulated noise, one and two competing talkers. Composite speech performance was estimated using principal component analysis. Bayesian hierarchical regression was used to estimate two-parameter psychometric functions (threshold, slope) simultaneously for all temporal tasks and subjects. Crucially, fixed (group-level) thresholds were estimated per task but only a single random (subject-level) intercept was estimated (mean across-task deviation from the group thresholds). We assume central noise is the primary factor limiting across-task performance. The principal speech scores were entered as regressors on this “central threshold.” Indeed, the central threshold was correlated with the principal speech scores, suggesting that central noise limits both temporal processing and speech-in-noise performance.
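The key modeling move (one group-level threshold per task plus a single subject-level mean deviation) can be mimicked with plain averages. This is a point-estimate stand-in for the Bayesian hierarchical regression, and the data are invented:

```python
# Toy data: thresholds (arbitrary units) for 3 subjects x 4 temporal tasks.
thresholds = {
    "s1": {"gap": 5.0, "fwd_mask": 11.0, "fm": 2.0, "ipm": 21.0},
    "s2": {"gap": 7.0, "fwd_mask": 13.0, "fm": 4.0, "ipm": 23.0},
    "s3": {"gap": 4.0, "fwd_mask": 10.0, "fm": 1.0, "ipm": 20.0},
}
tasks = ["gap", "fwd_mask", "fm", "ipm"]

# Fixed (group-level) threshold per task:
group = {t: sum(s[t] for s in thresholds.values()) / len(thresholds)
         for t in tasks}

# Single random (subject-level) intercept: the mean across-task deviation
# from the group thresholds -- the "central threshold" proxy.
central = {name: sum(s[t] - group[t] for t in tasks) / len(tasks)
           for name, s in thresholds.items()}

print({k: round(v, 2) for k, v in central.items()})
```

A subject who is uniformly worse than the group across all four tasks gets a large positive central threshold, which is then regressed on the speech scores.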
- Published
- 2022
9. Envelope following responses following partial cochlear deafferentation in guinea pigs
- Author
-
JoAnn McGee, Xiaohui Lin, Ashley Vazquez, Hongzhe Li, Jonathan H. Venezia, Marjorie R. Leek, and Edward J. Walsh
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous)
- Abstract
The sound-induced loss of ribbon synapses connecting inner hair cells to auditory nerve fibers primarily exhibiting high thresholds has been the subject of numerous investigations. The condition resulting from this inner ear abnormality is commonly referred to as hidden hearing loss, a name that reflects the relative invulnerability of low threshold auditory nerve fibers that survive noise-exposure and function normally. Although auditory sensitivity recovers completely among animals experiencing hidden hearing loss, recovery of auditory brainstem response amplitudes is incomplete. The residual loss of function reflected in diminished response amplitudes to transient stimuli serves as a highly reliable indicator of the condition. The extent to which responses to sustained stimuli might serve as indicators of pathology is less clear. To that end, we will review findings related to differences in spectral magnitudes of envelope following responses to a battery of sinusoidally-amplitude modulated test conditions that include level dependent response growth, modulation transfer functions under a variety of carrier conditions, the influence of varying modulation depths, as well as the influence of maskers on responses acquired from control and noise-exposed animals. [Work supported by the Department of Defense Award #W81XWH-19-1-0862.]
- Published
- 2022
10. Relation between temporal fine structure processing and global processing speed
- Author
-
Kelli Sugai, Nicole Whittle, Christian Herrera Ortiz, Marjorie R. Leek, Grace Lee, and Jonathan H. Venezia
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous)
- Abstract
Strelcyk et al. (2019) recently found that interaural phase discrimination in older hearing-impaired listeners was correlated with both visuospatial processing speed and interaural level discrimination. This suggests that temporal fine structure (TFS) processing relies on global processing speed and/or spatial cognition, though it is possible that, generally, complex auditory discrimination engages multiple cognitive domains. Here, 50 Veterans (mean age = 48.1, range = 30–60) with normal or near-normal hearing completed batteries of temporal processing and cognitive tests. Composite cognitive test scores reflecting processing speed/executive function (PS-EX) and working memory (WM) were obtained. Temporal processing tasks included measures of envelope (ENV; gap duration discrimination, forward masking) and TFS (frequency modulation detection, interaural phase modulation detection) processing. Bayesian hierarchical regression was used to fit psychometric functions simultaneously to all ENV and TFS tasks in all subjects. Fixed effects of PS-EX and WM on thresholds and slopes were estimated for the psychometric functions in each temporal task. In general, (i) PS-EX and WM influenced both TFS and ENV thresholds, but not slopes; and (ii) TFS thresholds were best explained by PS-EX scores while ENV thresholds were best explained by WM scores. These findings suggest a specific relation between global processing speed and TFS.
- Published
- 2022
11. Role of superior temporal gyrus and planum temporale in talker segregation
- Author
-
Christian Herrera Ortiz, Nicole Whittle, Marjorie R. Leek, Samuel Barnes, Barbara Holshouser, Alex Yi, and Jonathan H. Venezia
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous)
- Abstract
Recent studies suggest the brain tracks both attended and unattended speech streams. Here, we describe the cortical mechanisms that support active talker segregation by vocal gender. Thirty-three participants with normal or near-normal hearing performed a competing speech task during fMRI scanning. The target (competing) talker was female (male). Spectrotemporal modulation filtering was applied to stochastically modulate female and male vocal pitch across trials. Using the modulation-filter patterns as predictors, spectrotemporal receptive fields (STRFs) were obtained at each voxel using coordinate descent. STRF weights associated with female- (∼6 cyc/kHz) and male-talker (∼12 cyc/kHz) pitch were analyzed across subjects to identify pitch-sensitive voxels (logical OR, corrected p
- Published
- 2022
12. Cortical Networks for Recognition of Speech with Simultaneous Talkers
- Author
-
Christian Herrera Ortiz, Nicole Whittle, Marjorie R. Leek, Christian Brodbeck, Grace Lee, Caleb Barcenas, Samuel Barnes, Barbara Holshouser, Alex C. Yi, and Jonathan Henry Venezia
- Abstract
The relative contributions of superior temporal (auditory) vs. inferior frontal and parietal (sensorimotor) networks to recognition of speech against competing speech remain unclear, although the contributions themselves are well established. Here, we use fMRI with spectrotemporal receptive field (STRF) modeling to examine the speech information represented in temporal vs. frontoparietal networks for two speech recognition tasks with and without a competing talker. We also generate ‘neurometric functions’ that describe the relative contributions of these networks to speech recognition performance. Specifically, 25 listeners completed two versions of a 3-Alternative Forced-Choice (3-AFC) competing speech task: “Unison” and “Competing”, in which a female (target) and a male (competing) talker uttered identical or different phrases, respectively. Spectrotemporal modulation filtering was applied to the two-talker mixtures and a “boosting” procedure was used to generate STRF models to predict brain activation from differences in spectrotemporal distortion on each trial. STRF model predictive accuracy was better for Competing than Unison in a bilateral temporal lobe network, and better for Unison than Competing in a large network of frontoparietal and midline brain regions. Agglomerative STRF clustering further revealed three subnetworks: a bilateral superior temporal Intelligibility network, a frontoparietal Distortion network, and a Semantic network distributed across classic semantic memory regions. The Intelligibility and Semantic networks responded primarily to spectrotemporal cues associated with speech intelligibility, regardless of condition, while the Distortion network responded to the absence of such cues in both conditions, but also to the absence (presence) of target-talker (competing-talker) vocal pitch in the Competing condition, suggesting a generalized response to signal degradation. 
Neurometric function analysis showed that: (i) activation in the Intelligibility network was strongly positively correlated with behavioral performance and that this relation was entirely STRF-mediated; and (ii) activation in the Distortion network was strongly negatively correlated with performance and this relation was only partially STRF-mediated. The contributions to performance from these networks were partially independent and of roughly equal magnitude. Finally, activation in the Semantic network was weakly positively correlated with performance and this relation was entirely superseded by those in the Intelligibility and Distortion networks. We conclude: (a) superior temporal regions play a bottom-up, perceptual role in competing speech tasks; (b) frontoparietal regions play a top-down, task-dependent role in competing speech tasks that scales with listening effort; and (c) performance ultimately relies on dynamic interactions between these networks, with additional contributions from semantic regions that likely scale with the semantic predictability of the speech material.
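The "boosting" fit can be imitated with a greedy coordinate-wise sketch: a toy linear encoding model in which trial-wise activation is predicted from a few distortion features by repeatedly nudging the single best-correlated weight. Features, step size, and data are invented, not the study's STRF pipeline:

```python
import random

random.seed(3)

# Toy encoding model: activation depends on two of three spectrotemporal-
# distortion features; recover the weights by greedy coordinate steps.
true_w = [0.8, 0.0, -0.5]
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(400)]
y = [sum(w * x for w, x in zip(true_w, row)) + random.gauss(0, 0.3)
     for row in X]

w = [0.0, 0.0, 0.0]
step = 0.02
for _ in range(200):
    resid = [yi - sum(wj * xj for wj, xj in zip(w, row))
             for row, yi in zip(X, y)]
    # Gradient of squared error w.r.t. each weight (up to sign/scale):
    grads = [sum(r * row[j] for r, row in zip(resid, X)) for j in range(3)]
    j = max(range(3), key=lambda k: abs(grads[k]))  # best single coordinate
    w[j] += step if grads[j] > 0 else -step

print([round(v, 1) for v in w])  # close to the generating weights
```

The sparse, stagewise updates are what make boosting-style STRF estimates interpretable voxel by voxel.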
- Published
- 2021
13. Case Study from Industry: Process Modeling in Resin Transfer Molding as a Method to Enhance Product Quality.
- Author
-
Wing K. Chui, James Glimm, Folkert M. Tangerman, A. P. Jardine, J. S. Madsen, T. M. Donnellan, and R. Leek
- Published
- 1997
14. A guinea pig model of hidden hearing loss: Prelude to the development of a human model
- Author
-
Joann McGee, Xiaohui Lin, Ashley Vazquez, Hongzhe Li, Marjorie R. Leek, Jonathan H. Venezia, Nicole Whittle, and Edward J. Walsh
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous)
- Abstract
Cochlear synaptopathy, otherwise known as hidden hearing loss, has been at least partially characterized in a number of mammalian species. Although well-established generally, the disorder is incompletely understood, particularly with regard to the extent of interspecies pathology associated with the condition. Furthermore, the extent to which recovery of lost function is achieved among species thus far studied is also incompletely understood. In this context, the existence of evidence suggesting that humans experience this form of synapse pathology calls for the development of a noninvasive protocol designed to further address the question. If confirmed, where on the disorder spectrum humans are positioned takes on a heightened sense of importance. To that end, a protocol that reliably identifies the disorder in a guinea pig model is under development and will, if successful, serve as the foundation for the development of an equivalent human protocol employing the same noninvasive electrophysiological strategy. In this report, preliminary findings from an ongoing study centered on a battery of electrophysiology and immunohistochemical studies supporting model development in guinea pigs will be reviewed with the goal of identifying response outcomes with diagnostic potential. [Work supported by the Department of Defense Award No. W81XWH-19-1-0862.]
- Published
- 2022
15. Integration of Quantification of Margins and Uncertainties Methodology into Parallel Discrete Event Simulator Framework
- Author
-
James R. Leek
- Subjects
Computer science, Simulation
- Published
- 2020
16. Sentence perception in noise by hearing-aid users predicted by syllable-constituent perception and the use of context
- Author
-
Sandra Gordon-Salant, James D. Miller, Judy R. Dubno, Marjorie R. Leek, David J. Wark, Charles S. Watson, Pamela E. Souza, and Jayne B. Ahlstrom
- Subjects
Hearing aid, Speech Communication, Speech perception, Acoustics and Ultrasonics, Speech recognition, Context (language use), Hearing Aids, Arts and Humanities (miscellaneous), Phonetics, Perception, Persons With Hearing Impairments, Acoustic Stimulation, Syllable, Noise, Perceptual Masking, Sentence
- Abstract
Masked sentence perception by hearing-aid users is strongly correlated with three variables: (1) the ability to hear phonetic details as estimated by the identification of syllable constituents in quiet or in noise; (2) the ability to use situational context that is extrinsic to the speech signal; and (3) the ability to use inherent context provided by the speech signal itself. This approach is called "the syllable-constituent, contextual theory of speech perception" and is supported by the performance of 57 hearing-aid users in the identification of 109 syllable constituents presented in a background of 12-talker babble and the identification of words in naturally spoken sentences presented in the same babble. A simple mathematical model, inspired in large part by Boothroyd and Nittrouer [(1988). J. Acoust. Soc. Am. 84, 101-114] and Fletcher [Allen (1996) J. Acoust. Soc. Am. 99, 1825-1834], predicts sentence perception from listeners' abilities to recognize isolated syllable constituents and to benefit from context. When the identification accuracy of syllable constituents is greater than about 55%, individual differences in context utilization play a minor role in determining the sentence scores. As syllable-constituent scores fall below 55%, individual differences in context utilization play an increasingly greater role in determining sentence scores. Implications for hearing-aid design goals and fitting procedures are discussed.
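The context model's core relation, the k-factor of Boothroyd and Nittrouer (1988), can be written directly; the parameter values below are illustrative, not fitted values from the study:

```python
def with_context(p_isolated, k):
    """Boothroyd-Nittrouer k-factor: predicted recognition probability when
    context raises the effective number of independent recognition channels."""
    return 1.0 - (1.0 - p_isolated) ** k

# Context helps most when constituent scores are low, mirroring the ~55%
# pivot described in the abstract:
low_gain = with_context(0.40, 3.0) - with_context(0.40, 1.0)
high_gain = with_context(0.80, 3.0) - with_context(0.80, 1.0)
print(low_gain > high_gain)
```

As constituent identification rises past roughly 55%, the curve flattens, so individual differences in k matter less for the predicted sentence score.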
- Published
- 2020
17. Effects of word frequency and phonological neighborhood density on speech recognition in background noise and competing speech
- Author
-
Marjorie R. Leek, Nicole Whittle, Jonathan H. Venezia, Christian Herrera Ortiz, Mark Jenkins, and Jerome Heidrich
- Subjects
Background noise, Word frequency, Noise, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Speech recognition, Multinomial model, Syllable
- Abstract
Clinical speech-in-noise tests typically use materials that lack contextual constraint or are balanced for linguistic properties like word/phoneme frequency. However, real-world linguistic context effects can be substantial and vary by listener and scenario. Here, 38 participants completed the Theo-Victor-Michael (TVM) speech test in four types of background: speech shaped noise (SSN), speech-envelope modulated noise (envSSN), one competing talker (1T), and two competing talkers (2T) (Helfer and Freyman, 2009). The TVM is a matrix test using keywords from a corpus of one- and two-syllable nouns that vary considerably in word frequency (FREQ) and phonological neighborhood density (DENS). Bayesian logistic regression was used to estimate the effects of FREQ/DENS on TVM performance. A multinomial model was used for 1T/2T to assess reporting of target and distractor keywords. Overall, percent-correct recognition increased with increasing keyword FREQ and decreased with increasing keyword DENS. Effects were larger in SSN/envSSN than 1T/2T. Statistically significant but small effects of FREQ/DENS were observed on distractor responses in 1T/2T. Adjusting performance for FREQ/DENS substantially shifted the distribution of scores but only for SSN/envSSN. Performance in 1T/2T may be dominated by non-linguistic factors, and/or less sensitive to FREQ/DENS due to higher difficulty or linguistic competition from the background talkers. [Work supported by VA RR&D Service.]
- Published
- 2021
18. An auditory-like region in the motor cortex
- Author
-
Christian Herrera Ortiz, Nicole Whittle, Jonathan H. Venezia, Marjorie R. Leek, Barbara A. Holshouser, Samuel Barnes, and Alex Yi
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Neuroscience, Motor cortex
- Published
- 2021
19. Prediction of speech recognition in background noise and competing speech from suprathreshold auditory and cognitive measures
- Author
-
Jonathan H. Venezia, Grace J. Lee, Caleb Barcenas, Christian Herrera Ortiz, Marjorie R. Leek, and Nicole Whittle
- Subjects
Background noise, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Speech recognition, Cognition, Psychology
- Published
- 2021
20. Beyond Information Literacy: Rethinking Approaches to the College Public Speaking Curriculum
- Author
-
Carl J. Brown and Danielle R. Leek
- Subjects
Public speaking, Scholarship, Information literacy, Best practice, Learning theory, Rubric, Psychology, Library instruction, Curriculum
- Abstract
Purpose – The purpose of this chapter is to assess the avenues through which traditional notions of information literacy skills shape oral communication curriculum and to identify steps that can be taken to transform the experience of students in the public speaking classroom so that they are offered an opportunity to develop understandings of how they use information to learn.Approach – This chapter engages in an analysis of teaching materials and best practice scholarship used in the traditional college public speaking classroom. An informed learning perspective is applied to this corpus to identify the ways in which an information literacy skills approach is reflected in current practice.Findings – The analysis highlights the prevalence of an information literacy skills approach throughout the oral communication curriculum. Textbooks, assignment types and guidelines, along with grading rubrics and instructor feedback all perpetuate a skills approach. Outside class support, including peer tutors and library instruction, also contribute to a focus on information literacy over informed learning.Implications – Informed learners are better prepared to engage and apply information across contexts and to use information to continue learning. Informed learners are reflective on the knowledge they gain through information use. Therefore, this chapter concludes that public speaking courses, along with the communication centers and libraries that support oral communication instruction, should embrace an informed learning approach to the development of course materials, assignments, and teaching.Originality/value – Suggestions for reframing public speaking curriculum and support from the informed learning perspective are provided.
- Published
- 2019
21. Timbre Discrimination by Hearing-Impaired Listeners
- Author
-
Van Summers and Marjorie R. Leek
- Subjects
Hearing impaired, Audiology, Psychology, Timbre
- Published
- 2019
22. Modeling Hearing Loss as an Additional Source of Masking
- Author
-
Larry E. Humes, Donna L. Neff, Walt Jesteadt, and Marjorie R. Leek
- Subjects
Masking, Hearing loss, Audiology
- Published
- 2019
23. Policy debate pedagogy: a complementary strategy for civic and political engagement through service-learning
- Author
-
Danielle R. Leek
- Subjects
Higher education, Communication, Teaching method, Information literacy, Service-learning, Education, Civics, Political science, Pedagogy, Curriculum development, Civic engagement, Curriculum
- Abstract
National offices and organizations, such as the U.S. Department of Education and the Association of American Colleges & Universities, have called for higher education curriculum that better prepares students for lifelong civic engagement. Many institutions respond to this appeal by creating more service-learning opportunities for students. However, service-learning alone does not promote the political learning needed for students to have effective engagement. This essay explores the potential of policy debate to complement service-learning as a means of civic education. Policy debate improves civic education by furthering information literacy and critical reasoning skills beyond the classrooms.
- Published
- 2016
24. Does he ever sleep? Contributions to science and scientific administration
- Author
-
Marjorie R. Leek
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Administration
- Published
- 2020
25. Auditory stream segregation of iterated rippled noises by normal-hearing and hearing-impaired listeners
- Author
-
Daniel E. Shearer, Michelle R. Molis, Keri O. Bennett, and Marjorie R. Leek
- Subjects
Auditory stream segregation, Auditory perception, Acoustics and Ultrasonics, Hearing loss, Arts and Humanities (miscellaneous), Tonality, Pitch Perception, Auditory Threshold, Psychological and Physiological Acoustics, Persons With Hearing Impairments, Acoustic Stimulation, Case-Control Studies, Cues
- Abstract
Individuals with hearing loss are thought to be less sensitive to the often subtle variations of acoustic information that support auditory stream segregation. Perceptual segregation can be influenced by differences in both the spectral and temporal characteristics of interleaved stimuli. The purpose of this study was to determine what stimulus characteristics support sequential stream segregation by normal-hearing and hearing-impaired listeners. Iterated rippled noises (IRNs) were used to assess the effects of tonality, spectral resolvability, and hearing loss on the perception of auditory streams in two pitch regions, corresponding to 250 and 1000 Hz. Overall, listeners with hearing loss were significantly less likely to segregate alternating IRNs into two auditory streams than were normally hearing listeners. Low pitched IRNs were generally less likely to segregate into two streams than were higher pitched IRNs. High-pass filtering was a strong contributor to reduced segregation for both groups. The tonality, or pitch strength, of the IRNs had a significant effect on streaming, but the effect was similar for both groups of subjects. These data demonstrate that stream segregation is influenced by many factors including pitch differences, pitch region, spectral resolution, and degree of stimulus tonality, in addition to the loss of auditory sensitivity.
- Published
- 2018
26. Communication Masking by Man-Made Noise
- Author
-
Marjorie R. Leek and Robert J. Dooling
- Subjects
0106 biological sciences ,Auditory masking ,Broadband noise ,Computer science ,Speech recognition ,fungi ,05 social sciences ,food and beverages ,Critical ratio ,010603 evolutionary biology ,01 natural sciences ,otorhinolaryngologic diseases ,0501 psychology and cognitive sciences ,050102 behavioral science & comparative psychology - Abstract
Conservationists and regulators are often challenged with determining the masking effects of man-made sound introduced into the environment. A considerable amount is known from laboratory studies of auditory masking of communication signals in birds, so it is now feasible to develop a functional model for estimating the masking effects of noise on acoustic communication in natural environments, not only for birds but for other animals as well. Broadband noise can affect the detection, discrimination, and recognition of sounds, and can determine whether acoustic communication is comfortable or challenging. Estimates of these effects can be obtained from a simple measure called the critical ratio. Critical ratio data are available for humans and a wide variety of other animals. Because humans have smaller critical ratios (i.e., hear better in noise) than other animals, human listeners can be used as a crude proxy for estimating the limits of effects on animals: if a human listener can barely hear a signal in noise in the environment, it is unlikely that an animal can hear it. The key to estimating the amount of masking that noise can produce for animals in their natural habitats is measuring or estimating the signal and noise levels precisely at the animal's ears in complex environments. Once that is done, a substantial body of comparative laboratory critical ratio data, especially for birds, can be used to predict the effect of noise on acoustic communication. Although best developed for birds, these general principles should hold for all animals.
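The critical-ratio logic above is simple arithmetic: the predicted masked threshold for a signal is the noise spectrum level plus the listener's critical ratio. A minimal sketch, with illustrative function names and level values (not taken from the chapter):

```python
def masked_threshold_db(noise_spectrum_level_db_hz, critical_ratio_db):
    """Predicted masked detection threshold (dB SPL) for a tonal signal.

    The critical ratio is defined as the signal level at masked threshold
    minus the spectrum level (dB/Hz) of the broadband masking noise, so the
    predicted threshold is simply their sum.
    """
    return noise_spectrum_level_db_hz + critical_ratio_db


def is_audible(critical_ratio_db, signal_level_db, noise_spectrum_level_db_hz):
    """True if the signal reaches the listener's predicted masked threshold."""
    return signal_level_db >= masked_threshold_db(noise_spectrum_level_db_hz,
                                                  critical_ratio_db)


# Illustrative numbers only: with a 40 dB/Hz noise floor and a 24-dB critical
# ratio, a 60-dB signal falls below the 64-dB predicted masked threshold.
human_hears = is_audible(critical_ratio_db=24.0, signal_level_db=60.0,
                         noise_spectrum_level_db_hz=40.0)
```

Because other animals typically have larger critical ratios than humans, a signal that fails this test for a human listener would fail it by an even wider margin for the animal, which is the proxy argument made above.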
- Published
- 2018
27. Olivocochlear Efferent Activity is Associated With the Slope of the Psychometric Function of Speech Recognition in Noise
- Author
-
Marjorie R. Leek, Ian B. Mertes, and Erin C. Wilbanks
- Subjects
Adult ,Male ,medicine.medical_specialty ,Speech perception ,Psychometrics ,Hearing loss ,media_common.quotation_subject ,Speech recognition ,Efferent ,Otoacoustic Emissions, Spontaneous ,Audiology ,Olivary Nucleus ,Article ,03 medical and health sciences ,Speech and Hearing ,0302 clinical medicine ,Psychometric function ,Hearing ,Perception ,otorhinolaryngologic diseases ,medicine ,Humans ,030223 otorhinolaryngology ,Hearing Loss ,media_common ,Decibel ,Aged ,business.industry ,Middle Aged ,Cochlea ,Noise ,Otorhinolaryngology ,Reflex ,Evoked Potentials, Auditory ,Speech Perception ,Female ,medicine.symptom ,business ,Perceptual Masking ,030217 neurology & neurosurgery - Abstract
OBJECTIVES The medial olivocochlear (MOC) efferent system can modify cochlear function to improve sound detection in noise, but its role in speech perception in noise is unclear. The purpose of this study was to determine the association between MOC efferent activity and performance on two speech-in-noise tasks at two signal-to-noise ratios (SNRs). It was hypothesized that efferent activity would be more strongly correlated with performance at the more challenging SNR, relative to performance at the less challenging SNR. DESIGN Sixteen adults aged 35 to 73 years participated. Subjects had pure-tone averages ≤25 dB HL and normal middle ear function. High-frequency pure-tone averages were computed across 3000 to 8000 Hz and ranged from 6.3 to 48.8 dB HL. Efferent activity was assessed using contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) measured in right ears, and MOC activation was achieved by presenting broadband noise to left ears. Contralateral suppression was expressed as the decibel change in TEOAE magnitude obtained with versus without the presence of the broadband noise. TEOAE responses were also examined for middle ear muscle reflex activation and synchronous spontaneous otoacoustic emissions (SSOAEs). Speech-in-noise perception was assessed using the closed-set coordinate response measure word recognition task and the open-set Institute of Electrical and Electronics Engineers sentence task. Speech and noise were presented to right ears at two SNRs. Performance on each task was scored as percent correct. Associations between contralateral suppression and speech-in-noise performance were quantified using partial rank correlational analyses, controlling for the variables age and high-frequency pure-tone average. RESULTS One subject was excluded due to probable middle ear muscle reflex activation. Subjects showed a wide range of contralateral suppression values, consistent with previous reports. 
Three subjects with SSOAEs had contralateral suppression results similar to those of subjects without SSOAEs. The magnitude of contralateral suppression was not significantly correlated with speech-in-noise performance on either task at a single SNR (p > 0.05), contrary to hypothesis. However, contralateral suppression was significantly correlated with the slope of the psychometric function, computed as the difference between performance levels at the two SNRs divided by 3 dB (the difference between the two SNRs), for the coordinate response measure task (partial rs = 0.59; p = 0.04) and for the Institute of Electrical and Electronics Engineers task (partial rs = 0.60; p = 0.03). CONCLUSIONS In a group of primarily older adults with normal hearing or mild hearing loss, olivocochlear efferent activity assessed using contralateral suppression of TEOAEs was not associated with speech-in-noise performance at a single SNR. However, auditory efferent activity appears to be associated with the slope of the psychometric function for both a word and a sentence recognition task in noise. Results suggest that individuals with stronger MOC efferent activity tend to be more responsive to changes in SNR, such that small increases in SNR result in better speech-in-noise performance relative to individuals with weaker MOC efferent activity. Additionally, the results suggest that the slope of the psychometric function may be a more useful metric than performance at a single SNR when examining the relationship between speech recognition in noise and MOC efferent activity.
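The slope measure described in the abstract is simple arithmetic: the difference between percent-correct scores at the two SNRs divided by the 3-dB difference between them. A minimal sketch, with illustrative function and variable names:

```python
def psychometric_slope(pct_correct_harder_snr, pct_correct_easier_snr,
                       snr_step_db=3.0):
    """Slope of the psychometric function, in percent correct per dB.

    Computed as in the abstract: the difference between performance levels
    at the two SNRs divided by the dB difference between them (3 dB here).
    """
    return (pct_correct_easier_snr - pct_correct_harder_snr) / snr_step_db


# e.g., a listener scoring 45% at the harder SNR and 75% at the easier SNR,
# 3 dB apart, has a slope of 10 %/dB.
slope = psychometric_slope(45.0, 75.0)
```

A steeper slope means the listener gains more intelligibility per dB of SNR improvement, which is the sense in which stronger-MOC listeners are described as "more responsive to changes in SNR."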
- Published
- 2018
28. Acoustical Society of America Silver Medal in Psychological and Physiological Acoustics: Roy D. Patterson
- Author
-
Ray Meddis, William A. Yost, and Marjorie R. Leek
- Subjects
Medal ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,media_common.quotation_subject ,Art history ,Art ,media_common - Published
- 2015
29. Syllable-constituent perception by hearing-aid users: Common factors in quiet and noise
- Author
-
Judy R. Dubno, Marjorie R. Leek, Sandra Gordon-Salant, David J. Wark, Charles S. Watson, James D. Miller, Jayne B. Ahlstrom, and Pamela E. Souza
- Subjects
Hearing aid ,Male ,Acoustics and Ultrasonics ,medicine.medical_treatment ,Audiology ,01 natural sciences ,0302 clinical medicine ,Hearing Aids ,Hearing ,030223 otorhinolaryngology ,010301 acoustics ,media_common ,Aged, 80 and over ,medicine.diagnostic_test ,Phonetics ,Middle Aged ,QUIET ,Speech Perception ,Audiometry, Pure-Tone ,Female ,Syllable ,Psychology ,Perceptual Masking ,Speech Communication ,Adult ,medicine.medical_specialty ,Speech perception ,Voice Quality ,Acoustics ,media_common.quotation_subject ,Hearing Loss, Sensorineural ,Speech Acoustics ,03 medical and health sciences ,Arts and Humanities (miscellaneous) ,Perception ,0103 physical sciences ,medicine ,Humans ,Correction of Hearing Impairment ,Aged ,Speech Intelligibility ,Auditory Threshold ,Electric Stimulation ,Noise ,Persons With Hearing Impairments ,Acoustic Stimulation ,Audiometry ,Audiometry, Speech - Abstract
The abilities of 59 adult hearing-aid users to hear phonetic details were assessed by measuring their abilities to identify syllable constituents in quiet and in differing levels of noise (12-talker babble) while wearing their aids. The set of sounds consisted of 109 frequently occurring syllable constituents (45 onsets, 28 nuclei, and 36 codas) spoken in varied phonetic contexts by eight talkers. In nominal quiet, a speech-to-noise ratio (SNR) of 40 dB, scores of individual listeners ranged from about 23% to 85% correct. Averaged over the range of SNRs commonly encountered in noisy situations, scores of individual listeners ranged from about 10% to 71% correct. The scores in quiet and in noise were very strongly correlated, R = 0.96. This high correlation implies that common factors play primary roles in the perception of phonetic details in quiet and in noise. In other words, hearing-aid users' problems perceiving phonetic details in noise appear to be tied to their problems perceiving phonetic details in quiet, and vice versa.
- Published
- 2017
30. JMY protein, a regulator of P53 and cytoplasmic actin filaments, is expressed in normal and neoplastic tissues
- Author
-
Omanma Adighibe, N B La Thangue, K C Gatter, R Leek, Helen Turley, Amanda S. Coutts, Adrian L. Harris, and Francesco Pezzella
- Subjects
Pathology ,medicine.medical_specialty ,Blotting, Western ,Motility ,Biology ,Pathology and Forensic Medicine ,Antibody Specificity ,Neoplasms ,medicine ,Humans ,Nuclear protein ,Cytoskeleton ,Molecular Biology ,Actin ,Oncogene ,Antibodies, Monoclonal ,Nuclear Proteins ,Cell Biology ,General Medicine ,Transfection ,Actin cytoskeleton ,Immunohistochemistry ,Cell biology ,Actin Cytoskeleton ,Tissue Array Analysis ,Cytoplasm ,MCF-7 Cells ,Trans-Activators ,Tumor Suppressor Protein p53 ,HeLa Cells - Abstract
JMY is a p300-binding protein with a dual action: by enhancing P53 transcription in the nucleus, it plays an important role in the cellular response to DNA damage, while by promoting actin filament assembly in the cytoplasm, it induces cell motility in vitro. Therefore, it has been argued that, depending on the cellular setting, it might act either as a tumor suppressor or as an oncogene. In order to further determine its relevance to human cancer, we produced the monoclonal antibody HMY 117 against a synthetic peptide from the N-terminus region and characterized it on two JMY-positive cell lines, MCF7 and HeLa, wild type and after transfection with siRNA to switch off JMY expression. JMY was expressed in normal tissues and heterogeneously in different tumor types, with close correlation between cytoplasmic and nuclear expression. Most noticeable was the loss of expression in some infiltrating carcinomas, compared to normal tissue and in situ carcinomas of the breast, which is consistent with a putative suppressor role. However, because expression of JMY in lymph node metastases was higher than in primary colorectal and head and neck carcinomas, it might also have oncogenic properties, depending on the cellular context, by increasing motility and metastatic potential.
- Published
- 2014
31. Spectrotemporal modulation sensitivity for hearing-impaired listeners: Dependence on carrier center frequency and the relationship to speech intelligibility
- Author
-
Joshua G. W. Bernstein, Golbarg Mehraei, Marjorie R. Leek, and Frederick J. Gallun
- Subjects
Psychological Acoustics [66] ,Adult ,Male ,medicine.medical_specialty ,Sound Spectrography ,Time Factors ,Speech perception ,Acoustics and Ultrasonics ,Acoustics ,Ripple ,Audiology ,Young Adult ,Audiometry ,Arts and Humanities (miscellaneous) ,medicine ,Humans ,Psychoacoustics ,Center frequency ,Physics ,medicine.diagnostic_test ,Speech Intelligibility ,Auditory Threshold ,Audiogram ,Middle Aged ,Noise ,Persons With Hearing Impairments ,Acoustic Stimulation ,Modulation ,Speech Perception ,Female ,Cues ,Comprehension ,Perceptual Masking - Abstract
Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4-32 Hz), spectral ripple density [0.5-4 cycles/octave (c/o)] and carrier center frequency (500-4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4-12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements.
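The STM stimulus described above is a ripple that drifts jointly in log-frequency (cycles/octave) and time (Hz), imposed on an octave-band noise carrier. As a minimal sketch of a standard spectrotemporal ripple gain function (the study's exact stimulus-generation procedure is not reproduced here; parameter values and names are illustrative):

```python
import math


def stm_gain_db(freq_hz, t_s, rate_hz=4.0, density_c_per_oct=2.0,
                depth_db=20.0, f_ref_hz=500.0):
    """Spectrotemporal-ripple gain, in dB, at frequency `freq_hz` and time `t_s`.

    The ripple is sinusoidal in log-frequency (density in cycles/octave) and
    in time (rate in Hz); applying this gain to an octave-band noise carrier
    yields an STM stimulus of the general kind described in the abstract.
    """
    octaves = math.log2(freq_hz / f_ref_hz)
    phase = 2.0 * math.pi * (rate_hz * t_s + density_c_per_oct * octaves)
    return depth_db * math.sin(phase)
```

Modulation-depth detection thresholds are then found by adaptively reducing `depth_db` until the rippled carrier can no longer be distinguished from an unmodulated one.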
- Published
- 2014
32. Sentence perception in noise by hearing-aid users: Relations with syllable-constituent perception and the use of context
- Author
-
James D. Miller, Sandra Gordon-Salant, David J. Wark, Jayne B. Ahlstrom, Judy R. Dubno, Charles S. Watson, Marjorie R. Leek, and Pamela E. Souza
- Subjects
Hearing aid ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,medicine.medical_treatment ,Speech recognition ,Perception ,media_common.quotation_subject ,QUIET ,medicine ,Psychology ,Sentence ,media_common - Abstract
Masked sentence perception by hearing-aid users is strongly correlated with three variables: (1) the ability to hear phonetic details as estimated by the identification of syllable constituents in quiet or in noise; (2) the ability to use situational context that is extrinsic to the speech signal; and (3) the ability to use inherent context provided by the speech signal itself. These conclusions are supported by the performance of 57 hearing-aid users in the identification of 109 syllable constituents presented in a background of 12-talker babble and the identification of words in naturally spoken sentences presented in the same babble. A mathematical model is offered that allows calculation of an individual listener's sentence scores from estimates of context utilization and the ability to identify syllable constituents. When the identification accuracy of syllable constituents is greater than about 55%, individual differences in context utilization play a minor role in determining the sentence scores. As syllable constituent scores fall below 55%, individual differences in context utilization play an increasingly greater role in determining sentence scores. When a listener's syllable constituent score in quiet is above about 71%, that listener's score in noise will be above about 55%. [Watson and Miller are shareholders in Communication Disorders Technology, Inc., and may profit from sales of the software used in this study.]
- Published
- 2019
33. Introduction: Auditory Models of Suprathreshold Distortion in Persons with Impaired Hearing
- Author
-
Van Summers, Walden Be, Ken W. Grant, and Marjorie R. Leek
- Subjects
Speech and Hearing ,medicine.medical_specialty ,business.industry ,Distortion ,Medicine ,Audiology ,business - Published
- 2013
34. Suprathreshold Auditory Processing and Speech Perception in Noise: Hearing-Impaired and Normal-Hearing Listeners
- Author
-
Sarah M. Theodoroff, Matthew J. Makashay, Marjorie R. Leek, and Van Summers
- Subjects
medicine.medical_specialty ,Speech perception ,Auditory masking ,medicine.diagnostic_test ,Hearing loss ,Speech recognition ,Auditory phonetics ,Audiology ,Speech and Hearing ,Noise ,medicine ,Psychoacoustics ,medicine.symptom ,Audiometry ,Psychology ,Frequency modulation - Abstract
Background: It is widely believed that suprathreshold distortions in auditory processing contribute to the speech recognition deficits experienced by hearing-impaired (HI) listeners in noise. Damage to outer hair cells and attendant reductions in peripheral compression and frequency selectivity may contribute to these deficits. In addition, reduced access to temporal fine structure (TFS) information in the speech waveform may play a role. Purpose: To examine how measures of peripheral compression, frequency selectivity, and TFS sensitivity relate to speech recognition performance by HI listeners. To determine whether distortions in processing reflected by these psychoacoustic measures are more closely associated with speech deficits in steady-state or modulated noise. Research Design: Normal-hearing (NH) and HI listeners were tested on tasks examining frequency selectivity (notched-noise task), peripheral compression (temporal masking curve task), and sensitivity to TFS information (frequency modulation [FM] detection task) in the presence of random amplitude modulation. Performance was tested at 500, 1000, 2000, and 4000 Hz at several presentation levels. The same listeners were tested on sentence recognition in steady-state and modulated noise at several signal-to-noise ratios. Study Sample: Ten NH and 18 HI listeners were tested. NH listeners ranged in age from 36 to 80 yr (M = 57.6). For HI listeners, ages ranged from 58 to 87 yr (M = 71.8). Results: Scores on the FM detection task at 1 and 2 kHz were significantly correlated with speech scores in both noise conditions. Frequency selectivity and compression measures were not as clearly associated with speech performance. Speech Intelligibility Index (SII) analyses indicated only small differences in speech audibility across subjects for each signal-to-noise ratio (SNR) condition that would predict differences in speech scores no greater than 10% at a given SNR. 
Actual speech scores varied by as much as 80% across subjects. Conclusions: The results suggest that distorted processing of audible speech cues was a primary factor accounting for differences in speech scores across subjects and that reduced ability to use TFS cues may be an important component of this distortion. The influence of TFS cues on speech scores was comparable in steady-state and modulated noise. Speech recognition was not related to audibility, represented by the SII, once high-frequency sensitivity differences across subjects (beginning at 5 kHz) were removed statistically. This might indicate that high-frequency hearing loss is associated with distortions in processing in lower-frequency regions.
- Published
- 2013
35. Concurrent measures of contralateral suppression of transient-evoked otoacoustic emissions and of auditory steady-state responsesa)
- Author
-
Ian B. Mertes and Marjorie R. Leek
- Subjects
medicine.medical_specialty ,Steady state (electronics) ,Acoustics and Ultrasonics ,Hearing loss ,Acoustics ,Efferent ,Audiology ,01 natural sciences ,03 medical and health sciences ,0302 clinical medicine ,Arts and Humanities (miscellaneous) ,0103 physical sciences ,medicine ,otorhinolaryngologic diseases ,Sound pressure ,010301 acoustics ,Cochlea ,Extramural ,business.industry ,Broadband noise ,medicine.disease ,Psychological and Physiological Acoustics ,Sensorineural hearing loss ,medicine.symptom ,business ,030217 neurology & neurosurgery - Abstract
Contralateral suppression of otoacoustic emissions (OAEs) is frequently used to assess the medial olivocochlear (MOC) efferent system, and may have clinical utility. However, OAEs are weak or absent in hearing-impaired ears, so little is known about MOC function in the presence of hearing loss. A potential alternative measure is contralateral suppression of the auditory steady-state response (ASSR) because ASSRs are measurable in many hearing-impaired ears. This study compared contralateral suppression of both transient-evoked otoacoustic emissions (TEOAEs) and ASSRs in a group of ten primarily older adults with either normal hearing or mild sensorineural hearing loss. Responses were elicited using 75-dB peak sound pressure level clicks. The MOC was activated using contralateral broadband noise at 60 dB sound pressure level. Measurements were made concurrently to ensure a consistent attentional state between the two measures. The magnitude of contralateral suppression of ASSRs was significantly larger than contralateral suppression of TEOAEs. Both measures usually exhibited high test–retest reliability within a session. However, there was no significant correlation between the magnitude of contralateral suppression of TEOAEs and of ASSRs. Further work is needed to understand the role of the MOC in contralateral suppression of ASSRs.
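Contralateral suppression as used here is the decibel change in response magnitude measured without versus with the contralateral noise elicitor. A minimal sketch of that computation (function and variable names are illustrative; magnitudes are in linear units):

```python
import math


def contralateral_suppression_db(mag_without_noise, mag_with_noise):
    """Contralateral suppression as the dB change in response magnitude.

    Expressed as the decibel change in TEOAE (or ASSR) magnitude measured
    without versus with contralateral broadband noise; positive values
    indicate that the noise suppressed the response.
    """
    return 20.0 * math.log10(mag_without_noise / mag_with_noise)


# Illustrative: a response magnitude dropping from 1.0 to 0.9 under
# contralateral noise corresponds to roughly 0.9 dB of suppression.
suppression = contralateral_suppression_db(1.0, 0.9)
```

Measuring both TEOAE and ASSR suppression from the same recording run, as the study does, keeps attentional state constant across the two metrics being compared.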
- Published
- 2016
36. The prognostic value of quantitative angiogenesis in breast cancer and role of adhesion molecule expression in tumor endothelium
- Author
-
R Leek, R. M. Whitehouse, Gareth D. H. Turner, K C Gatter, Stephen B. Fox, and Adrian L. Harris
- Subjects
Cancer Research ,Pathology ,medicine.medical_specialty ,Endothelium ,Angiogenesis ,Immunoglobulins ,Breast Neoplasms ,Metastasis ,Neovascularization ,Breast cancer ,medicine ,Humans ,Breast ,Analysis of Variance ,ICAM-1 ,Neovascularization, Pathologic ,Cell adhesion molecule ,business.industry ,Middle Aged ,Prognosis ,medicine.disease ,Immunohistochemistry ,Survival Analysis ,Phenotype ,medicine.anatomical_structure ,Oncology ,Selectins ,Female ,medicine.symptom ,business ,Cell Adhesion Molecules ,Selectin ,Follow-Up Studies - Abstract
Angiogenesis is the formation of new capillaries from the existing vascular network and is essential for tumor growth and metastasis. Increased microvessel density in breast cancer is associated with lymph node metastasis and reduced survival. We assessed tumor vascularity in 211 breast carcinomas using a more rapid technique based on a Chalkley point eyepiece graticule. Using this method, we confirmed a significant difference in overall survival between patients stratified by Chalkley count in both univariate (p = 0.02) and multivariate (p = 0.05) analyses. Since studies have suggested that cell adhesion molecules (CAMs) might be important in the angiogenic process, and interaction of neoplastic cells with this neovasculature is a significant step in tumor metastasis, we also examined the expression of CAMs in a subset of these tumors (n = 64). Using immunohistochemistry, we observed widespread and intense staining on the endothelium of tumor-associated vessels for PECAM (100%), ICAM-1 (69%), and E- and P-selectins (52% and 59% of cases, respectively). Endothelial expression of the selectins was more prominent at the tumor periphery. Immunoreactivity for ICAM-1 (34%), PECAM (1.6%), and E- and P-selectins (7% and 37% of cases, respectively) was also observed on the neoplastic element of the tumors.
- Published
- 2016
37. Using High-Performance Computing to Support Water Resource Planning: A Workshop Demonstration of Real-Time Analytic Facilitation for the Colorado River Basin
- Author
-
Deborah W. May, James R. Leek, James Syme, David G. Groves, and Robert J. Lempert
- Subjects
geography ,geography.geographical_feature_category ,Operations research ,Process (engineering) ,media_common.quotation_subject ,Drainage basin ,Supercomputer ,Deliberation ,Robust decision-making ,Engineering management ,Facilitation ,Environmental science ,Water resource planning ,Natural resource management ,media_common - Abstract
In November 2014, the RAND Corporation and Lawrence Livermore National Laboratory organized a workshop to explore the use of high-performance computing to support real-time analytic facilitation of multiscenario natural resource management planning. This document summarizes results and attendees' observations regarding the benefits and challenges associated with using high-performance computing in such a process of deliberation with analysis.
- Published
- 2016
38. Vowel Identification by Amplitude and Phase Contrast
- Author
-
Marjorie R. Leek, Anna C. Diedesch, Frederick J. Gallun, and Michelle R. Molis
- Subjects
Adult ,medicine.medical_specialty ,Speech perception ,Speech recognition ,Phase (waves) ,Audiology ,behavioral disciplines and activities ,Phonetics ,Vowel ,otorhinolaryngologic diseases ,medicine ,Humans ,Hearing Loss ,Auditory Threshold ,Contrast (music) ,Middle Aged ,Sensory Systems ,Amplitude ,Formant ,Acoustic Stimulation ,Otorhinolaryngology ,Speech Discrimination Tests ,Speech Perception ,Harmonic ,Psychology ,psychological phenomena and processes ,Research Article - Abstract
Vowel identification is largely dependent on listeners’ access to the frequency of two or three peaks in the amplitude spectrum. Earlier work has demonstrated that, whereas normal-hearing listeners can identify harmonic complexes with vowel-like spectral shapes even with very little amplitude contrast between “formant” components and remaining harmonic components, listeners with hearing loss require greater amplitude differences. This is likely the result of the poor frequency resolution that often accompanies hearing loss. Here, we describe an additional acoustic dimension for emphasizing formant versus non-formant harmonics that may supplement amplitude contrast information. The purpose of this study was to determine whether listeners were able to identify “vowel-like” sounds using temporal (component phase) contrast, which may be less affected by cochlear loss than spectral cues, and whether overall identification improves when congruent temporal and spectral information are provided together. Five normal-hearing and five hearing-impaired listeners identified three vowels over many presentations. Harmonics representing formant peaks were varied in amplitude, phase, or a combination of both. In addition to requiring less amplitude contrast, normal-hearing listeners could accurately identify the sounds with less phase contrast than required by people with hearing loss. However, both normal-hearing and hearing-impaired groups demonstrated the ability to identify vowel-like sounds based solely on component phase shifts, with no amplitude contrast information, and they also showed improved performance when congruent phase and amplitude cues were combined. For nearly all listeners, the combination of spectral and temporal information improved identification in comparison to either dimension alone.
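A harmonic complex of the kind described, with "formant" harmonics set apart from the rest by an amplitude increment and/or a starting-phase shift, can be sketched as follows. The function name, choice of formant harmonic indices, and parameter values are illustrative, not the study's:

```python
import math


def vowel_like_sample(t_s, f0_hz=100.0, n_harmonics=30,
                      formant_harmonics=(3, 4, 12, 13),
                      amp_contrast_db=0.0, phase_shift_rad=0.0):
    """One time sample of a harmonic complex with emphasized 'formant' harmonics.

    Harmonics listed in `formant_harmonics` receive an amplitude increment
    (`amp_contrast_db`) and/or a starting-phase shift (`phase_shift_rad`)
    relative to the remaining harmonics, mirroring the two cues (spectral
    and temporal contrast) manipulated in the study.
    """
    sample = 0.0
    for k in range(1, n_harmonics + 1):
        is_formant = k in formant_harmonics
        amp = 10.0 ** (amp_contrast_db / 20.0) if is_formant else 1.0
        phase = phase_shift_rad if is_formant else 0.0
        sample += amp * math.sin(2.0 * math.pi * k * f0_hz * t_s + phase)
    return sample
```

With `amp_contrast_db` at zero and a nonzero `phase_shift_rad`, the spectral envelope is flat and only the temporal (component-phase) cue distinguishes the formant harmonics, which is the phase-only condition described above.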
- Published
- 2012
39. Neural Encoding and Perception of Speech Signals in Informational Masking
- Author
-
Michelle R. Molis, Curtis J. Billings, Marjorie R. Leek, and Keri O’Connell Bennett
- Subjects
Adult ,Male ,Speech perception ,media_common.quotation_subject ,Speech recognition ,Perceptual Masking ,Signal-To-Noise Ratio ,behavioral disciplines and activities ,Article ,Young Adult ,Speech and Hearing ,Discrimination, Psychological ,Signal-to-noise ratio ,Phonetics ,Event-related potential ,Perception ,Reaction Time ,otorhinolaryngologic diseases ,Humans ,media_common ,Event-Related Potentials, P300 ,Noise ,Acoustic Stimulation ,Otorhinolaryngology ,Pattern Recognition, Physiological ,QUIET ,Evoked Potentials, Auditory ,Speech Perception ,Female ,Psychology ,Psychomotor Performance ,psychological phenomena and processes - Abstract
Objective To investigate the contributions of energetic and informational masking to neural encoding and perception in noise, using oddball discrimination and sentence recognition tasks. Design P3 auditory evoked potential, behavioral discrimination, and sentence recognition data were recorded in response to speech and tonal signals presented to nine normal-hearing adults. Stimuli were presented at a signal to noise ratio of -3 dB in four background conditions: quiet, continuous noise, intermittent noise, and four-talker babble. Results Responses to tonal signals were not significantly different for the three maskers. However, responses to speech signals in the four-talker babble resulted in longer P3 latencies, smaller P3 amplitudes, poorer discrimination accuracy, and longer reaction times than in any of the other conditions. Results also demonstrate significant correlations between physiological and behavioral data. As latency of the P3 increased, reaction times also increased and sentence recognition scores decreased. Conclusion The data confirm a differential effect of masker type on the P3 and behavioral responses and present evidence of interference by an informational masker to speech understanding at the level of the cortex. Results also validate the use of the P3 as a useful measure to demonstrate physiological correlates of informational masking.
- Published
- 2012
40. Evaluation of Speech-Perception Training for Hearing Aid Users: A Multisite Study in Progress
- Author
-
Charles S. Watson, James D. Miller, Judy R. Dubno, and Marjorie R. Leek
- Subjects
Hearing aid ,medicine.medical_specialty ,Speech perception ,medicine.medical_treatment ,media_common.quotation_subject ,Cognition ,Audiology ,Article ,Speech and Hearing ,Noise ,QUIET ,Perception ,medicine ,Narrative ,Syllable ,Psychology ,media_common - Abstract
Following an overview of theoretical issues in speech-perception training and of previous efforts to enhance hearing aid use through training, a multisite study, designed to evaluate the efficacy of two types of computerized speech-perception training for adults who use hearing aids, is described. One training method focuses on the identification of 109 syllable constituents (45 onsets, 28 nuclei, and 36 codas) in quiet and in noise, and on the perception of words in sentences presented in various levels of noise. In a second type of training, participants listen to 6- to 7-minute narratives in noise and are asked several questions about each narrative. Two groups of listeners are trained, each using one of these types of training, performed in a laboratory setting. The training for both groups is preceded and followed by a series of speech-perception tests. Subjects listen in a sound field while wearing their hearing aids at their usual settings. The training continues over 15 to 20 visits, with subjects completing at least 30 hours of focused training with one of the two methods. The two types of training are described in detail, together with a summary of other perceptual and cognitive measures obtained from all participants.
- Published
- 2015
41. Erratum: Concurrent measures of contralateral suppression of transient-evoked otoacoustic emissions and of auditory steady-state responses [J. Acoust. Soc. Am. 140(3), 2027–2038 (2016)]
- Author
-
Marjorie R. Leek and Ian B. Mertes
- Subjects
Adult, Male, Aged, Middle Aged, Female, Humans, Acoustics and Ultrasonics, Errata, Otoacoustic Emissions (Spontaneous), Reproducibility of Results, Deafness, Cochlea, Acoustic Stimulation, Arts and Humanities (miscellaneous) - Abstract
Contralateral suppression of otoacoustic emissions (OAEs) is frequently used to assess the medial olivocochlear (MOC) efferent system, and may have clinical utility. However, OAEs are weak or absent in hearing-impaired ears, so little is known about MOC function in the presence of hearing loss. A potential alternative measure is contralateral suppression of the auditory steady-state response (ASSR) because ASSRs are measurable in many hearing-impaired ears. This study compared contralateral suppression of both transient-evoked otoacoustic emissions (TEOAEs) and ASSRs in a group of ten primarily older adults with either normal hearing or mild sensorineural hearing loss. Responses were elicited using 75-dB peak sound pressure level clicks. The MOC was activated using contralateral broadband noise at 60 dB sound pressure level. Measurements were made concurrently to ensure a consistent attentional state between the two measures. The magnitude of contralateral suppression of ASSRs was significantly larger than contralateral suppression of TEOAEs. Both measures usually exhibited high test-retest reliability within a session. However, there was no significant correlation between the magnitude of contralateral suppression of TEOAEs and of ASSRs. Further work is needed to understand the role of the MOC in contralateral suppression of ASSRs.
- Published
- 2017
42. Cortical Encoding of Signals in Noise: Effects of Stimulus Type and Recording Paradigm
- Author
-
Keri O. Bennett, Curtis J. Billings, Marjorie R. Leek, and Michelle R. Molis
- Subjects
Adult, Young Adult, Male, Female, Humans, Sound Spectrography, Speech perception, Speech recognition, Stimulus (physiology), Electroencephalography, Background noise, Speech and Hearing, Perception, Auditory system, Oddball paradigm, Signal Processing (Computer-Assisted), Acoustic Stimulation, Otorhinolaryngology, Quiet, Evoked Potentials (Auditory), Psychology, Perceptual Masking - Abstract
Objectives Perception-in-noise deficits have been demonstrated across many populations and listening conditions. Many factors contribute to successful perception of auditory stimuli in noise, including neural encoding in the central auditory system. Physiological measures such as cortical auditory-evoked potentials (CAEPs) can provide a view of neural encoding at the level of the cortex that may inform our understanding of listeners' abilities to perceive signals in the presence of background noise. To understand signal-in-noise neural encoding better, we set out to determine the effect of signal type, noise type, and evoking paradigm on the P1-N1-P2 complex. Design Tones and speech stimuli were presented to nine individuals in quiet and in three background noise types: continuous speech spectrum noise, interrupted speech spectrum noise, and four-talker babble at a signal-to-noise ratio of -3 dB. In separate sessions, CAEPs were evoked by a passive homogenous paradigm (single repeating stimulus) and an active oddball paradigm. Results The results for the N1 component indicated significant effects of signal type, noise type, and evoking paradigm. Although components P1 and P2 also had significant main effects of these variables, only P2 demonstrated significant interactions among these variables. Conclusions Signal type, noise type, and evoking paradigm all must be carefully considered when interpreting signal-in-noise evoked potentials. Furthermore, these data confirm the possible usefulness of CAEPs as an aid to understand perception-in-noise deficits.
- Published
- 2011
43. Development of a Computer-Based, Multi-media Hearing Loss Prevention Education Program for Veterans and Military Personnel
- Author
-
Marjorie R. Leek, Robert L. Folmer, Gabrielle H. Saunders, Serena M. Dann, Stephen A. Fausti, and Susan Griest
- Subjects
Hearing loss, Prevention education, Computer-based education, Audiology, Knowledge acquisition, Military personnel, Tinnitus - Abstract
Purpose: Noise-induced hearing loss and tinnitus are prevalent and costly problems for military personnel and Veterans. To reduce the prevalence and burden of these conditions, the Department of Defense and the Department of Veterans Affairs are working together to develop an interactive, computer-based, multimedia hearing loss prevention education program that can be delivered at military bases, primary care or other medical settings. Method: One participant at a time interacts with the program inside a sound-attenuated enclosure that is large enough for wheelchair access. A computer touch screen allows participants to select among a variety of activities, including a self-administered screening test of high-frequency hearing; learning why, when, and how to protect hearing; learning how hearing works and how loud sounds damage hearing; learning how sound intensity is measured and which sounds are too loud; listening to demonstrations of simulated hearing loss; learning how to select and use hearing protective devices; learning about tinnitus; and learning about hearing health care services available at each site. Results/Conclusions: The program will be made available to all Veterans, military personnel, and other members of the public through the internet and at medical centers throughout the country.
- Published
- 2010
44. Beyond Audibility: Hearing Loss and the Perception of Speech
- Author
-
Michelle R. Molis and Marjorie R. Leek
- Subjects
Speech and Hearing, Speech perception, Hearing loss, Perception, Auditory phonetics, Audiology, Psychology - Published
- 2009
45. Zinc presence in invasive ductal carcinoma of the breast and its correlation with oestrogen receptor status
- Author
-
Adrian M. Jubb, Michael J. Farquharson, R Leek, K. Geraki, A Al-Ebraheem, and Adrian L. Harris
- Subjects
Pathology, Carcinoma (Ductal, Breast), Zinc, Statistics as Topic, Tumor Markers (Biological), Biomarkers (Tumor), Models (Biological and Statistical), Humans, Female, Computer Simulation, Radiology, Nuclear Medicine and Imaging, Radiological and Ultrasound Technology, Oestrogen receptor, Receptors (Estrogen), Transporter, Efflux, Hormone - Abstract
Zinc is known to play an important role in many cellular processes, and the levels of zinc are controlled by specific transporters from the ZIP (SLC39A) influx transporter group and the ZnT (SLC30A) efflux transporter group. The distribution of zinc was measured in 59 samples of invasive ductal carcinoma of the breast using synchrotron radiation microprobe X-ray fluorescence facilities. The samples were formalin-fixed, paraffin-embedded tissue microarrays (TMAs), enabling a high throughput of samples and allowing us to correlate the distribution of trace metals with tumour cell distribution and, for the first time, important biological variables. The samples were divided into two classes, 34 oestrogen receptor positive (ER+ve) and 25 oestrogen receptor negative (ER-ve), based on quantitative immunohistochemistry assessment. The overall levels of zinc (i.e. in tumour and surrounding tissue) in the ER+ve samples were on average 60% higher than those in the ER-ve samples. The zinc levels were higher in the ER+ve tumour areas compared to the ER-ve tumour areas, with the mean levels in the ER+ve samples being approximately 80% higher than the mean ER-ve levels. However, the non-tumour tissue regions of the samples contained on average the same levels of zinc in both types of breast cancers. The relative levels of zinc in tumour areas of the tissue were compared with levels in areas of non-tumour surrounding tissue. There was a significant increase in zinc in the tumour regions of the ER+ve samples compared to the surrounding regions (P < 0.001) and a non-significant increase in the ER-ve samples. When the increase in zinc in the tumour regions was expressed as a percentage of the surrounding non-tumour tissue zinc level in the same sample, a significant difference between the ER+ve and ER-ve samples was found (P < 0.01).
- Published
- 2009
46. Lactate Dehydrogenase 5 Expression in Squamous Cell Head and Neck Cancer Relates to Prognosis following Radical or Postoperative Radiotherapy
- Author
-
Michael I. Koukourakis, Stuart Winter, Efthimios Sivridis, Alexandra Giatromanolaki, Adrian L. Harris, and R Leek
- Subjects
Adult, Middle Aged, Aged, Male, Female, Humans, Cancer Research, Pathology, Lactate dehydrogenase, L-Lactate Dehydrogenase, Lactate Dehydrogenase 5, Isoenzymes, Anaerobic glycolysis, Carcinoma (Squamous Cell), Head and Neck Neoplasms, Head and neck cancer, Hypoxia-Inducible Factor 1 (alpha Subunit), Immunohistochemistry, Radiation therapy, Prognosis, Multivariate Analysis, Oncology, General Medicine - Abstract
Objectives: We assessed the expression and the prognostic role of lactate dehydrogenase 5 (LDH5, the major LDH isoenzyme involved in anaerobic glycolysis) in patients with squamous cell head and neck cancer (SCHNC). Methods: LDH5 was assessed immunohistochemically in whole tissue sections from 141 patients with SCHNC. Of these, 102 were subjected to surgery with (90 patients) or without (12 patients) postoperative radiotherapy (group A), while 39 patients were treated with radical radiotherapy (group B). Results: Mixed nuclear/cytoplasmic LDH5 expression was detected in 72.5% of group A and 61.5% of group B patients. This was significantly related to T4 stage (p = 0.04) and hypoxia-inducible factor-1α (HIF-1α) expression (p = 0.002). In group A, high LDH5 was linked with poorer distant metastasis-free survival (p = 0.01) and disease-specific overall survival (OS; p = 0.009). In multivariate analysis, LDH5 (p = 0.002) and HIF-1α (p = 0.01) were independently linked with distant metastasis. LDH5 was also linked with death events (p = 0.005). In group B, high LDH5 expression was significantly associated with poorer local relapse-free survival (p = 0.009) and OS (p = 0.01). In multivariate analysis, only T stage was a significant predictor of death events (p = 0.04). Conclusions: LDH5 is highly expressed in SCHNC and is linked with local relapse, survival and distant metastasis, suggesting that LDH5 is a marker of radioresistance and a target for therapeutic interventions.
- Published
- 2009
47. Prognostic significance of microvessel density and other variables in Japanese and British patients with primary invasive breast cancer
- Author
-
Graham Steers, Leticia Campo, S Kameoka, K C Gatter, Takao Kato, Francesco Pezzella, Helen Turley, R Leek, T Nishikawa, Adrian L. Harris, T Kimura, Helen Roberts, and M Kobayashi
- Subjects
Adult, Middle Aged, Aged, Aged (80 and over), Female, Humans, Cancer Research, Aging, International differences, Breast Neoplasms, Breast cancer, Microvessel density, Angiogenesis, Metastasis, Neoplasm Invasiveness, Neoplasm Staging, Japan, United Kingdom, Survival Rate, Hazard ratio, Prognosis, Molecular Diagnostics, Neovascularization (Pathologic), Combined Modality Therapy, Chemotherapy (Adjuvant), Surgery, Treatment Outcome, Oncology, Receptors (Estrogen), Receptors (Progesterone), Neoplasm Recurrence (Local) - Abstract
The purpose of this study is to investigate the associations of microvessel density (MVD) and other pathological variables with survival, and whether they accounted for survival differences between Japanese and British patients. One hundred seventy-three Japanese and 184 British patients were included in the study. British patients were significantly older (56.3 ± 11.4 years vs 52.5 ± 12.9 years; P
- Published
- 2007
48. Using High Performance Computing to Support Water Resource Planning
- Author
-
Deborah W. May, James R. Leek, David G. Groves, James Syme, and Robert J. Lempert
- Subjects
Iterative and incremental development, Decision support system, Computer science, Analytics, Management science, Simulation modeling, Water resource planning, Supercomputing, Robust decision-making - Abstract
In recent years, decision support modeling has embraced deliberation with analysis: an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time under uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.
- Published
- 2015
49. Roundtable Discussion: Pathophysiology of the Aging Auditory System
- Author
-
Dawn Konrad-Martin, Linda J. Hood, John H. Mills, and Marjorie R. Leek
- Subjects
Speech and Hearing, Auditory system, Stem cell, Psychology, Neuroscience, Pathophysiology - Published
- 2006
50. Perception of dissonance by people with normal hearing and sensorineural hearing loss
- Author
-
Jennifer B. Tufts, Marjorie R. Leek, and Michelle R. Molis
- Subjects
Adult, Middle Aged, Aged, Aged (80 and over), Male, Female, Humans, Acoustics and Ultrasonics, Hearing Loss (Sensorineural), Acoustics, Audiology, Psychoacoustics, Frequency separation, Octave, Perception, Consonance and dissonance, Acoustic Stimulation, Case-Control Studies, Auditory Perception, Psychology, Music - Abstract
The purpose of this study was to determine whether the perceived sensory dissonance of pairs of pure tones (PT dyads) or pairs of harmonic complex tones (HC dyads) is altered by sensorineural hearing loss. Four normal-hearing (NH) and four hearing-impaired (HI) listeners judged the sensory dissonance of PT dyads geometrically centered at 500 and 2000 Hz, and of HC dyads with fundamental frequencies geometrically centered at 500 Hz. The frequency separation of the members of the dyads varied from 0 Hz to just over an octave. In addition, frequency selectivity was assessed at 500 and 2000 Hz for each listener. Maximum dissonance was perceived at frequency separations smaller than the auditory filter bandwidth for both groups of listeners, but maximum dissonance for HI listeners occurred at a greater proportion of their bandwidths at 500 Hz than at 2000 Hz. Further, their auditory filter bandwidths at 500 Hz were significantly wider than those of the NH listeners. For both the PT and HC dyads, curves displaying dissonance as a function of frequency separation were more compressed for the HI listeners, possibly reflecting less contrast between their perceptions of consonance and dissonance compared with the NH listeners.
- Published
- 2005