30 results for "COMPLEX TONES"
Search Results
2. On the assessment of subjective response to tonal content of contemporary aircraft noise.
- Author
-
Torija, Antonio J., Roberts, Seth, Woodward, Robin, Flindell, Ian H., McKenzie, Andrew R., and Self, Rod H.
- Subjects
-
AIRCRAFT noise, TONALITY, PSYCHOACOUSTICS, NOISE pollution, AIRPLANE motors
- Abstract
The Effective Perceived Noise Level (EPNL) is the primary metric used for assessing subjective response to aircraft noise. The EPNL comprises calculation of the Perceived Noise Level (in PNdB), and takes into account flyover duration and the presence of pure tones to arrive at an adjusted EPNL value. With the presence of a single significant tone, EPNL has been found to be reasonably effective for the assessment of aircraft noise annoyance. Several authors have, however, suggested that EPNL is not capable of quantifying the subjective response to aircraft noise that contains multiple complex tones. The noise source referred to as "Buzz-saw" noise is a typical example of complex tonal content in aircraft noise with an important effect on both cabin and community noise impact. This paper presents the results of a series of listening tests in which participants were exposed to samples of aircraft noise with six variants of aircraft engines, assumed representative of the contemporary twin-engine aircraft fleet. On the basis of the findings of these listening tests, the Aures tonality method significantly outperforms the EPNL tone correction method when assessing the subjective response to aircraft noise during take-off with the presence of multiple complex tones. The participants reported 'high pitch' as one of the least preferable aircraft noise characteristics, and consequently, the psychoacoustic metric Sharpness was found to be another important contributor to subjective response to the noise of two specific aircraft engine groups (out of the six considered). The limitations of Aures tonality are discussed, in particular for aircraft noise with both a series of complex tones spaced evenly across the frequency spectrum with relatively even sound levels and less subjectively dominant single-frequency tones (compared to broadband noise).
In line with these limitations, further work is proposed for more effective assessment of subjective response to aircraft noise containing significant tonal content in the form of numerous closely spaced or other complex tones. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
3. Assessing the Possible Role of Frequency-Shift Detectors in the Ability to Hear Out Partials in Complex Tones
- Author
-
Moore, Brian C. J., Kenyon, Olivia, Glasberg, Brian R., Demany, Laurent, Moore, Brian C. J., editor, Patterson, Roy D., editor, Winter, Ian M., editor, Carlyon, Robert P., editor, and Gockel, Hedwig E, editor
- Published
- 2013
- Full Text
- View/download PDF
4. Effects of Sensorineural Hearing Loss on Temporal Coding of Harmonic and Inharmonic Tone Complexes in the Auditory Nerve
- Author
-
Kale, Sushrut, Micheyl, Christophe, Heinz, Michael G., Moore, Brian C. J., editor, Patterson, Roy D., editor, Winter, Ian M., editor, Carlyon, Robert P., editor, and Gockel, Hedwig E, editor
- Published
- 2013
- Full Text
- View/download PDF
5. Modelling the Distortion Produced by Cochlear Compression
- Author
-
Patterson, Roy D., Ives, D. Timothy, Walters, Thomas C., Lyon, Richard F., Moore, Brian C. J., editor, Patterson, Roy D., editor, Winter, Ian M., editor, Carlyon, Robert P., editor, and Gockel, Hedwig E, editor
- Published
- 2013
- Full Text
- View/download PDF
6. Speech and Nonspeech Production in the Absence of the Vocal Tract
- Author
-
Thompson, Megan
- Subjects
Bioengineering, Neurosciences, Complex tones, magnetoencephalography, Speech, Touchscreen, Vocal Tract
- Abstract
Sensory feedback plays a crucial role in speech production in both healthy individuals and in individuals with production-limited speech. However, the vast majority of research on the sensory consequences of speech production has focused on auditory feedback, while relatively little is known about the role of vocal tract somatosensory feedback. The body of this dissertation investigates speech and nonspeech production in the absence of vocal tract somatosensory feedback by training subjects to use a touchscreen-based speech production platform. Contact with the touchscreen results in instant playback of a vowel or complex tone dependent on the location selected. Because the axes of the touchscreen are associated with continuous F2 and F1 frequencies, every possible vowel within a wide formant range can be produced. Participants with no initial knowledge of the mapping of screen areas to playback sounds were asked to reproduce auditory vowel or complex tonal targets. Their responses were evaluated for accuracy and consistency, and in some cases participants underwent functional neuroimaging via MEG during training. Following training, participants were capable of using the touchscreen to produce speech and nonspeech sounds in the absence of the vocal tract. Their increased accuracy and consistency as they learned to produce speech and nonspeech sounds indicate the development of new audiomotor maps, as do significant changes in their task-based functional neuroimaging over the course of training. While participants demonstrated learning in both speech and nonspeech production, the neural and behavioral differences indicated different learning processes. We hypothesize that these differences can at least partially be attributed to the presence of an existing audiomotor network for producing speech sounds.
This would account for more rapid learning rates in the speech variants of the task, the presence of generalization in touchscreen-based speech production, and the neural similarities to vocal speech production in the touchscreen-produced speech sounds that presented differently in touchscreen-produced nonspeech sounds.
- Published
- 2018
7. Perception of noise-vocoded tone complexes: A time domain analysis based on an auditory filterbank model.
- Author
-
Shofner, William P., Morris, Hayley, and Mills, Mackenzie
- Subjects
-
AUTOCORRELATION (Statistics), VOCODER, FILTER banks, ABSOLUTE pitch, NOISE
- Abstract
When a wideband harmonic tone complex (wHTC) is passed through a noise vocoder, the resulting sounds can have spectra with large peak-to-valley ratios, but little or no periodicity strength in the autocorrelation functions. We measured judgments of pitch strength for normal-hearing listeners for noise-vocoded wideband harmonic tone complexes (NV-wHTCs) relative to standard and anchor stimuli. The standard was a 1-channel NV-wHTC and the anchor was either the unprocessed wHTC or an infinitely-iterated rippled noise (IIRN). Although there is variability among individuals, the magnitude judgment functions obtained with the IIRN anchor suggest different listening strategies. In order to gain some insight into possible listening strategies, test stimuli were analyzed at the output of an auditory filterbank model based on gammatone filters. The weak periodicity strengths of NV-wHTCs observed in the stimulus autocorrelation functions are augmented at the output of the gammatone filterbank model. Six analytical models of pitch strength were evaluated based on summary correlograms obtained from the gammatone filterbank. The results of the filterbank analysis suggest that, contrary to the weak or absent periodicity strengths in the stimulus domain, temporal cues contribute to pitch strength perception of noise-vocoded harmonic stimuli such that listeners' judgments of pitch strength reflect a nonlinear, weighted average of the temporal information between the fine structure and the envelope. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
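The stimulus-domain periodicity analysis described in the abstract above is easy to illustrate. The sketch below is my own construction (not the authors' code, and it omits their gammatone filterbank stage): it takes the normalized autocorrelation value at the lag of the fundamental period as a periodicity-strength index, and contrasts a wideband harmonic tone complex with noise.

```python
import numpy as np

def periodicity_strength(x, fs, f0):
    """Normalized autocorrelation at the lag of the fundamental period."""
    x = x - x.mean()
    lag = int(round(fs / f0))
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    acf = acf / acf[0]  # normalize so the zero-lag value is 1
    return acf[lag]

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
f0 = 100
# Wideband harmonic tone complex (wHTC): equal-amplitude harmonics up to 4 kHz
whtc = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, 41))
noise = np.random.default_rng(0).standard_normal(len(t))

print(periodicity_strength(whtc, fs, f0))   # near 1: strong periodicity
print(periodicity_strength(noise, fs, f0))  # near 0: no periodicity
```

A 1-channel noise vocoder would push the wHTC's value toward the noise case, which is the starting point of the paper's argument.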
8. Temporal Pitch Sensitivity in an Animal Model: Psychophysics and Scalp Recordings
- Author
-
Richardson, Matthew L, Guerit, Francois, Gransier, Robin, Wouters, Jan, Carlyon, Robert P, and Middlebrooks, John C
- Subjects
Science & Technology, INTRACOCHLEAR ELECTRICAL-STIMULATION, COCHLEAR-IMPLANT, Neurosciences, cochlear implant, cat, AUDITORY BRAIN-STEM, FREQUENCY-FOLLOWING RESPONSE, frequency following response, INFERIOR COLLICULUS NEURONS, acoustic change complex, harmonic complex, Otorhinolaryngology, RATE DISCRIMINATION, DIFFERENCE LIMENS, COMPLEX TONES, EVOKED-POTENTIALS, temporal pitch perception, Neurosciences & Neurology, Life Sciences & Biomedicine, NORMAL-HEARING
- Abstract
Cochlear implant (CI) users show limited sensitivity to the temporal pitch conveyed by electric stimulation, contributing to impaired perception of music and of speech in noise. Neurophysiological studies in cats suggest that this limitation is due, in part, to poor transmission of the temporal fine structure (TFS) by the brainstem pathways that are activated by electrical cochlear stimulation. It remains unknown, however, how that neural limit might influence perception in the same animal model. For that reason, we developed non-invasive psychophysical and electrophysiological measures of temporal (i.e., non-spectral) pitch processing in the cat. Normal-hearing (NH) cats were presented with acoustic pulse trains consisting of band-limited harmonic complexes that simulated CI stimulation of the basal cochlea while removing cochlear place-of-excitation cues. In the psychophysical procedure, trained cats detected changes from a base pulse rate to a higher pulse rate. In the scalp-recording procedure, the cortical-evoked acoustic change complex (ACC) and brainstem-generated frequency following response (FFR) were recorded simultaneously in sedated cats for pulse trains that alternated between the base and higher rates. The range of perceptual sensitivity to temporal pitch broadly resembled that of humans but was shifted to somewhat higher rates. The ACC largely paralleled these perceptual patterns, validating its use as an objective measure of temporal pitch sensitivity. The phase-locked FFR, in contrast, showed strong brainstem encoding for all tested pulse rates. These measures demonstrate the cat's perceptual sensitivity to pitch in the absence of cochlear-place cues and may be valuable for evaluating neural mechanisms of temporal pitch perception in the feline animal model of stimulation by a CI or novel auditory prostheses. 
Published in JARO: Journal of the Association for Research in Otolaryngology, vol. 23, no. 4, pp. 491-512.
- Published
- 2022
9. Electrophysiological and behavioural processing of complex acoustic cues.
- Author
-
Mathew, Abin Kuruvilla, Purdy, Suzanne C., Welch, David, Pontoppidan, Niels H., and Rønne, Filip Marchman
- Subjects
-
ELECTROPHYSIOLOGY, ABSOLUTE pitch, HEARING impaired, INFORMATION processing, ACOUSTIC stimulation, AUDITORY perception
- Abstract
Objectives: To examine behavioural and neural processing of pitch cues in adults with normal hearing (NH) and adults with sensorineural hearing loss (SNHL). Methods: All participants completed a test of behavioural sensitivity to pitch cues using the TFS1 test (Moore and Sek, 2009a). Cortical potentials (N1, P2 and acoustic change complex) were recorded in response to frequency shifted (deltaF) tone complexes in an 'ABA' pattern. Results: The SNHL group performed more poorly than the NH group for the TFS1 test. P2 was more reflective of pitch differences between the complexes than N1. The presence of acoustic change complex in response to the TFS transitions in the ABA stimulus varied with deltaF. Acoustic change complex amplitudes were reduced for the group with SNHL compared to controls. Conclusion: Behavioural performance and cortical responses reflect pitch processing depending on the salience of pitch cues. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
10. Simultaneous Consonance in Music Perception and Composition
- Author
-
Peter M. C. Harrison and Marcus T. Pearce
- Subjects
Pleasure, Time Factors, VIRTUAL-PITCH, Culture, Musical, consonance, Models (Psychological), perception, INTERVALS, HARMONY PERCEPTION, TONAL CONSONANCE, Argument, Perception, Phenomenon, Cognitive dissonance, Humans, music, General Psychology, ACOUSTIC COMPONENT, Computational model, Recognition (Psychology), Articles, Musical tone, dissonance, CALCULATING SENSORY DISSONANCE, Salient, composition, COMPLEX TONES, Auditory Perception, PHASE SENSITIVITY, AUDITORY PERIPHERY, Psychology, Cognitive psychology
- Abstract
Simultaneous consonance is a salient perceptual phenomenon corresponding to the perceived pleasantness of simultaneously sounding musical tones. Various competing theories of consonance have been proposed over the centuries, but recently a consensus has developed that simultaneous consonance is primarily driven by harmonicity perception. Here we question this view, substantiating our argument by critically reviewing historic consonance research from a broad variety of disciplines, reanalyzing consonance perception data from 4 previous behavioral studies representing more than 500 participants, and modeling three Western musical corpora representing more than 100,000 compositions. We conclude that simultaneous consonance is a composite phenomenon that derives in large part from three phenomena: interference, periodicity/harmonicity, and cultural familiarity. We formalize this conclusion with a computational model that predicts a musical chord's simultaneous consonance from these three features, and release this model in an open-source R package, incon, alongside 15 other computational models also evaluated in this paper. We hope that this package will facilitate further psychological and musicological research into simultaneous consonance. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
- Published
- 2020
- Full Text
- View/download PDF
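The paper's incon package is written in R and bundles many models; as a rough flavour of the interference component alone, here is a Python sketch using Sethares' well-known parameterization of the Plomp-Levelt roughness curve. This is an assumption standing in for incon's actual interference models, with illustrative amplitudes and harmonic counts.

```python
import numpy as np

def pair_dissonance(f1, f2, a1, a2):
    """Sethares' parameterization of the Plomp-Levelt roughness curve
    for one pair of partials (an approximation, not incon's exact model)."""
    fmin, d = min(f1, f2), abs(f2 - f1)
    s = 0.24 / (0.021 * fmin + 19.0)  # scales the curve with register
    return min(a1, a2) * (np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d))

def interference(freqs, amps):
    """Total pairwise roughness of a set of partials."""
    total = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            total += pair_dissonance(freqs[i], freqs[j], amps[i], amps[j])
    return total

def chord_partials(f0s, n_harm=6):
    """Partials of harmonic complex tones with 1/k amplitude rolloff."""
    freqs = [f0 * k for f0 in f0s for k in range(1, n_harm + 1)]
    amps = [1.0 / k for _ in f0s for k in range(1, n_harm + 1)]
    return freqs, amps

fifth = interference(*chord_partials([261.6, 261.6 * 3 / 2]))
second = interference(*chord_partials([261.6, 261.6 * 16 / 15]))
print(second > fifth)  # the minor second is rougher than the perfect fifth
```

The paper's point is precisely that such an interference term alone is insufficient: periodicity/harmonicity and cultural familiarity contribute as well.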
11. The frequency following response (FFR) may reflect pitch-bearing information but is not a direct representation of pitch.
- Author
-
Gockel, Hedwig E., Carlyon, Robert P., Mehta, Anahita, and Plack, Christopher J.
- Abstract
The frequency following response (FFR), a scalp-recorded measure of phase-locked brainstem activity, is often assumed to reflect the pitch of sounds as perceived by humans. In two experiments, we investigated the characteristics of the FFR evoked by complex tones. FFR waveforms to alternating-polarity stimuli were averaged for each polarity and added, to enhance envelope, or subtracted, to enhance temporal fine structure information. In experiment 1, frequency-shifted complex tones, with all harmonics shifted by the same amount in Hertz, were presented diotically. Only the autocorrelation functions (ACFs) of the subtraction-FFR waveforms showed a peak at a delay shifted in the direction of the expected pitch shifts. This expected pitch shift was also present in the ACFs of the output of an auditory nerve model. In experiment 2, the components of a harmonic complex with harmonic numbers 2, 3, and 4 were presented either to the same ear ("mono") or the third harmonic was presented contralaterally to the ear receiving the even harmonics ("dichotic"). In the latter case, a pitch corresponding to the missing fundamental was still perceived. Monaural control conditions presenting only the even harmonics ("2 + 4") or only the third harmonic ("3") were also tested. Both the subtraction and the addition waveforms showed that (1) the FFR magnitude spectra for "dichotic" were similar to the sum of the spectra for the two monaural control conditions and lacked peaks at the fundamental frequency and other distortion products visible for "mono" and (2) ACFs for "dichotic" were similar to those for "2 + 4" and dissimilar to those for "mono." The results indicate that the neural responses reflected in the FFR preserve monaural temporal information that may be important for pitch, but provide no evidence for any additional processing over and above that already present in the auditory periphery, and do not directly represent the pitch of dichotic stimuli. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
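The add/subtract logic described in the abstract above can be demonstrated with a toy nonlinearity. In this sketch (my illustration; half-wave rectification merely stands in for neural transduction), summing "responses" to opposite stimulus polarities enhances envelope-related even-order components, while subtracting enhances the temporal fine structure.

```python
import numpy as np

fs = 10000
t = np.arange(0, 1, 1 / fs)  # 1 s -> FFT bins are spaced 1 Hz apart
f = 200
s = np.sin(2 * np.pi * f * t)  # one stimulus polarity

hwr = lambda x: np.maximum(x, 0.0)  # toy transduction nonlinearity
r_pos, r_neg = hwr(s), hwr(-s)      # responses to the two polarities

add = (r_pos + r_neg) / 2  # enhances envelope (even-order components)
sub = (r_pos - r_neg) / 2  # enhances temporal fine structure

spec_add = np.abs(np.fft.rfft(add - add.mean()))
spec_sub = np.abs(np.fft.rfft(sub))
print(np.argmax(spec_add))  # dominant component at 2f (envelope-related)
print(np.argmax(spec_sub))  # dominant component at f (fine structure)
```

With a rectifying nonlinearity, the sum is |s|/2 (period halved, energy at 2f) and the difference recovers s/2 (energy at f), mirroring the envelope- versus TFS-enhanced FFR waveforms.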
12. Waveform Circularity From Added Sawtooth and Square Wave Acoustical Signals.
- Author
-
FUGIEL, BOGUSŁAW
- Subjects
-
MUSIC -- Mathematics, ALGORITHMS, MUSICAL pitch, TONE color (Music theory), HARMONICS (Music theory)
- Abstract
An algorithm is given for building complex tones for which waveform circularity is shown and for which pitch circularity may be heard. The tones consist of sawtooth and square acoustical waves. It is proposed that a perceptual illusion, analogous to those described by Shepard (1964), can then be created. The signal may comprise an infinite number of harmonics, as in the case of natural sounds. Using the equal-tempered scale, an appropriate mathematical formula is given. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
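The abstract's specific sawtooth-plus-square construction is not reproduced here, but the underlying Shepard-style trick is easy to sketch: octave-spaced partials under a fixed log-frequency amplitude envelope, so that stepping the pitch class up by twelve semitones returns (very nearly) the same waveform. All parameter values below are illustrative assumptions, and sine partials stand in for the paper's sawtooth/square components.

```python
import numpy as np

def shepard_tone(pitch_class, dur=0.5, fs=16000, f_center=500.0, sigma=1.0):
    """Shepard-style circular tone: octave-spaced sine partials whose
    amplitudes follow a fixed bell curve over log-frequency, so the
    spectral envelope (and hence perceived 'height') stays put while
    the pitch class moves."""
    t = np.arange(int(dur * fs)) / fs
    f = 27.5 * 2 ** (pitch_class / 12.0)  # pitch class sets the partial grid
    tone = np.zeros_like(t)
    while f < fs / 2:
        w = np.exp(-0.5 * (np.log2(f / f_center) / sigma) ** 2)
        tone += w * np.sin(2 * np.pi * f * t)
        f *= 2  # next octave
    return tone / np.max(np.abs(tone))

# Twelve semitone steps complete one circuit of the pitch-class circle.
a, b = shepard_tone(0), shepard_tone(12)
print(np.max(np.abs(a - b)))  # tiny: the waveform has (almost) come back
```

The residual difference comes only from a near-zero-amplitude partial entering or leaving at the band edges, which is why the envelope must taper toward zero at both extremes for the illusion to work.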
13. Pitch, harmonicity and concurrent sound segregation: Psychoacoustical and neurophysiological findings
- Author
-
Micheyl, Christophe and Oxenham, Andrew J.
- Subjects
-
HARMONICS (Music theory), PSYCHOACOUSTICS, NEUROPHYSIOLOGY, EVOKED potentials (Electrophysiology), AUDITORY perception, HEARING
- Abstract
Harmonic complex tones are a particularly important class of sounds found in both speech and music. Although these sounds contain multiple frequency components, they are usually perceived as a coherent whole, with a pitch corresponding to the fundamental frequency (F0). However, when two or more harmonic sounds occur concurrently, e.g., at a cocktail party or in a symphony, the auditory system must separate harmonics and assign them to their respective F0s so that a coherent and veridical representation of the different sound sources is formed. Here we review both psychophysical and neurophysiological (single-unit and evoked-potential) findings, which provide some insight into how, and how well, the auditory system accomplishes this task. A survey of computational models designed to estimate multiple F0s and segregate concurrent sources is followed by a review of the empirical literature on the perception and neural coding of concurrent harmonic sounds, including vowels, as well as findings obtained using single complex tones with mistuned harmonics. [Copyright Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
14. Left hemisphere specialization for duration discrimination of musical and speech sounds
- Author
-
Brancucci, Alfredo, D’Anselmo, Anita, Martello, Federica, and Tommasi, Luca
- Subjects
-
DICHOTIC listening tests, HUMAN behavior, SOUNDS, CONSONANTS
- Abstract
Hemispheric asymmetries for processing the duration of non-verbal and verbal sounds were investigated in 60 right-handed subjects. Two dichotic tests with attention directed to one ear were used, one with complex tones and one with consonant-vowel syllables. Stimuli had three possible durations: 350, 500, and 650 ms. Subjects judged whether the duration of a probe was the same or different compared to the duration of the target presented before it. Target and probe were part of two dichotic pairs presented with a 1-s interstimulus interval and occurred on the same side. Dependent variables were reaction time and accuracy. Results showed a significant right ear advantage for both dependent variables with both complex tones and consonant-vowel syllables. This study provides behavioural evidence of a left hemisphere specialization for duration perception of both musical and speech sounds, in line with the current view based on a parameter-specific rather than domain-specific structuring of hemispheric perceptual asymmetries. [Copyright Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
15. Decrease of functional coupling between left and right auditory cortices during dichotic listening: An electroencephalography study
- Author
-
Brancucci, A., Babiloni, C., Vecchio, F., Galderisi, S., Mucci, A., Tecchio, F., Romani, G.L., and Rossini, P.M.
- Subjects
-
DIAGNOSIS of brain diseases, ELECTROENCEPHALOGRAPHY, ELECTROPHYSIOLOGY, VISUAL evoked response
- Abstract
The present study focused on functional coupling between human bilateral auditory cortices and on the possible influence of right over left auditory cortex during dichotic listening of complex non-verbal tones having near (competing) compared with distant (non-competing) fundamental frequencies. It was hypothesized that dichotic stimulation with competing tones would induce a decline of functional coupling between the two auditory cortices, as revealed by a decrease of electroencephalography coherence and an increase of the directed transfer function from right (specialized for the present stimulus material) to left auditory cortex. Electroencephalography was recorded from T3 and T4 scalp sites, overlying respectively left and right auditory cortices, and from the Cz scalp site (vertex) for control purposes. Event-related coherence between T3 and T4 scalp sites was significantly lower for all electroencephalography bands of interest during dichotic listening of competing than non-competing tone pairs. This was a specific effect, since event-related coherence did not differ in a monotic control condition. Furthermore, event-related coherence between T3 and Cz and between T4 and Cz scalp sites showed no significant effects. Conversely, the directed transfer function results showed negligible influence at group level of right over left auditory cortex during dichotic listening. These results suggest a decrease of functional coupling between bilateral auditory cortices during competing dichotic stimuli as a possible neural substrate for the lateralization of auditory stimuli during dichotic listening. [Copyright Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
16. Right hemisphere specialization for intensity discrimination of musical and speech sounds
- Author
-
Brancucci, Alfredo, Babiloni, Claudio, Rossini, Paolo Maria, and Romani, Gian Luca
- Subjects
-
AUDITORY perception, HUMAN behavior, SPEECH education, OHIO Tests of Articulation & Perception of Sounds
- Abstract
Sound intensity is the primary and most elementary feature of auditory signals. Its discrimination plays a fundamental role in different behaviours related to auditory perception such as sound source localization, motion detection, and recognition of speech sounds. This study was aimed at investigating hemispheric asymmetries for processing the intensity of complex tones and consonant-vowel syllables. Forty-four right-handed non-musicians were presented with two dichotic matching-to-sample tests with focused attention: one with complex tones of different intensities (musical test) and the other with consonant-vowel syllables of different intensities (speech test). Intensity differences (60, 70, and 80 dBA) were obtained by altering the gain of a synthesized harmonic tone (260 Hz fundamental frequency) and of a consonant-vowel syllable (/ba/) recorded from a natural voice. Dependent variables were accuracy and reaction time. Results showed a significant clear-cut left ear advantage in both tests for both dependent variables. A monaural control experiment ruled out possible attentional biases. This study provides behavioural evidence of a right hemisphere specialization for the perception of the intensity of musical and speech sounds in healthy subjects. [Copyright Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
17. Auditory streaming based on temporal structure in hearing-impaired listeners
- Author
-
Stainsby, Thomas H., Moore, Brian C. J., and Glasberg, Brian R.
- Subjects
-
ACOUSTIC streaming, HEARING impaired, ANALYSIS of variance, SOUND waves, STANDARD deviations
- Abstract
The influence of temporal cues on sequential stream segregation was investigated using five elderly hearing-impaired listeners. In experiment 1, an alternating pattern of A and B tones was used. Each tone was a harmonic complex with a 100-Hz fundamental, with one of three passbands (1250-2500, 1768-3636, or 2500-5000 Hz) and one of three component-phase relationships (cosine, alternating, or random). The complexes had an overall level of 96 dB SPL. The detection of a change in relative timing of the A and B tones was measured in a two-interval forced-choice paradigm. The sequence in one interval remained isochronous while the sequence in the other started isochronously but became increasingly irregular with the addition of a cumulative delay between the A and B tones. Component phase relationship and passband difference both had significant effects on the minimum detectable delay, indicating that temporal structure produced obligatory stream segregation. In experiment 2, subjects continuously reported whether tones presented in a 30-s ABA-ABA- sequence were perceived as segregated or integrated. Differences in component phase between A and B significantly increased perceived segregation, but passband did not. In conclusion, stream segregation due to differences in temporal structure is robust in elderly subjects with cochlear hearing loss and comparable to that found previously in young normally hearing subjects. [Copyright Elsevier]
- Published
- 2004
- Full Text
- View/download PDF
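The component-phase manipulation in experiment 1 above can be illustrated directly: with harmonics added in cosine phase the waveform envelope is maximally peaky, whereas random starting phases flatten it, and such envelope differences are one temporal cue for segregation. A minimal sketch with illustrative parameters (the study's alternating-phase condition and 96 dB SPL presentation are omitted):

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.2, 1 / fs)
f0 = 100
harmonics = range(13, 26)  # roughly the 1250-2500 Hz passband of experiment 1

def complex_tone(phases):
    """Harmonic complex with the given starting phase per component."""
    return sum(np.cos(2 * np.pi * f0 * k * t + p)
               for k, p in zip(harmonics, phases))

def crest(x):
    """Crest factor: peak amplitude over RMS, an index of envelope peakiness."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

rng = np.random.default_rng(1)
cosine = complex_tone([0.0] * 13)                      # cosine phase: peaky
random_ = complex_tone(rng.uniform(0, 2 * np.pi, 13))  # random phase: flatter
print(crest(cosine) > crest(random_))
```

Because all components carry the same power regardless of phase, the RMS is identical in both conditions; only the peakiness of the envelope changes, which is exactly why phase is a purely temporal cue here.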
18. Inhibition of auditory cortical responses to ipsilateral stimuli during dichotic listening: evidence from magnetoencephalography.
- Author
-
Brancucci, Alfredo, Babiloni, Claudio, Babiloni, Fabio, Galderisi, Silvana, Mucci, Armida, Tecchio, Franca, Zappasodi, Filippo, Pizzella, Vittorio, Romani, Gian Luca, and Rossini, Paolo Maria
- Subjects
-
AUDITORY cortex, DICHOTIC listening tests, AUDIOMETRY, MAGNETOENCEPHALOGRAPHY, AUDITORY evoked response, MAGNETIC fields
- Abstract
The present magnetoencephalography (MEG) study on auditory evoked magnetic fields (AEFs) was aimed at verifying whether during dichotic listening the contralateral auditory pathway inhibits the ipsilateral one, as suggested by behavioural and patient studies. Ten healthy subjects were given a randomized series of three complex tones (261, 293, and 391 Hz, 500 ms duration), which were delivered monotically and dichotically with different intensities (60, 70, or 80 dBA, A-weighted decibels). MEG data were recorded from the right auditory cortex. Results showed that the M100 amplitude over the right auditory cortex increased progressively when tones of increasing intensity were provided at the ipsilateral (right) ear. This effect on M100 was abolished when a concurrent tone of constant intensity was delivered dichotically at the contralateral (left) ear, suggesting that the contralateral pathway inhibited the ipsilateral one. The ipsilateral inhibition was present only when the contralateral tone's fundamental frequency was similar to that of the ipsilateral tone. It was proposed that the occlusion mechanism would be exerted in cortical auditory areas, as the dichotic effects were observed at the M100 but not the M50 component. This is the first evidence showing a neurophysiological inhibition driven by the contralateral auditory pathway over the ipsilateral one during dichotic listening. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
19. Mismatch negativity to single and multiple pitch-deviant tones in regular and pseudo-random complex tone sequences
- Author
-
Vaz Pato, M., Jones, S.J., Perez, N., and Sprague, L.
- Subjects
-
MUSICAL instruments, SOUND, MUSICAL notation
- Abstract
Objectives: To determine whether the process responsible for the mismatch negativity (MMN) might be involved in the analysis of temporal sound patterns for information. Methods: Synthesized musical instrument tones of ‘clarinet’ timbre were delivered in a continuous sequence at 16 tones/s, such that there was virtually no N1 potential to each individual tone. The standard sequence comprised 4 or 5 adjacent notes of the diatonic scale, presented either as a regularly repeated, rising pattern or pseudo-randomly. The deviant stimuli were 1–5 consecutive tones of higher pitch than the standards. Results: An MMN was evoked by a single deviant tone, 1 or 5 semitones above the pitch range of the standards. The response to the 5-semitone deviant was significantly larger (mean of 7.3 μV) when the standard pattern was regular as compared with pseudo-random. The MMN latency, on the other hand, was only influenced by the magnitude of pitch deviation. A second MMN was evoked by a second deviant tone, immediately (SOA 62.5 ms) following the first. Further consecutive MMNs were not consistently evoked. Conclusions: The large amplitude of these MMNs can be attributed to the use of complex tones, continuous presentation and a rapid rate of pitch changes, such that no waveform subtraction was required. Over and above the probability with which each individual tone occurs in the standard sequence, the mismatch process is influenced by its temporal structure, i.e. it can be regarded as a temporal pattern analyzer. Contrary to the findings of some other groups, we found that two consecutive deviants can evoke an MMN, even at high rates of presentation such that both occur within the postulated ‘temporal window of integration’ of ca. 170 ms. These findings suggest that the mismatch process might be involved in the extraction of sequential information from repetitive and non-repetitive sound patterns. [Copyright Elsevier]
- Published
- 2002
- Full Text
- View/download PDF
20. High-resolution frequency tuning but not temporal coding in the human cochlea
- Author
-
Christian Desloovere, Eric Verschooten, and Philip X. Joris
- Subjects
Life Sciences & Biomedicine - Other Topics, PITCH PERCEPTION, Male, High resolution, Monkeys, SPEECH-PERCEPTION, Macaque, Nerve Fibers, Short Reports, Hearing, Animal Cells, Medicine and Health Sciences, Biology (General), Mammals, Neurons, Coding Mechanisms, General Neuroscience, Eukaryota, Animal Models, Healthy Volunteers, Cochlea, Sound, Experimental Organism Systems, Vertebrates, Inner Ear, Female, Anatomy, Cellular Types, General Agricultural and Biological Sciences, Life Sciences & Biomedicine, Primates, Adult, Biochemistry & Molecular Biology, AUDITORY-NERVE, MARMOSET CALLITHRIX-JACCHUS, Research and Analysis Methods, Rodents, General Biochemistry, Genetics and Molecular Biology, NEURAL PHASE-LOCKING, GUINEA-PIG, Young Adult, OTOACOUSTIC EMISSIONS, Old World monkeys, Animals, Humans, Biology, Cochlear Nerve, Computational Neuroscience, Science & Technology, PATHOLOGICAL HUMAN, Organisms, Biology and Life Sciences, Computational Biology, Cell Biology, Macaca mulatta, Acoustic Stimulation, Ears, Cellular Neuroscience, COMPLEX TONES, Amniotes, Chinchillas, Animal Studies, Head, FINE-STRUCTURE, Neuroscience
- Abstract
Frequency tuning and phase-locking are two fundamental properties generated in the cochlea, enabling but also limiting the coding of sounds by the auditory nerve (AN). In humans, these limits are unknown, but high resolution has been postulated for both properties. Electrophysiological recordings from the AN of normal-hearing volunteers indicate that human frequency tuning, but not phase-locking, exceeds the resolution observed in animal models. Author summary: The coding of sounds by the cochlea depends on two primary properties: frequency selectivity, which refers to the ability to separate sounds into their different frequency components, and phase-locking, which refers to the neural coding of the temporal waveform of these components. These properties have been well characterized in animals using neurophysiological recordings from single neurons of the auditory nerve (AN), but this approach is not feasible in humans. As a result, there is considerable controversy as to how these two properties may differ between humans and the small animals typically used in neurophysiological studies. It has been proposed that humans excel both in frequency selectivity and in the range of frequencies over which they have phase-locking. We developed a technique to quantify these properties using mass potentials from the AN, recorded via the middle ear in human volunteers with normal hearing. We find that humans have unusually sharp frequency tuning but that the upper frequency limit of phase-locking is at best similar to, and more likely lower than, that of the nonhuman animals conventionally used in experiments.
- Published
- 2018
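Phase-locking of the kind discussed in the abstract above is conventionally quantified with the Goldberg-Brown vector-strength metric. The metric is not named in the abstract itself; the sketch below is a minimal illustration of how it behaves, with spike times and jitter values chosen purely for demonstration.

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Vector strength (Goldberg-Brown): 1 = perfect phase-locking, 0 = none."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)  # phase of each spike (radians)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(spike_times)

# Spikes locked to one fixed phase of a 500-Hz tone: one spike per cycle
f = 500.0
locked = np.arange(100) / f
jittered = locked + np.random.default_rng(0).normal(0, 0.4e-3, 100)
print(vector_strength(locked, f))    # ≈ 1 (perfect locking)
print(vector_strength(jittered, f))  # < 1: temporal jitter weakens phase-locking
```

As jitter grows relative to the stimulus period, vector strength falls toward zero, which is why an upper frequency limit of phase-locking appears: a fixed temporal jitter spans an ever-larger fraction of the cycle.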
21. Entracking as a Brain Stem Code for Pitch: The Butte Hypothesis
- Author
-
Joris, Philip X., van Dijk, P., Başkent, D., Gaudrain, E., de Kleine, E., Wagner, A., and Lanting, C.
- Subjects
Life Sciences & Biomedicine - Other Topics ,Science & Technology ,AUDITORY-NERVE ,COMPUTER-MODEL ,Neurosciences ,CAT ,VIRTUAL PITCH ,Research & Experimental Medicine ,NEURAL SYNCHRONIZATION ,Temporal ,Brain stem ,ANTEROVENTRAL COCHLEAR NUCLEUS ,Medicine, Research & Experimental ,Autocorrelation ,ITERATED RIPPLED NOISE ,COMPLEX TONES ,PHASE SENSITIVITY ,Neurosciences & Neurology ,Phase-locking ,Life Sciences & Biomedicine ,Biology ,TEMPORAL REPRESENTATION - Abstract
The basic nature of pitch is much debated. A robust code for pitch exists in the auditory nerve in the form of an across-fiber pooled interspike interval (ISI) distribution, which resembles the stimulus autocorrelation. An unsolved question is how this representation can be "read out" by the brain. A new view is proposed in which a known brain-stem property plays a key role in the coding of periodicity, referred to here as "entracking", a contraction of "entrained phase-locking". It is proposed that a scalar rather than vector code of periodicity exists by virtue of coincidence detectors that code the dominant ISI directly into spike rate through entracking. Perfect entracking means that a neuron fires one spike per stimulus-waveform repetition period, so that firing rate equals the repetition frequency. Key properties are invariance with SPL and generalization across stimuli. The main limitation of this code is the upper limit of firing (~500 Hz). It is proposed that entracking provides a periodicity tag which is superimposed on a tonotopic analysis: at low SPLs and fundamental frequencies > 500 Hz, a spectral or place mechanism codes for pitch. With increasing SPL the place code degrades but entracking improves, occurring first in neurons with low thresholds for the spectral components present. The prediction is that populations of entracking neurons, extended across characteristic frequency, form plateaus ("buttes") of firing rate tied to periodicity. Published in: Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing, vol. 894, pp. 347-354.
- Published
- 2016
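The premise of the abstract above, that the pooled ISI distribution resembles the stimulus autocorrelation, can be illustrated with a minimal autocorrelation periodicity estimator. This is an illustrative sketch, not the paper's model; the missing-fundamental test signal and search range are demonstration choices.

```python
import numpy as np

def acf_pitch(x, fs, fmin=50.0, fmax=500.0):
    """Estimate periodicity from the dominant autocorrelation peak, the
    representation the pooled interspike-interval distribution resembles."""
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)              # lag search range
    lag = lo + np.argmax(acf[lo:hi + 1])
    return fs / lag

fs = 16000
t = np.arange(fs // 10) / fs  # 100 ms
# Harmonic complex with a missing 200-Hz fundamental (harmonics 2-5 only)
x = sum(np.sin(2 * np.pi * 200 * h * t) for h in range(2, 6))
print(acf_pitch(x, fs))  # → 200.0
```

The estimator recovers the 200-Hz periodicity even though no energy is present at 200 Hz, mirroring the missing-fundamental pitch that an interval-based code accounts for.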
22. The Frequency Following Response (FFR) May Reflect Pitch-Bearing Information But is Not a Direct Representation of Pitch
- Author
-
Robert P. Carlyon, Anahita H. Mehta, Christopher J. Plack, and Hedwig E. Gockel
- Subjects
Sound localization ,Speech recognition ,Acoustics ,Guinea Pigs ,Models, Neurological ,Monaural ,01 natural sciences ,Article ,Dichotic Listening Tests ,03 medical and health sciences ,0302 clinical medicine ,dichotic presentation ,0103 physical sciences ,Evoked Potentials, Auditory, Brain Stem ,Animals ,Humans ,Sound Localization ,Pitch Perception ,010301 acoustics ,Cochlear Nerve ,Perceptual Distortion ,Dichotic listening ,Fundamental frequency ,Frequency following response ,complex tones ,Sensory Systems ,Otorhinolaryngology ,Acoustic Stimulation ,monaural temporal information ,Harmonic ,Missing fundamental ,Pitch shift ,Psychology ,030217 neurology & neurosurgery ,Brain Stem - Abstract
The frequency following response (FFR), a scalp-recorded measure of phase-locked brainstem activity, is often assumed to reflect the pitch of sounds as perceived by humans. In two experiments, we investigated the characteristics of the FFR evoked by complex tones. FFR waveforms to alternating-polarity stimuli were averaged for each polarity and added, to enhance envelope, or subtracted, to enhance temporal fine structure information. In experiment 1, frequency-shifted complex tones, with all harmonics shifted by the same amount in Hertz, were presented diotically. Only the autocorrelation functions (ACFs) of the subtraction-FFR waveforms showed a peak at a delay shifted in the direction of the expected pitch shifts. This expected pitch shift was also present in the ACFs of the output of an auditory nerve model. In experiment 2, the components of a harmonic complex with harmonic numbers 2, 3, and 4 were presented either to the same ear (“mono”) or the third harmonic was presented contralaterally to the ear receiving the even harmonics (“dichotic”). In the latter case, a pitch corresponding to the missing fundamental was still perceived. Monaural control conditions presenting only the even harmonics (“2 + 4”) or only the third harmonic (“3”) were also tested. Both the subtraction and the addition waveforms showed that (1) the FFR magnitude spectra for “dichotic” were similar to the sum of the spectra for the two monaural control conditions and lacked peaks at the fundamental frequency and other distortion products visible for “mono” and (2) ACFs for “dichotic” were similar to those for “2 + 4” and dissimilar to those for “mono.” The results indicate that the neural responses reflected in the FFR preserve monaural temporal information that may be important for pitch, but provide no evidence for any additional processing over and above that already present in the auditory periphery, and do not directly represent the pitch of dichotic stimuli.
- Published
- 2011
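The alternating-polarity analysis described above can be sketched as follows, using a toy half-wave-rectification "response" in place of recorded FFR waveforms (an assumption for illustration only): summing responses to opposite stimulus polarities emphasises the envelope, while subtracting them emphasises temporal fine structure.

```python
import numpy as np

def envelope_and_tfs(resp_pos, resp_neg):
    """Combine responses to opposite-polarity stimuli: the sum emphasises
    envelope following, the difference emphasises temporal fine structure."""
    return (resp_pos + resp_neg) / 2, (resp_pos - resp_neg) / 2

# Toy 'neural' response: half-wave rectification of the stimulus waveform
fs = 8000
t = np.arange(fs // 10) / fs
stim = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 10 * t))
rect = lambda s: np.maximum(s, 0.0)
env, tfs = envelope_and_tfs(rect(stim), rect(-stim))
# env equals |stim|/2 (polarity-invariant envelope); tfs equals stim/2 (fine structure)
```

Because rectification is the only nonlinearity here, the separation is exact; in real FFRs the two combinations only emphasise, rather than isolate, envelope and TFS components.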
23. Inhibition of auditory cortical responses to ipsilateral stimuli during dichotic listening: evidence from magnetoencephalography
- Author
-
Brancucci, A., Babiloni, C., Babiloni, F., Galderisi, S., Mucci, A., Tecchio, F., Zappasodi, F., Pizzella, V., Romani, G. L., and Rossini, P. M.
- Subjects
Adult ,Male ,medicine.medical_specialty ,auditory evoked magnetic fields (aefs) ,auditory pathways ,complex tones ,m100 ,m50 ,right auditory cortex ,Auditory area ,Audiology ,Auditory cortex ,Dichotic Listening Tests ,Tone (musical instrument) ,medicine ,Humans ,M50 ,complex tone ,Decibel ,Auditory Cortex ,Auditory masking ,medicine.diagnostic_test ,Dichotic listening ,General Neuroscience ,Magnetoencephalography ,Neural Inhibition ,Neurophysiology ,auditory evoked magnetic fields (AEFs) ,Acoustic Stimulation ,Evoked Potentials, Auditory ,M100 ,Female ,Psychology ,auditory pathway - Abstract
The present magnetoencephalography (MEG) study on auditory evoked magnetic fields (AEFs) was aimed at verifying whether during dichotic listening the contralateral auditory pathway inhibits the ipsilateral one, as suggested by behavioural and patient studies. Ten healthy subjects were given a randomized series of three complex tones (261, 293 and 391 Hz, 500 ms duration), which were delivered monotically and dichotically at different intensities [60, 70 or 80 dBA (A-weighted decibels)]. MEG data were recorded from the right auditory cortex. Results showed that the M100 amplitude over the right auditory cortex increased progressively when tones of increasing intensity were presented at the ipsilateral (right) ear. This effect on M100 was abolished when a concurrent tone of constant intensity was delivered dichotically at the contralateral (left) ear, suggesting that the contralateral pathway inhibited the ipsilateral one. The ipsilateral inhibition was present only when the contralateral tone's fundamental frequency was similar to that of the ipsilateral tone. It is proposed that this occlusion mechanism is exerted in cortical auditory areas, since the dichotic effects were observed at the M100 but not the M50 component. This is the first evidence of a neurophysiological inhibition exerted by the contralateral auditory pathway over the ipsilateral one during dichotic listening.
- Published
- 2004
- Full Text
- View/download PDF
24. Perception of pure tones and iterated rippled noise for normal hearing and cochlear implant users
- Author
-
Penninger, Richard, Chien, Wade, Jiradejvong, Patpong, Boeke, Emily, Carver, Courtney, Limb, Charles, and Friedland, David
- Subjects
PHONEME RECOGNITION ,PITCH ,Adult ,Male ,Sound Spectrography ,Time Factors ,Clinical Sciences ,Bioengineering ,TEMPORAL FINE-STRUCTURE ,PULSE TRAINS ,Pitch Discrimination ,LISTENERS ,Audiometry ,cochlear implants ,Clinical Research ,Medicine and Health Sciences ,MUSIC PERCEPTION ,Humans ,Psychology ,iterated rippled noise ,Correction of Hearing Impairment ,SPEECH RECOGNITION ,SPECTRAL-RIPPLE ,Hearing Loss ,melody recognition ,Aged ,Assistive Technology ,Prevention ,Rehabilitation ,Middle Aged ,Cochlear Implantation ,Electric Stimulation ,Recognition ,Persons With Hearing Impairments ,pitch perception ,Acoustic Stimulation ,Otorhinolaryngology ,DISCRIMINATION ,Case-Control Studies ,COMPLEX TONES ,Linear Models ,Female ,Music ,Pure-Tone - Abstract
Cochlear implant (CI) users typically perform poorly on musical tasks, especially those based on pitch ranking and melody recognition. It was hypothesized that CI users would show poorer performance on a pitch-ranking and a melody-recognition task presented with iterated rippled noise (IRN) than with pure tones (PT). In addition, it was hypothesized that normal-hearing (NH) listeners would show smaller differences in performance between IRN and PT on these two tasks. In this study, the ability of CI users and NH subjects to rank pitches and to identify melodies created with IRN and PT was assessed in free field in a sound-isolated room. CI subjects scored significantly above chance level with PT stimuli in both tasks; with IRN stimuli their performance was around chance level. NH subjects scored significantly above chance level in both tasks and with all stimuli, and performed significantly better than CI subjects in both tasks. These results illustrate the difficulty CI subjects have in ranking pitches and identifying melodies.
- Published
- 2013
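The iterated rippled noise used in the study above is generated with a delay-and-add network. The "add-same" variant is sketched below; the gain, iteration count, and delay are illustrative choices, not the study's parameters. The output has a pitch at the reciprocal of the delay.

```python
import numpy as np

def iterated_rippled_noise(noise, fs, delay_s, gain=1.0, iterations=8):
    """Add-same delay-and-add network: each iteration adds a delayed copy of
    the running signal, building up periodicity (pitch) at 1/delay_s."""
    d = int(round(delay_s * fs))
    y = noise.astype(float).copy()
    for _ in range(iterations):
        y = y + gain * np.concatenate([np.zeros(d), y[:-d]])
    return y

fs = 16000
rng = np.random.default_rng(0)
irn = iterated_rippled_noise(rng.standard_normal(fs), fs, delay_s=0.004)  # ~250-Hz pitch
```

More iterations (or a higher gain) deepen the spectral ripple and strengthen the pitch; with zero iterations the signal stays plain noise, which is what makes IRN useful for testing pitch mechanisms that cannot rely on resolved spectral peaks.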
25. The Complex Tones of East/Southeast Asian Languages: Current Challenges for Typology and Modelling
- Author
-
Michaud, Alexis, Michaud, Alexis, Centre d'études français sur la Chine contemporaine (CEFC), Centre National de la Recherche Scientifique (CNRS), Langues et civilisations à tradition orale (LACITO), and Université Sorbonne Nouvelle - Paris 3-Institut National des Langues et Civilisations Orientales (Inalco)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
tonogenesis ,Asian tones ,phonation-type characteristics ,[SHS.LANGUE]Humanities and Social Sciences/Linguistics ,tonal typology ,level tones ,complex tones ,[SHS.LANGUE] Humanities and Social Sciences/Linguistics ,registrogenesis ,diachrony of tone systems - Abstract
In some of the tone systems of East and Southeast Asian languages, linguistic tone cannot simply be equated with pitch; some tones have phonation-type characteristics as part of their phonological definition; and there is no compelling evidence for analyzing tonal contours into sequences of levels. Salient findings are reviewed, first from a synchronic perspective, then from a diachronic one, to bring out facts that are relevant for tonal typology and for evolutionary approaches to phonology.
- Published
- 2012
26. Pitch perception using a Semitone mapping to improve music representation in Nucleus Cochlear Implants
- Author
-
Omran, S A, Büchler, M, Lai, W K, Dillier, N, and University of Zurich
- Subjects
Virtual Channels ,Semitone Mapping ,610 Medicine & health ,10045 Clinic for Otorhinolaryngology ,Pitch ranking ,Complex Tones ,Music - Published
- 2008
- Full Text
- View/download PDF
27. An investigation on similarities between pitch perception and localization of harmonic complex tones
- Author
-
Allan, Jon
- Subjects
genetic structures ,Localization ,Perceptions ,Harmonic ,Social and Behavioural Sciences, Law ,sense organs ,complex tones ,psychological phenomena and processes ,Pitch perception - Abstract
This paper investigates two types of human auditory perception, localization and pitch perception, to see whether a common mechanism could underlie both abilities. The relevant theories are briefly reviewed. From those theories it is shown that, for lower frequencies, both perceptions seem to be based on timing information from the hair cells in the cochlea. An experiment is also outlined and performed to investigate our ability to localize harmonic complex tones as a function of how many partials the tones contain. The results are compared with earlier reported experiments on pitch perception, and some interesting similarities emerge that suggest a common mechanism could underlie both perceptions. However, more subjects are needed before reliable conclusions can be drawn.
- Published
- 2007
28. The role of temporal fine structure information for the low pitch of high-frequency complex tones
- Author
-
Santurette, Sébastien, Dau, Torsten, Santurette, Sébastien, and Dau, Torsten
- Abstract
The fused low pitch evoked by complex tones containing only unresolved high-frequency components demonstrates the ability of the human auditory system to extract pitch using a temporal mechanism in the absence of spectral cues. However, the temporal features used by such a mechanism have been a matter of debate. For stimuli with components lying exclusively in high-frequency spectral regions, the slowly varying temporal envelope of sounds is often assumed to be the only information contained in auditory temporal representations, and it has remained controversial to what extent the fast amplitude fluctuations, or temporal fine structure (TFS), of the conveyed signal can be processed. Using a pitch-matching paradigm, the present study found that the low pitch of inharmonic transposed tones with unresolved components was consistent with the timing between the most prominent TFS maxima in their waveforms, rather than envelope maxima. Moreover, envelope cues did not take over as the absolute frequency or rank of the lowest component was raised and TFS cues thus became less effective. Instead, the low pitch became less salient. This suggests that complex pitch perception does not rely on envelope coding as such, and that TFS representation might persist at higher frequencies than previously thought.
- Published
- 2011
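The transposed tones discussed above place low-frequency timing information in a high-frequency cochlear channel by giving a high-frequency carrier the envelope of a half-wave-rectified low-frequency sinusoid. A minimal sketch follows; the classic stimulus also low-pass filters the rectified modulator, a step omitted here, and the frequencies are illustrative, not the study's.

```python
import numpy as np

def transposed_tone(fs, dur, carrier_hz, modulator_hz):
    """High-frequency carrier multiplied by a half-wave-rectified
    low-frequency sinusoid, so modulator-rate timing information is
    conveyed in a high-frequency channel."""
    t = np.arange(int(fs * dur)) / fs
    env = np.maximum(np.sin(2 * np.pi * modulator_hz * t), 0.0)  # rectified modulator
    return env * np.sin(2 * np.pi * carrier_hz * t)

fs = 16000
y = transposed_tone(fs, 0.1, carrier_hz=4000, modulator_hz=125)
# The waveform is silent during the modulator's negative half-cycles,
# so its temporal fine-structure maxima recur at the 8-ms modulator period.
```

Shifting the component frequencies away from harmonic relationships (the paper's inharmonic transposed tones) displaces the TFS maxima relative to the envelope maxima, which is what lets a pitch-matching experiment tell the two cues apart.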
29. Effects of relative phases on pitch and timbre in the piano bass range
- Author
-
Galembo, A., Askenfelt, Anders, Cuddy, L. L., Russo, F. A., Galembo, A., Askenfelt, Anders, Cuddy, L. L., and Russo, F. A.
- Abstract
Piano bass tones raise questions related to the perception of multicomponent, inharmonic tones. In this study, the influence of the relative phases among partials on pitch and timbre was investigated for synthesized bass tones with piano-like inharmonicity. Three sets of bass tones (A0 = 27.5 Hz, 100 partials, flat spectral envelope) were generated: harmonic, low inharmonic, and high inharmonic. For each set, five starting phase relations among partials were applied: sine phases, alternate (sine/cosine) phases, random phases, Schroeder phases, and negative Schroeder phases. The pitch and timbre of the tones were influenced markedly by the starting phases. Listening tests showed that listeners are able to discriminate between tones having different starting phase relations, and also that the pitch could be changed by manipulating the relative phases (octave, fifth, major third). A piano-like inharmonicity gives a characteristic randomizing effect of the phase relations over time in tones starting with nonrandom phase relations. A measure of the regularity of the phase differences between adjacent partials is suggested for quantifying this randomization process. The observed phase effects might be of importance in synthesizing, recording, and reproducing piano music.
- Published
- 2001
- Full Text
- View/download PDF
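The phase manipulations described above can be sketched with additive synthesis. The Schroeder-phase formula and the stiff-string inharmonicity relation f_n = n*f0*sqrt(1 + B*n^2) used below are the standard conventions, assumed here rather than taken from the paper, and the synthesis parameters are illustrative.

```python
import numpy as np

def complex_tone(fs, dur, f0, n_partials, phase_rule="sine", b_inharm=0.0):
    """Additive synthesis with selectable starting phases.
    Partial frequencies follow the stiff-string formula
    f_n = n*f0*sqrt(1 + B*n^2); B = 0 gives a harmonic complex."""
    t = np.arange(int(fs * dur)) / fs
    n = np.arange(1, n_partials + 1)
    freqs = n * f0 * np.sqrt(1.0 + b_inharm * n**2)
    phases = {
        "sine": np.zeros(n_partials),
        "alternate": np.where(n % 2 == 0, np.pi / 2, 0.0),  # alternating sine/cosine
        "random": np.random.default_rng(0).uniform(0, 2 * np.pi, n_partials),
        "schroeder": np.pi * n * (n - 1) / n_partials,       # positive Schroeder
        "neg_schroeder": -np.pi * n * (n - 1) / n_partials,  # negative Schroeder
    }[phase_rule]
    return sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

fs = 8000
crest = lambda y: np.max(np.abs(y)) / np.sqrt(np.mean(y**2))  # peak-to-RMS ratio
peaky = complex_tone(fs, 0.2, 27.5, 100, "sine")       # pulse-like, high crest factor
flat = complex_tone(fs, 0.2, 27.5, 100, "schroeder")   # Schroeder phases flatten the waveform
```

Although all five phase rules give identical power spectra, their waveforms differ strongly (e.g. in crest factor), which is why starting phases can audibly affect pitch and timbre in this frequency range.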
30. An experimental investigation of the human perception thresholds of complex low frequency sounds
- Subjects
Complex Tones ,Low Frequency Noise ,Perception Threshold - Abstract
An experiment was conducted to investigate the characteristics of human perception of complex low-frequency noise. Fifteen subjects took part; perception thresholds were measured for pure tones and for complex tones consisting of two frequency components. The frequencies used in the experiment were 25, 31.5, 40 and 50 Hz. For the complex tones, the perception threshold was determined by varying the sound pressure level of one frequency component while the level of the other component was fixed at 5 or 10 dB below the pure-tone perception threshold at the corresponding frequency. It was found that a frequency component at a level below the pure-tone threshold may nevertheless affect the perception of the complex tone.
- Published
- 2005