42 results for "Hannon, Erin E."
Search Results
2. Acoustic and Semantic Processing of Auditory Scenes in Children with Autism Spectrum Disorders
- Author
Yerkes, Breanne D., Vanden Bosch der Nederlanden, Christina M., Beasley, Julie F., Hannon, Erin E., and Snyder, Joel S.
- Published
- 2024
- Full Text
- View/download PDF
3. Sustained Musical Beat Perception Develops into Late Childhood and Predicts Phonological Abilities
- Author
Nave, Karli M., Snyder, Joel S., and Hannon, Erin E.
- Abstract
Sensitivity to auditory rhythmic structures in music and language is evident as early as infancy, but performance on beat perception tasks is often well below adult levels and improves gradually with age. While some research has suggested the ability to perceive musical beat develops early, even in infancy, it remains unclear whether adult-like perception of musical beat is present in children. The capacity to sustain an internal sense of the beat is critical for various rhythmic musical behaviors, yet very little is known about the development of this ability. In this study, 223 participants ranging in age from 4 to 23 years from the Las Vegas, Nevada, community completed a musical beat discrimination task, during which they first listened to a strongly metrical musical excerpt and then attempted to sustain their perception of the musical beat while listening to a repeated, beat-ambiguous rhythm for up to 14.4 s. They then indicated whether a drum probe matched or did not match the beat. Results suggested that the ability to identify the matching probe improved throughout middle childhood (8-9 years) and did not reach adult-like levels until adolescence (12-14 years). Furthermore, scores on the beat perception task were positively related to phonological processing, after accounting for age, short-term memory, and music and dance training. This study lends further support to the notion that children's capacity for beat perception is not fully developed until adolescence and suggests we should reconsider assumptions of musical beat mastery by infants and young children.
- Published
- 2023
- Full Text
- View/download PDF
4. Quantifying Sources of Variability in Infancy Research Using the Infant-Directed-Speech Preference
- Author
Frank, Michael C, Alcock, Katherine Jane, Arias-Trejo, Natalia, Aschersleben, Gisa, Baldwin, Dare, Barbu, Stephanie, Bergelson, Elika, Bergmann, Christina, Black, Alexis K, Blything, Ryan, Bohland, Maximilian P, Bolitho, Petra, Borovsky, Arielle, Brady, Shannon M, Braun, Bettina, Brown, Anna, Byers-Heinlein, Krista, Campbell, Linda E, Cashon, Cara, Choi, Mihye, Christodoulou, Joan, Cirelli, Laura K, Conte, Stefania, Cordes, Sara, Cox, Christopher, Cristia, Alejandrina, Cusack, Rhodri, Davies, Catherine, de Klerk, Maartje, Delle Luche, Claire, de Ruiter, Laura, Dinakar, Dhanya, Dixon, Kate C, Durier, Virginie, Durrant, Samantha, Fennell, Christopher, Ferguson, Brock, Ferry, Alissa, Fikkert, Paula, Flanagan, Teresa, Floccia, Caroline, Foley, Megan, Fritzsche, Tom, Frost, Rebecca LA, Gampe, Anja, Gervain, Judit, Gonzalez-Gomez, Nayeli, Gupta, Anna, Hahn, Laura E, Hamlin, J Kiley, Hannon, Erin E, Havron, Naomi, Hay, Jessica, Hernik, Mikolaj, Hohle, Barbara, Houston, Derek M, Howard, Lauren H, Ishikawa, Mitsuhiko, Itakura, Shoji, Jackson, Iain, Jakobsen, Krisztina V, Jarto, Marianna, Johnson, Scott P, Junge, Caroline, Karadag, Didar, Kartushina, Natalia, Kellier, Danielle J, Keren-Portnoy, Tamar, Klassen, Kelsey, Kline, Melissa, Ko, Eon-Suk, Kominsky, Jonathan F, Kosie, Jessica E, Kragness, Haley E, Krieger, Andrea AR, Krieger, Florian, Lany, Jill, Lazo, Roberto J, Lee, Michelle, Leservoisier, Chloe, Levelt, Claartje, Lew-Williams, Casey, Lippold, Matthias, Liszkowski, Ulf, Liu, Liquan, Luke, Steven G, Lundwall, Rebecca A, Cassia, Viola Macchi, Mani, Nivedita, Marino, Caterina, Martin, Alia, Mastroberardino, Meghan, Mateu, Victoria, Mayor, Julien, Menn, Katharina, Michel, Christine, Moriguchi, Yusuke, Morris, Benjamin, Nave, Karli M, and Nazzi, Thierry
- Subjects
language acquisition, speech perception, infant-directed speech, reproducibility, experimental methods, open data, open materials, preregistered, Behavioral and Social Science, Basic Behavioral and Social Science, Pediatric, Clinical Research
- Abstract
Psychological scientists have become increasingly concerned with issues related to methodology and replicability, and infancy researchers in particular face specific challenges related to replicability: For example, high-powered studies are difficult to conduct, testing conditions vary across labs, and different labs have access to different infant populations. Addressing these concerns, we report on a large-scale, multisite study aimed at (a) assessing the overall replicability of a single theoretically important phenomenon and (b) examining methodological, cultural, and developmental moderators. We focus on infants’ preference for infant-directed speech (IDS) over adult-directed speech (ADS). Stimuli of mothers speaking to their infants and to an adult in North American English were created using seminaturalistic laboratory-based audio recordings. Infants’ relative preference for IDS and ADS was assessed across 67 laboratories in North America, Europe, Australia, and Asia using the three common methods for measuring infants’ discrimination (head-turn preference, central fixation, and eye tracking). The overall meta-analytic effect size (Cohen’s d) was 0.35, 95% confidence interval = [0.29, 0.42], which was reliably above zero but smaller than the meta-analytic mean computed from previous literature (0.67). The IDS preference was significantly stronger in older children, in those children for whom the stimuli matched their native language and dialect, and in data from labs using the head-turn preference procedure. Together, these findings replicate the IDS preference but suggest that its magnitude is modulated by development, native-language experience, and testing procedure.
- Published
- 2020
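Note on the effect size reported in the abstract above: Cohen's d is a standardized mean difference. For a within-participant preference measure of the kind used here, one common form (the exact estimator used by the consortium may differ) is

\[ d = \frac{\bar{D}}{s_D}, \qquad D_i = (\text{looking time to IDS})_i - (\text{looking time to ADS})_i, \]

that is, the mean per-infant looking-time difference divided by the standard deviation of those differences. On that reading, d = 0.35 means the average IDS preference was roughly a third of a standard deviation above zero.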
5. Misophonia reactions in the general population are correlated with strong emotional reactions to other everyday sensory–emotional experiences.
- Author
Mednicoff, Solena D., Barashy, Sivan, Vollweiler, David J., Benning, Stephen D., Snyder, Joel S., and Hannon, Erin E.
- Subjects
MISOPHONIA, EMOTIONAL experience, SENSORIMOTOR integration, STIMULUS & response (Psychology), PHENOTYPES
- Abstract
Misophonic experiences are common in the general population, and they may shed light on everyday emotional reactions to multi-modal stimuli. We performed an online study of a non-clinical sample to understand the extent to which adults who have misophonic reactions are generally reactive to a range of audio-visual emotion-inducing stimuli. We also hypothesized that musicality might be predictive of one's emotional reactions to these stimuli because music is an activity that involves strong connections between sensory processing and meaningful emotional experiences. Participants completed self-report scales of misophonia and musicality. They also watched videos meant to induce misophonia, autonomous sensory meridian response (ASMR) and musical chills, and were asked to click a button whenever they had any emotional reaction to the video. They also rated the emotional valence and arousal of each video. Reactions to misophonia videos were predicted by reactions to ASMR and chills videos, which could indicate that the frequency with which individuals experience emotional responses varies similarly across both negative and positive emotional contexts. Musicality scores were not correlated with measures of misophonia. These findings could reflect a general phenotype of stronger emotional reactivity to meaningful sensory inputs. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Synchronization and Continuation Tapping to Complex Meters
- Author
Snyder, Joel S., Hannon, Erin E., Large, Edward W., and Christiansen, Morten H.
- Published
- 2006
- Full Text
- View/download PDF
7. Tuning in to Musical Rhythms: Infants Learn More Readily Than Adults
- Author
Hannon, Erin E., Trehub, Sandra E., and Purves, Dale
- Published
- 2005
8. Babies know bad dancing when they see it: Older but not younger infants discriminate between synchronous and asynchronous audiovisual musical displays
- Author
Hannon, Erin E., Schachner, Adena, and Nave-Blodgett, Jessica E.
- Published
- 2017
- Full Text
- View/download PDF
9. Children Use Object-Level Category Knowledge to Detect Changes in Complex Auditory Scenes
- Author
Vanden Bosch der Nederlanden, Christina M., Snyder, Joel S., and Hannon, Erin E.
- Abstract
Children interact with and learn about all types of sound sources, including dogs, bells, trains, and human beings. Although it is clear that knowledge of semantic categories for everyday sights and sounds develops during childhood, there are very few studies examining how children use this knowledge to make sense of auditory scenes. We used a change deafness paradigm and an object-encoding task to investigate how children (6, 8, and 10 years of age) and adults process auditory scenes composed of everyday sounds (e.g., human voices, animal calls, environmental sounds, and musical instruments). Results indicated that although change deafness was present and robust at all ages, listeners improved at detecting changes with age. All listeners were less sensitive to changes within the same semantic category than to small acoustic changes, suggesting that, regardless of age, listeners relied heavily on semantic category knowledge to detect changes. Furthermore, all listeners showed less change deafness when they correctly encoded change-relevant objects (i.e., when they remembered hearing the changing object during the task). Finally, we found that all listeners were better at encoding human voices and were more sensitive to detecting changes involving the human voice. Despite poorer overall performance compared with adults, children detect changes in complex auditory scenes much like adults, using high-level knowledge about auditory objects to guide processing, with special attention to the human voice.
- Published
- 2016
- Full Text
- View/download PDF
10. Metrical Categories in Infancy and Adulthood
- Author
Hannon, Erin E. and Trehub, Sandra E.
- Published
- 2003
11. Cues to Perceiving Tonal Stability in Music: The Role of Temporal Structure
- Author
Rosenthal, Matthew A. and Hannon, Erin E.
- Published
- 2016
12. Developmental changes in the categorization of speech and song.
- Author
Vanden Bosch der Nederlanden, Christina M., Qi, Xin, Sequeira, Sarah, Seth, Prakhar, Grahn, Jessica A., Joanisse, Marc F., and Hannon, Erin E.
- Subjects
SPEECH, SONGS
- Abstract
Music and language are two fundamental forms of human communication. Many studies examine the development of music‐ and language‐specific knowledge, but few studies compare how listeners know they are listening to music or language. Although we readily differentiate these domains, how we distinguish music and language—and especially speech and song—is not obvious. In two studies, we asked how listeners categorize speech and song. Study 1 used online survey data to illustrate that 4‐ to 17‐year‐olds and adults have verbalizable distinctions for speech and song. At all ages, listeners described speech and song differences based on acoustic features, but compared with older children, 4‐ to 7‐year‐olds more often used volume to describe differences, suggesting that they are still learning to identify the features most useful for differentiating speech from song. Study 2 used a perceptual categorization task to demonstrate that 4–8‐year‐olds and adults readily categorize speech and song, but this ability improves with age especially for identifying song. Despite generally rating song as more speech‐like, 4‐ and 6‐year‐olds rated ambiguous speech–song stimuli as more song‐like than 8‐year‐olds and adults. Four acoustic features predicted song ratings: F0 instability, utterance duration, harmonicity, and spectral flux. However, 4‐ and 6‐year‐olds' song ratings were better predicted by F0 instability than by harmonicity and utterance duration. These studies characterize how children develop conceptual and perceptual understandings of speech and song and suggest that children under age 8 are still learning what features are important for categorizing utterances as speech or song. Research Highlights: Children and adults conceptually and perceptually categorize speech and song from age 4. Listeners use F0 instability, harmonicity, spectral flux, and utterance duration to determine whether vocal stimuli sound like song. Acoustic cue weighting changes with age, becoming adult‐like at age 8 for perceptual categorization and at age 12 for conceptual differentiation. Young children are still learning to categorize speech and song, which leaves open the possibility that music‐ and language‐specific skills are not so domain‐specific. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Finding the music of speech: Musical knowledge influences pitch processing in speech
- Author
Vanden Bosch der Nederlanden, Christina M., Hannon, Erin E., and Snyder, Joel S.
- Published
- 2015
- Full Text
- View/download PDF
14. How musical are music video game players?
- Author
Pasinski, Amanda C., Hannon, Erin E., and Snyder, Joel S.
- Published
- 2016
- Full Text
- View/download PDF
15. Familiarity Overrides Complexity in Rhythm Perception: A Cross-Cultural Comparison of American and Turkish Listeners
- Author
Hannon, Erin E., Soley, Gaye, and Ullal, Sangeeta
- Abstract
Despite the ubiquity of dancing and synchronized movement to music, relatively few studies have examined cognitive representations of musical rhythm and meter among listeners from contrasting cultures. We aimed to disentangle the contributions of culture-general and culture-specific influences by examining American and Turkish listeners' detection of temporal disruptions (ranging from 50 to 250 ms in duration) to three types of stimuli: simple rhythms found in both American and Turkish music, complex rhythms found only in Turkish music, and highly complex rhythms that are rare in all cultures. Americans were most accurate when detecting disruptions to the simple rhythm. However, they performed less accurately but comparably in both the complex and highly complex conditions. By contrast, Turkish participants performed accurately and indistinguishably in both simple and complex conditions. However, they performed less accurately in the unfamiliar, highly complex condition. Together, these experiments implicate a crucial role of culture-specific listening experience and acquired musical knowledge in rhythmic pattern perception. (Contains 4 figures and 1 footnote.)
- Published
- 2012
- Full Text
- View/download PDF
16. Constraints on Infants' Musical Rhythm Perception: Effects of Interval Ratio Complexity and Enculturation
- Author
Hannon, Erin E., Soley, Gaye, and Levine, Rachel S.
- Abstract
Effects of culture-specific experience on musical rhythm perception are evident by 12 months of age, but the role of culture-general rhythm processing constraints during early infancy has not been explored. Using a habituation procedure with 5- and 7-month-old infants, we investigated effects of temporal interval ratio complexity on discrimination of standard from novel musical patterns containing 200-ms disruptions. Infants were tested in three ratio conditions: simple (2:1), which is typical in Western music, complex (3:2), which is typical in other musical cultures, and highly complex (7:4), which is relatively rare in music throughout the world. Unlike adults and older infants, whose accuracy was predicted by familiarity, younger infants were influenced by ratio complexity, as shown by their successful discrimination in the simple and complex conditions but not in the highly complex condition. The findings suggest that ratio complexity constrains rhythm perception even prior to the acquisition of culture-specific biases.
- Published
- 2011
- Full Text
- View/download PDF
17. Infant-Directed Speech Drives Social Preferences in 5-Month-Old Infants
- Author
Schachner, Adena and Hannon, Erin E.
- Abstract
Adults across cultures speak to infants in a specific infant-directed manner. We asked whether infants use this manner of speech (infant- or adult-directed) to guide their subsequent visual preferences for social partners. We found that 5-month-old infants encode an individual's use of infant-directed speech and adult-directed speech, and use this information to guide their subsequent visual preferences for individuals even after the speech behavior has ended. Use of infant-directed speech may act as an effective cue for infants to select appropriate social partners, allowing infants to focus their attention on individuals who will provide optimal care and opportunity for learning. This selectivity may play a crucial role in establishing the foundations of social cognition. (Contains 3 figures.)
- Published
- 2011
- Full Text
- View/download PDF
18. Infants Prefer the Musical Meter of Their Own Culture: A Cross-Cultural Comparison
- Author
Soley, Gaye and Hannon, Erin E.
- Abstract
Infants prefer native structures such as familiar faces and languages. Music is a universal human activity containing structures that vary cross-culturally. For example, Western music has temporally regular metric structures, whereas music of the Balkans (e.g., Bulgaria, Macedonia, Turkey) can have both regular and irregular structures. We presented 4- to 8-month-old American and Turkish infants with contrasting melodies to determine whether cultural background would influence their preferences for musical meter. In Experiment 1, American infants preferred Western over Balkan meter, whereas Turkish infants, who were familiar with both Western and Balkan meters, exhibited no preference. Experiments 2 and 3 presented infants with either a Western or Balkan meter paired with an arbitrary rhythm with complex ratios not common to any musical culture. Both Turkish and American infants preferred Western and Balkan meter to an arbitrary meter. Infants' musical preferences appear to be driven by culture-specific experience and a culture-general preference for simplicity. (Contains 2 figures and 1 footnote.)
- Published
- 2010
- Full Text
- View/download PDF
19. Adaptation Reveals Multiple Levels of Representation in Auditory Stream Segregation
- Author
Snyder, Joel S., Carter, Olivia L., and Hannon, Erin E.
- Abstract
When presented with alternating low and high tones, listeners are more likely to perceive 2 separate streams of tones ("streaming") than a single coherent stream when the frequency separation (Δf) between tones is greater and the number of tone presentations is greater ("buildup"). However, the same large-Δf sequence reduces streaming for subsequent patterns presented after a gap of up to several seconds. Buildup occurs at a level of neural representation with sharp frequency tuning. The authors used adaptation to demonstrate that the contextual effect of prior Δf arose from a representation with broad frequency tuning, unlike buildup. Separate adaptation did not occur in a representation of Δf independent of frequency range, suggesting that any frequency-shift detectors undergoing adaptation are also frequency specific. A separate effect of prior perception was observed, dissociating stimulus-related (i.e., Δf) and perception-related (i.e., 1 stream vs. 2 streams) adaptation. Viewing a visual analogue to auditory streaming had no effect on subsequent perception of streaming, suggesting adaptation in auditory-specific brain circuits. These results, along with previous findings on buildup, suggest that processing in at least 3 levels of auditory neural representation underlies segregation and formation of auditory streams. (Contains 1 footnote, 2 tables, and 7 figures.)
- Published
- 2009
- Full Text
- View/download PDF
20. The Role of Melodic and Temporal Cues in Perceiving Musical Meter
- Author
Hannon, Erin E., Snyder, Joel S., and Eerola, Tuomas
- Abstract
A number of different cues allow listeners to perceive musical meter. Three experiments examined effects of melodic and temporal accents on perceived meter in excerpts from folk songs scored in 6/8 or 3/4 meter. Participants matched excerpts with 1 of 2 metrical drum accompaniments. Melodic accents included contour change, melodic leaps, registral extreme, melodic repetition, and harmonic rhythm. Two experiments with isochronous melodies showed that contour change and melodic repetition predicted judgments. For longer melodies in the 2nd experiment, variables predicted judgments best at the beginning of excerpts. The final experiment, with rhythmically varied melodies, showed that temporal accents, tempo, and contour change were the strongest predictors of meter. The authors' findings suggest that listeners combine multiple melodic and temporal features to perceive musical meter.
- Published
- 2004
21. Metrical Categories in Infancy and Adulthood
- Author
Hannon, Erin E. and Trehub, Sandra E.
- Published
- 2005
22. Everyday Musical Experience Is Sufficient to Perceive the Speech-to-Song Illusion
- Author
Vanden Bosch der Nederlanden, Christina M., Hannon, Erin E., and Snyder, Joel S.
- Published
- 2015
- Full Text
- View/download PDF
23. Conventional rhythms enhance infants' and adults' perception of musical patterns
- Author
Trehub, Sandra E. and Hannon, Erin E.
- Published
- 2009
- Full Text
- View/download PDF
24. Elements of musical and dance sophistication predict musical groove perception.
- Author
O'Connell, Samantha R., Nave-Blodgett, Jessica E., Wilson, Grace E., Hannon, Erin E., and Snyder, Joel S.
- Subjects
MUSICAL perception, DANCE, BALLROOM dancing, FORM perception, MUSICALS
- Abstract
Listening to groovy music is an enjoyable experience and a common human behavior in some cultures. Specifically, many listeners agree that songs they find to be more familiar and pleasurable are more likely to induce the experience of musical groove. While the pleasurable and dance-inducing effects of musical groove are omnipresent, we know less about how subjective feelings toward music, individual musical or dance experiences, or more objective musical perception abilities are correlated with the way we experience groove. Therefore, the present study aimed to evaluate how musical and dance sophistication relates to musical groove perception. One hundred twenty-four participants completed an online study during which they rated 20 songs, considered high- or low-groove, and completed the Goldsmiths Musical Sophistication Index, the Goldsmiths Dance Sophistication Index, the Beat and Meter Sensitivity Task, and a modified short version of the Profile for Music Perception Skills. Our results reveal that measures of perceptual abilities, musical training, and social dancing predicted the difference in groove rating between high- and low-groove music. Overall, these findings support the notion that listeners' individual experiences and predispositions may shape their perception of musical groove, although other causal directions are also possible. This research helps elucidate the correlates and possible causes of musical groove perception in a wide range of listeners. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. Auditory affective processing, musicality, and the development of misophonic reactions.
- Author
Mednicoff, Solena D., Barashy, Sivan, Gonzales, Destiny, Benning, Stephen D., Snyder, Joel S., and Hannon, Erin E.
- Subjects
HYPERACUSIS, AUDITORY perception, TINNITUS, AUTISM spectrum disorders, WILLIAMS syndrome
- Abstract
Misophonia can be characterized both as a condition and as a negative affective experience. Misophonia is described as feeling irritation or disgust in response to hearing certain sounds, such as eating, drinking, gulping, and breathing. Although the earliest misophonic experiences are often described as occurring during childhood, relatively little is known about the developmental pathways that lead to individual variation in these experiences. This literature review discusses evidence of misophonic reactions during childhood and explores the possibility that early heightened sensitivities to both positive and negative sounds, such as to music, might indicate a vulnerability for misophonia and misophonic reactions. We will review when misophonia may develop, how it is distinguished from other auditory conditions (e.g., hyperacusis, phonophobia, or tinnitus), and how it relates to developmental disorders (e.g., autism spectrum disorder or Williams syndrome). Finally, we explore the possibility that children with heightened musicality could be more likely to experience misophonic reactions and develop misophonia. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. Effects of context on auditory stream segregation
- Author
Snyder, Joel S., Carter, Olivia L., Lee, Suh-Kyung, Hannon, Erin E., and Alain, Claude
- Subjects
Auditory perception -- Research, Context effects (Psychology) -- Research, Sensory memory -- Research, Psychology and mental health
- Abstract
The authors examined the effect of preceding context on auditory stream segregation. Low tones (A), high tones (B), and silences (-) were presented in an ABA- pattern. Participants indicated whether they perceived 1 or 2 streams of tones. The A tone frequency was fixed, and the B tone was the same as the A tone or had 1 of 3 higher frequencies. Perception of 2 streams in the current trial increased with greater frequency separation between the A and B tones (Δf). Larger Δf in previous trials modified this pattern, causing less streaming in the current trial. This occurred even when listeners were asked to bias their perception toward hearing 1 stream or 2 streams. The effect of previous Δf was not due to response bias because simply perceiving 2 streams in the previous trial did not cause less streaming in the current trial. Finally, the effect of previous Δf was diminished, though still present, when the silent duration between trials was increased to 5.76 s. The time course of this context effect on streaming implicates the involvement of auditory sensory memory or neural adaptation. Keywords: auditory scene analysis, auditory sensory memory, neural adaptation
- Published
- 2008
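To make the stimulus design described in the record above concrete, the following is a minimal illustrative Python sketch of an ABA- tone sequence with a configurable frequency separation (Δf) between the low (A) and high (B) tones. This is not the authors' stimulus code; the tone frequency, duration, ramp, and repetition values are assumptions chosen only for illustration.

```python
# Illustrative sketch (not the authors' code): build an ABA- tone sequence in
# which A and B are pure tones separated by delta_f_semitones and "-" is a
# silent gap. All parameter values are assumptions for demonstration.
import numpy as np

def pure_tone(freq_hz, dur_s, sr=44100, ramp_s=0.01):
    """Pure tone with brief linear onset/offset ramps to avoid clicks."""
    t = np.arange(int(dur_s * sr)) / sr
    tone = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(ramp_s * sr)
    ramp = np.linspace(0.0, 1.0, n_ramp)
    tone[:n_ramp] *= ramp
    tone[-n_ramp:] *= ramp[::-1]
    return tone

def aba_sequence(a_freq=500.0, delta_f_semitones=6, tone_dur=0.1,
                 n_triplets=10, sr=44100):
    """Concatenate n_triplets of A-B-A-silence; a larger delta_f makes the A
    and B tones more likely to segregate into two perceptual streams."""
    b_freq = a_freq * 2 ** (delta_f_semitones / 12.0)
    a = pure_tone(a_freq, tone_dur, sr)
    b = pure_tone(b_freq, tone_dur, sr)
    silence = np.zeros(int(tone_dur * sr))
    triplet = np.concatenate([a, b, a, silence])
    return np.tile(triplet, n_triplets)

sequence = aba_sequence()  # about 4 s of audio at the illustrative defaults
print(len(sequence) / 44100, "seconds")
```

Varying delta_f_semitones across trials is the kind of manipulation the study builds on: small separations tend to be heard as one coherent stream, large ones as two.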
27. Music acquisition: effects of enculturation and formal training on development
- Author
Hannon, Erin E. and Trainor, Laurel J.
- Published
- 2007
- Full Text
- View/download PDF
28. Infant music perception: Domain-general or domain-specific mechanisms?
- Author
Trehub, Sandra E. and Hannon, Erin E.
- Published
- 2006
- Full Text
- View/download PDF
29. Effects of perceptual experience on children's and adults' perception of unfamiliar rhythms
- Author
Hannon, Erin E., Vanden Bosch der Nederlanden, Christina M., and Tichko, Parker
- Published
- 2012
- Full Text
- View/download PDF
30. Infants use meter to categorize rhythms and melodies: Implications for musical structure learning
- Author
Hannon, Erin E. and Johnson, Scott P.
- Published
- 2005
- Full Text
- View/download PDF
31. Steady state‐evoked potentials of subjective beat perception in musical rhythms.
- Author
Nave, Karli M., Hannon, Erin E., and Snyder, Joel S.
- Subjects
*MUSICAL perception, *MUSICAL meter & rhythm, *ELECTROENCEPHALOGRAPHY, *STIMULUS & response (Psychology)
- Abstract
Synchronization of movement to music is a seemingly universal human capacity that depends on sustained beat perception. Previous research has suggested that listeners' conscious perception of the musical structure (e.g., beat and meter) might be reflected in neural responses that follow the frequency of the beat. However, the extent to which these neural responses directly reflect concurrent, listener‐reported perception of musical beat versus stimulus‐driven activity is understudied. We investigated whether steady state‐evoked potentials (SSEPs), measured using electroencephalography (EEG), reflect conscious perception of beat by holding the stimulus constant while contextually manipulating listeners' perception and measuring perceptual responses on every trial. Listeners with minimal music training heard a musical excerpt that strongly supported one of two beat patterns (context phase), followed by a rhythm consistent with either beat pattern (ambiguous phase). During the final phase, listeners indicated whether or not a superimposed drum matched the perceived beat (probe phase). Participants were more likely to indicate that the probe matched the music when that probe matched the original context, suggesting an ability to maintain the beat percept through the ambiguous phase. Likewise, we observed that the spectral amplitude during the ambiguous phase was higher at frequencies that matched the beat of the preceding context. Exploratory analyses investigated whether EEG amplitude at the beat‐related SSEPs predicted performance on the beat induction task on a single‐trial basis, but were inconclusive. Our findings substantiate the claim that auditory SSEPs reflect conscious perception of musical beat and not just stimulus features. Although prior research suggests that SSEPs reflect musical beat perception, our study is one of the first to use real music to induce musical beat and to collect a listener‐reported measure of perception concurrently with SSEPs. While recent rodent findings suggest that lower‐level propagation of stimulus differences may fully account for the relation between SSEPs and musical beat, we provide counterevidence by showing the effect after eliminating stimulus differences between conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
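The SSEP analysis summarized in the record above follows a frequency-tagging logic: neural activity that tracks the beat should show elevated spectral amplitude at beat-related frequencies. The sketch below illustrates that general idea in Python with numpy only; it is not the authors' pipeline, and the sampling rate, epoch length, and 2 Hz beat frequency are assumptions made for the example.

```python
# Generic frequency-tagging illustration (not the authors' pipeline): read the
# single-sided FFT amplitude of an EEG epoch at the bin nearest a target
# (beat-related) frequency. All parameters below are illustrative assumptions.
import numpy as np

def amplitude_at_frequency(eeg, sr, target_hz):
    """Single-sided FFT amplitude of a 1-D epoch at the bin closest to target_hz."""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)
    return spectrum[np.argmin(np.abs(freqs - target_hz))]

# Fake 10-s "EEG" epoch: a unit-amplitude 2 Hz sinusoid (the "beat") plus noise.
sr = 250
t = np.arange(10 * sr) / sr
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 2.0 * t) + rng.normal(0.0, 2.0, t.size)

print(amplitude_at_frequency(epoch, sr, 2.0))  # close to 1: beat-related bin
print(amplitude_at_frequency(epoch, sr, 2.7))  # near the noise floor
```

In an actual SSEP analysis this amplitude would be averaged over trials and compared across conditions (e.g., against neighboring noise bins or between the two induced beat interpretations), which is the kind of contrast the abstract describes.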
32. The Developmental Origins of the Perception and Production of Musical Rhythm.
- Author
Hannon, Erin E., Nave‐Blodgett, Jessica E., and Nave, Karli M.
- Subjects
*MUSICAL meter & rhythm, *MUSIC education, *MUSIC & children, *COGNITIVE ability, *BODY movement
- Abstract
Abstract: In recent years, interest has grown in potential links between abilities in musical rhythm and the development of language and reading, as well as in using music lessons as an intervention or diagnostic tool for individuals at risk for language and reading delays. Nevertheless, the development of abilities in musical rhythm is a relatively new area of study. In this article, we review knowledge about the development of musical rhythm, highlighting key musical structures of rhythm, beat, and meter, and suggesting areas of inquiry. Further research is needed to understand how children acquire the perceptual and cognitive underpinnings of universal musical behaviors such as dancing, clapping, and singing in time with music. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
33. A Collaborative Approach to Infant Research: Promoting Reproducibility, Best Practices, and Theory-Building.
- Author
Frank, Michael C., Bergelson, Elika, Bergmann, Christina, Cristia, Alejandrina, Floccia, Caroline, Gervain, Judit, Hamlin, J. Kiley, Hannon, Erin E., Kline, Melissa, Levelt, Claartje, Lew‐Williams, Casey, Nazzi, Thierry, Panneton, Robin, Rabagliati, Hugh, Soderstrom, Melanie, Sullivan, Jessica, Waxman, Sandra, and Yurovsky, Daniel
- Subjects
CHILD psychology, COGNITION, INFANT development, INTERPROFESSIONAL relations, RESEARCH, RESEARCH evaluation, SOCIAL psychology, THEORY
- Abstract
The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research, especially with infant participants, also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
34. Perceiving speech rhythm in music: Listeners classify instrumental songs according to language of origin
- Author
Hannon, Erin E.
- Published
- 2009
- Full Text
- View/download PDF
35. Exaggeration of Language-Specific Rhythms in English and French Children's Songs.
- Author
Hannon, Erin E., Lévêque, Yohana, Nave, Karli M., Trehub, Sandra E., François, Clément, and Slevc, L. Robert
- Subjects
MUSICAL meter & rhythm, CHILDREN'S songs, MUSIC & language, MUSICAL notation, ORAL communication
- Abstract
The available evidence indicates that the music of a culture reflects the speech rhythm of the prevailing language. The normalized pairwise variability index (nPVI) is a measure of durational contrast between successive events that can be applied to vowels in speech and to notes in music. Music-language parallels may have implications for the acquisition of language and music, but it is unclear whether native-language rhythms are reflected in children's songs. In general, children's songs exhibit greater rhythmic regularity than adults' songs, in line with their caregiving goals and frequent coordination with rhythmic movement. Accordingly, one might expect lower nPVI values (i.e., lower variability) for such songs regardless of culture. In addition to their caregiving goals, children's songs may serve an intuitive didactic function by modeling culturally relevant content and structure for music and language. One might therefore expect pronounced rhythmic parallels between children's songs and language of origin. To evaluate these predictions, we analyzed a corpus of 269 English and French songs from folk and children's music anthologies. As in prior work, nPVI values were significantly higher for English than for French children's songs. For folk songs (i.e., songs not for children), the difference in nPVI for English and French songs was small and in the expected direction but non-significant. We subsequently collected ratings from American and French monolingual and bilingual adults, who rated their familiarity with each song, how much they liked it, and whether or not they thought it was a children's song. Listeners gave higher familiarity and liking ratings to songs from their own culture, and they gave higher familiarity and preference ratings to children's songs than to other songs. Although higher child-directedness ratings were given to children's than to folk songs, French listeners drove this effect, and their ratings were uniquely predicted by nPVI. Together, these findings suggest that language-based rhythmic structures are evident in children's songs, and that listeners expect exaggerated language-based rhythms in children's songs. The implications of these findings for enculturation processes and for the acquisition of music and language are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
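The nPVI named in the record above is, as commonly defined in the speech-rhythm literature (the paper's exact formulation is not reproduced in this record), computed over a sequence of m successive durations d_1, ..., d_m (vowel durations in speech, note durations in music):

\[ \mathrm{nPVI} = \frac{100}{m-1} \sum_{k=1}^{m-1} \left| \frac{d_k - d_{k+1}}{(d_k + d_{k+1})/2} \right| \]

Higher values indicate greater contrast between adjacent durations, which is why English material tends to score higher than French, as the abstract reports for children's songs.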
36. Tapping to a Slow Tempo in the Presence of Simple and Complex Meters Reveals Experience-Specific Biases for Processing Music.
- Author
Ullal-Gupta, Sangeeta, Hannon, Erin E., and Snyder, Joel S.
- Subjects
*MUSIC, *RHYTHM, *ANALYSIS of variance, *COGNITION, *RELAXATION for health, *SENSORY perception
- Abstract
Musical meters vary considerably across cultures, yet relatively little is known about how culture-specific experience influences metrical processing. In Experiment 1, we compared American and Indian listeners' synchronous tapping to slow sequences. Inter-tone intervals contained silence or to-be-ignored rhythms that were designed to induce a simple meter (familiar to Americans and Indians) or a complex meter (familiar only to Indians). A subset of trials contained an abrupt switch from one rhythm to another to assess the disruptive effects of contradicting the initially implied meter. In the unfilled condition, both groups tapped earlier than the target and showed large tap-tone asynchronies (measured in relative phase). When inter-tone intervals were filled with simple-meter rhythms, American listeners tapped later than targets, but their asynchronies were smaller and declined more rapidly. Likewise, asynchronies rose sharply following a switch away from simple-meter but not from complex-meter rhythm. By contrast, Indian listeners performed similarly across all rhythm types, with asynchronies rapidly declining over the course of complex- and simple-meter trials. For these listeners, a switch from either simple or complex meter increased asynchronies. Experiment 2 tested American listeners but doubled the duration of the synchronization phase prior to (and after) the switch. Here, compared with simple meters, complex-meter rhythms elicited larger asynchronies that declined at a slower rate; however, asynchronies increased after the switch for all conditions. Our results provide evidence that ease of meter processing depends to a great extent on the amount of experience with specific meters. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
37. The genetic basis of music ability.
- Author
Tan, Yi Ting, McPherson, Gary E., Peretz, Isabelle, Berkovic, Samuel F., Wilson, Sarah J., Münte, Thomas F., and Hannon, Erin E.
- Subjects
MUSIC, CULTURAL property, CIVIL society, SENSORY perception, ABILITY
- Abstract
Music is an integral part of the cultural heritage of all known human societies, with the capacity for music perception and production present in most people. Researchers generally agree that both genetic and environmental factors contribute to the broader realization of music ability, with the degree of music aptitude varying, not only from individual to individual, but across various components of music ability within the same individual. While environmental factors influencing music development and expertise have been well investigated in the psychological and music literature, the interrogation of possible genetic influences has not progressed at the same rate. Recent advances in genetic research offer fertile ground for exploring the genetic basis of music ability. This paper begins with a brief overview of behavioral and molecular genetic approaches commonly used in human genetic analyses, and then critically reviews the key findings of genetic investigations of the components of music ability. Some promising and converging findings have emerged, with several loci on chromosome 4 implicated in singing and music perception, and certain loci on chromosome 8q implicated in absolute pitch and music perception. The gene AVPR1A on chromosome 12q has also been implicated in music perception, music memory, and music listening, whereas SLC6A4 on chromosome 17q has been associated with music memory and choir participation. Replication of these results in alternate populations and with larger samples is warranted to confirm the findings. Through increased research efforts, a clearer picture of the genetic mechanisms underpinning music ability will hopefully emerge. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
38. An evolutionary theory of music needs to care about developmental timing.
- Author
Hannon, Erin E., Crittenden, Alyssa N., Snyder, Joel S., and Nave, Karli M.
- Subjects
*MUSIC theory, *EVOLUTIONARY theories, *SOCIAL learning, *SOCIAL dynamics, *INFANTS
- Abstract
Both target papers cite evidence from infancy and early childhood to support the notion of human musicality as a somewhat static suite of capacities; however, in our view they do not adequately acknowledge the critical role of developmental timing, the acquisition process, or the dynamics of social learning, especially during later periods of development such as middle childhood. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
39. Linking prenatal experience to the emerging musical mind.
- Author
Ullal-Gupta, Sangeeta, Vanden Bosch der Nederlanden, Christina M., Tichko, Parker, Lahav, Amir, and Hannon, Erin E.
- Subjects
HUMAN Development Index, FETAL behavior, FETAL development, AUDITORY adaptation, MUSICAL meter & rhythm
- Abstract
The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins already within the mother's womb during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
40. Auditory superiority for perceiving the beat level but not measure level in music.
- Author
Nave-Blodgett, Jessica E., Snyder, Joel S., and Hannon, Erin E.
- Abstract
Auditory perception of time is superior to visual perception, both for simple intervals and beat-based musical rhythms. To what extent does this auditory advantage characterize perception of different hierarchical levels of musical meter, and how is it related to lifelong experience with music? We paired musical excerpts with auditory and visual metronomes that matched or mismatched the musical meter at the beat level (faster) and measure level (slower) and obtained fit ratings from adults and children (5-10 years). Adults exhibited an auditory advantage in this task for the beat level, but not for the measure level. Children also displayed an auditory advantage that increased with age for the beat level. In both modalities, their overall sensitivity to beat increased with age, but they were not sensitive to measure-level matching at any age. More musical training was related to enhanced sensitivity in both auditory and visual modalities for measure-level matching in adults and beat-level matching in children. These findings provide evidence for auditory superiority of beat perception across development, and they suggest that beat and meter perception develop quite gradually and rely on lifelong acquisition of musical knowledge. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
41. Hierarchical beat perception develops throughout childhood and adolescence and is enhanced in those with musical training.
- Author
Nave-Blodgett, Jessica E., Snyder, Joel S., and Hannon, Erin E.
- Abstract
Most music is temporally organized within a metrical hierarchy, having nested periodic patterns that give rise to the experience of stronger (downbeat) and weaker (upbeat) events. Musical meter presumably makes it possible to dance, sing, and play instruments in synchrony with others. It is nevertheless unclear whether or not listeners perceive multiple levels of periodicity simultaneously, and if they do, when and how they learn to do this. We tested children, adolescents, and musically trained and untrained adults with a new meter perception task. We presented excerpts of human-performed music paired with metronomes that matched or mismatched the metrical structure of the music at 2 hierarchical levels (beat and measure), and asked listeners to provide a rating of fit of metronome and music. Fit ratings suggested that adults with and without musical training were sensitive to both levels of meter simultaneously, but ratings were more strongly influenced by beat-level than by measure-level synchrony. Sensitivity to two simultaneous levels of meter was not evident in children or adolescents. Sensitivity to the beat alone was apparent in the youngest children and increased with age, whereas sensitivity to the measure alone was not present in younger children (5- to 8-year-olds). These findings suggest a prolonged period of development and refinement of hierarchical beat perception and surprisingly weak overall ability to attend to 2 beat levels at the same time across all ages. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. Testing the relationship between preferences for infant-directed speech and vocabulary development: A multi-lab study.
- Author
Soderstrom M, Rocha-Hidalgo J, Muñoz LE, Bochynska A, Werker JF, Skarabela B, Seidl A, Ryjova Y, Rennels JL, Potter CE, Paulus M, Ota M, Olesen NM, Nave KM, Mayor J, Martin A, Machon LC, Lew-Williams C, Ko ES, Kim H, Kartushina N, Kammermeier M, Jessop A, Hay JF, Havron N, Hannon EE, Kiley Hamlin J, Gonzalez-Gomez N, Gampe A, Fritzsche T, Frank MC, Durrant S, Davies C, Cashon C, Byers-Heinlein K, Boyce V, Black AK, Bergmann C, Anderson L, Alshakhori MK, Al-Hoorie AH, and Tsui ASM
- Abstract
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants' preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. In neither preregistered analyses with North American and UK English, nor exploratory analyses with a larger sample did we find evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.
- Published
- 2024
- Full Text
- View/download PDF