108 results on "Trehub SE"
Search Results
2. Analyzing conversations between mothers and their hearing and deaf adolescents.
- Author
-
Nohara M, MacKay S, and Trehub SE
- Published
- 1995
3. Detection of pitch errors in well-known songs.
- Author
-
Weiss MW and Trehub SE
- Abstract
We examined pitch-error detection in well-known songs sung with or without meaningful lyrics. In Experiment 1, adults heard the initial phrase of familiar songs sung with lyrics or repeating syllables (la) and judged whether they heard an out-of-tune note. Half of the renditions had a single pitch error (50 or 100 cents; a brief illustration of the cent unit follows this entry); half were in tune. Listeners were poorer at pitch-error detection in songs with lyrics. In Experiment 2, within-note pitch fluctuations in the same performances were eliminated by auto-tuning. Again, pitch-error detection was worse for renditions with lyrics (50 cents), suggesting adverse effects of semantic processing. In Experiment 3, songs were sung with repeating syllables or scat syllables to ascertain the role of phonetic variability. Performance was poorer for scat than for repeating syllables, indicating adverse effects of phonetic variability, but overall performance exceeded that in Experiment 1. In Experiment 4, listeners evaluated songs in all styles (repeating syllables, scat, lyrics) within the same session. Performance was best with repeating syllables (50 cents) and did not differ between scat and lyric versions. In short, tracking the pitches of highly familiar songs was impaired by the presence of words, an impairment stemming primarily from phonetic variability rather than interference from semantic processing., (© The Author(s) 2022.)
- Published
- 2023
- Full Text
- View/download PDF
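The cent unit in the abstract above can be made concrete with a small sketch (an editorial illustration, not part of the record or the authors' materials; the 440 Hz reference tone is an arbitrary choice). One cent is 1/100 of an equal-tempered semitone, so a frequency ratio f2/f1 corresponds to 1200 * log2(f2/f1) cents.

```python
# Illustrative only: the cent measure used in the abstract above
# (pitch errors of 50 or 100 cents).
import math

def cents(f1_hz: float, f2_hz: float) -> float:
    """Signed pitch difference between two frequencies, in cents."""
    return 1200.0 * math.log2(f2_hz / f1_hz)

def shift_by_cents(f_hz: float, n_cents: float) -> float:
    """Frequency obtained by shifting f_hz by n_cents."""
    return f_hz * 2.0 ** (n_cents / 1200.0)

if __name__ == "__main__":
    a4 = 440.0                                 # reference tone (assumption)
    print(round(shift_by_cents(a4, 50), 2))    # ~452.89 Hz: a 50-cent error
    print(round(shift_by_cents(a4, 100), 2))   # ~466.16 Hz: a full semitone
    print(round(cents(a4, 466.16), 1))         # ~100.0 cents
```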
4. Acoustic regularities in infant-directed speech and song across cultures.
- Author
-
Hilton CB, Moser CJ, Bertolo M, Lee-Rubin H, Amir D, Bainbridge CM, Simson J, Knox D, Glowacki L, Alemu E, Galbarczyk A, Jasienska G, Ross CT, Neff MB, Martin A, Cirelli LK, Trehub SE, Song J, Kim M, Schachner A, Vardy TA, Atkinson QD, Salenius A, Andelin J, Antfolk J, Madhivanan P, Siddaiah A, Placek CD, Salali GD, Keestra S, Singh M, Collins SA, Patton JQ, Scaff C, Stieglitz J, Cutipa SC, Moya C, Sagar RR, Anyawire M, Mabulla A, Wood BM, Krasnow MM, and Mehr SA
- Subjects
- Humans, Adult, Infant, Speech, Language, Acoustics, Voice, Music
- Abstract
When interacting with infants, humans often alter their speech and song in ways thought to support communication. Theories of human child-rearing, informed by data on vocal signalling across species, predict that such alterations should appear globally. Here, we show acoustic differences between infant-directed and adult-directed vocalizations across cultures. We collected 1,615 recordings of infant- and adult-directed speech and song produced by 410 people in 21 urban, rural and small-scale societies. Infant-directedness was reliably classified from acoustic features only, with acoustic profiles of infant-directedness differing across language and music but in consistent ways. We then studied listener sensitivity to these acoustic features. We played the recordings to 51,065 people from 187 countries, recruited via an English-language website, who guessed whether each vocalization was infant-directed. Their intuitions were more accurate than chance, predictable in part by common sets of acoustic features and robust to the effects of linguistic relatedness between vocalizer and listener. These findings inform hypotheses of the psychological functions and evolution of human communication., (© 2022. The Author(s), under exclusive licence to Springer Nature Limited.)
- Published
- 2022
- Full Text
- View/download PDF
5. Challenging infant-directed singing as a credible signal of maternal attention.
- Author
-
Trehub SE
- Subjects
- Arousal, Attention, Female, Humans, Infant, Mothers, Music, Singing
- Abstract
I challenge Mehr et al.'s contention that ancestral mothers were reluctant to provide all the attention demanded by their infants. The societies in which music emerged likely involved foraging mothers who engaged in extensive infant carrying, feeding, and soothing. Accordingly, their singing was multimodal, its rhythms aligned with maternal movements, with arousal regulatory consequences for singers and listeners.
- Published
- 2021
- Full Text
- View/download PDF
6. Enhanced Memory for Vocal Melodies in Autism Spectrum Disorder and Williams Syndrome.
- Author
-
Weiss MW, Sharda M, Lense M, Hyde KL, and Trehub SE
- Subjects
- Adolescent, Adult, Auditory Perception, Child, Humans, Autism Spectrum Disorder complications, Music, Voice, Williams Syndrome complications
- Abstract
Adults and children with typical development (TD) remember vocal melodies (without lyrics) better than instrumental melodies, which is attributed to the biological and social significance of human vocalizations. Here we asked whether children with autism spectrum disorder (ASD), who have persistent difficulties with communication and social interaction, and adolescents and adults with Williams syndrome (WS), who are highly sociable, even indiscriminately friendly, exhibit a memory advantage for vocal melodies like that observed in individuals with TD. We tested 26 children with ASD, 26 adolescents and adults with WS of similar mental age, and 26 children with TD on their memory for vocal and instrumental (piano, marimba) melodies. After exposing them to 12 unfamiliar folk melodies with different timbres, we required them to indicate whether each of 24 melodies (half heard previously) was old (heard before) or new (not heard before) during an unexpected recognition test. Although the groups successfully distinguished the old from the new melodies, they differed in overall memory. Nevertheless, they exhibited a comparable advantage for vocal melodies. In short, individuals with ASD and WS show enhanced processing of socially significant auditory signals in the context of music. LAY SUMMARY: Typically developing children and adults remember vocal melodies better than instrumental melodies. In this study, we found that children with autism spectrum disorder, who have severe social processing deficits, and children and adults with Williams syndrome, who are highly sociable, exhibit comparable memory advantages for vocal melodies. The results have implications for musical interventions with these populations., (© 2021 International Society for Autism Research, Wiley Periodicals LLC.)
- Published
- 2021
- Full Text
- View/download PDF
7. Effects of Maternal Singing Style on Mother-Infant Arousal and Behavior.
- Author
-
Cirelli LK, Jurewicz ZB, and Trehub SE
- Subjects
- Arousal, Emotions, Female, Humans, Infant, Play and Playthings, Mothers, Singing
- Abstract
Mothers around the world sing to infants, presumably to regulate their mood and arousal. Lullabies and playsongs differ stylistically and have distinctive goals. Mothers sing lullabies to soothe and calm infants and playsongs to engage and excite infants. In this study, mothers repeatedly sang Twinkle, Twinkle, Little Star to their infants (n = 30 dyads), alternating between soothing and playful renditions. Infant attention and mother-infant arousal (i.e., skin conductivity) were recorded continuously. During soothing renditions, mother and infant arousal decreased below initial levels as the singing progressed. During playful renditions, maternal and infant arousal remained stable. Moreover, infants exhibited greater attention to mother during playful renditions than during soothing renditions. Mothers' playful renditions were faster, higher in pitch, louder, and characterized by greater pulse clarity than their soothing renditions. Mothers also produced more energetic rhythmic movements during their playful renditions. These findings highlight the contrastive nature and consequences of lullabies and playsongs.
- Published
- 2020
- Full Text
- View/download PDF
8. Familiar songs reduce infant distress.
- Author
-
Cirelli LK and Trehub SE
- Subjects
- Adult, Auditory Perception, Female, Humans, Infant, Infant Behavior psychology, Male, Attention physiology, Emotions, Music psychology, Parents psychology, Speech, Stress, Psychological psychology
- Abstract
Parents commonly vocalize to infants to mitigate their distress, especially when holding them is not possible. Here we examined the relative efficacy of parents' speech and singing (familiar and unfamiliar songs) in alleviating the distress of 8- and 10-month-old infants (n = 68 per age group). Parent-infant dyads participated in 3 trials of the Still-Face procedure, featuring a 2-min Play Phase, a Still-Face Phase (parents immobile and unresponsive for 1 min or until infants became visibly distressed), and a 2-min Reunion Phase in which caregivers attempted to reverse infant distress by (a) singing a highly familiar song, (b) singing an unfamiliar song, or (c) expressive talking (order counterbalanced across dyads). In the Reunion Phase, talking led to increased negative affect in both age groups, in contrast to singing familiar or unfamiliar songs, which increased infant attention to parent and decreased negative affect. The favorable consequences were greatest for familiar songs, which also generated increased smiling. Skin conductance recorded from a subset of infants (n = 36 younger, 41 older infants) revealed that arousal levels were highest for the talking reunion, lowest for unfamiliar songs, and intermediate for familiar songs. The arousal effects, considered in conjunction with the behavioral effects, confirm that songs are more effective than speech at mitigating infant distress. We suggest, moreover, that familiar songs generate higher infant arousal than unfamiliar songs because they evoke excitement, reflected in modestly elevated arousal as well as pleasure, in contrast to more subdued responses to unfamiliar songs. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
- Published
- 2020
- Full Text
- View/download PDF
9. Development of consonance preferences in Western listeners.
- Author
-
Weiss MW, Cirelli LK, McDermott JH, and Trehub SE
- Subjects
- Acoustic Stimulation, Adult, Child, Female, Humans, Male, Auditory Perception physiology, Emotions physiology, Music psychology
- Abstract
Many scholars consider preferences for consonance, as defined by Western music theorists, to be based primarily on biological factors, while others emphasize experiential factors, notably the nature of musical exposure. Cross-cultural experiments suggest that consonance preferences are shaped by musical experience, implying that preferences should emerge or become stronger over development for individuals in Western cultures. However, little is known about this developmental trajectory. We measured preferences for the consonance of simultaneous sounds and related acoustic properties in children and adults to characterize their developmental course and dependence on musical experience. In Study 1, adults and children 6 to 10 years of age rated their liking of simultaneous tone combinations (dyads) and affective vocalizations. Preferences for consonance increased with age and were predicted by changing preferences for harmonicity (the degree to which a sound's frequencies are multiples of a common fundamental frequency) but not by evaluations of beating (fluctuations in amplitude that occur when frequencies are close but not identical, producing the sensation of acoustic roughness). (A toy computation of harmonicity and beating follows this entry.) In Study 2, musically trained adults and 10-year-old children also rated the same stimuli. Age and musical training were associated with enhanced preference for consonance. Both measures of experience were associated with an enhanced preference for harmonicity, but were unrelated to evaluations of beating stimuli. The findings are consistent with cross-cultural evidence and the effects of musicianship in Western adults in linking Western musical experience to preferences for consonance and harmonicity. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
- Published
- 2020
- Full Text
- View/download PDF
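As a rough aid to the two acoustic terms defined in the abstract above, here is a toy sketch (an editorial illustration, not from the study; the GCD-on-a-grid approximation of a common fundamental and the example frequencies are assumptions). Beating is approximated by the difference between two nearby frequencies, and harmonicity by how well a dyad's frequencies share a low common fundamental.

```python
# Illustrative only: simplified versions of "beating" and "harmonicity"
# as described in the abstract above.
from math import gcd

def beat_rate_hz(f1_hz: float, f2_hz: float) -> float:
    """Amplitude-fluctuation (beat) rate for two simultaneous pure tones."""
    return abs(f1_hz - f2_hz)

def common_fundamental_hz(f1_hz: float, f2_hz: float,
                          resolution_hz: float = 1.0) -> float:
    """Approximate a common fundamental by taking a GCD on a coarse Hz grid."""
    a = round(f1_hz / resolution_hz)
    b = round(f2_hz / resolution_hz)
    return gcd(a, b) * resolution_hz

if __name__ == "__main__":
    # Perfect fifth (consonant): the tones share a low common fundamental.
    print(common_fundamental_hz(440.0, 660.0))  # 220.0 Hz
    # Near-unison mistuning (rough): slow beating dominates the percept.
    print(beat_rate_hz(440.0, 446.0))           # 6.0 Hz beat rate
```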
10. Dancing to Metallica and Dora: Case Study of a 19-Month-Old.
- Author
-
Cirelli LK and Trehub SE
- Abstract
Rhythmic movement to music, whether deliberate (e.g., dancing) or inadvertent (e.g., foot-tapping), is ubiquitous. Although parents commonly report that infants move rhythmically to music, especially to familiar music in familiar environments, there has been little systematic study of this behavior. As a preliminary exploration of infants' movement to music in their home environment, we studied V, an infant who began moving rhythmically to music at 6 months of age. Our primary goal was to generate testable hypotheses about movement to music in infancy. Across nine sessions, beginning when V was almost 19 months of age and ending 8 weeks later, she was video-recorded by her mother during the presentation of 60-s excerpts from two familiar and two unfamiliar songs presented at three tempos: the original song tempo as well as faster and slower versions. V exhibited a number of repeated dance movements such as head-bobbing, arm-pumping, torso twists, and bouncing. She danced most to Metallica's Now That We're Dead, a recording that her father played daily in V's presence, often dancing with her while it played. Its high pulse clarity, in conjunction with familiarity, may have increased V's propensity to dance, as reflected in less dancing to familiar music with low pulse clarity and to unfamiliar music with high pulse clarity. V moved faster to faster music but only for unfamiliar music, perhaps because arousal drove her movement to familiar music. Her movement to music was positively correlated with smiling, highlighting the pleasurable nature of the experience. Rhythmic movement to music may have enhanced her pleasure, and the joy of listening may have promoted her movement. On the basis of behavior observed in this case study, we propose a scaled-up study to obtain definitive evidence about the effects of song familiarity and specific musical features on infant rhythmic movement, the developmental trajectory of dance skills, and the typical range of variation in such skills.
- Published
- 2019
- Full Text
- View/download PDF
11. Rhythm and melody as social signals for infants.
- Author
-
Cirelli LK, Trehub SE, and Trainor LJ
- Abstract
Infants typically experience music through social interactions with others. One such experience involves caregivers singing to infants while holding and bouncing them rhythmically. These highly social interactions shape infant music perception and may also influence social cognition and behavior. Moving in time with others (interpersonal synchrony) can direct infants' social preferences and prosocial behavior. Infants also show social preferences and selective prosociality toward singers of familiar, socially learned melodies. Here, we discuss recent studies of the influence of musical engagement on infant social cognition and behavior, highlighting the importance of rhythmic movement and socially relevant melodies., (© 2018 New York Academy of Sciences.)
- Published
- 2018
- Full Text
- View/download PDF
12. Precursors to the performing arts in infancy and early childhood.
- Author
-
Trehub SE and Cirelli LK
- Subjects
- Affect, Child Behavior, Child, Preschool, Female, Humans, Infant, Male, Auditory Perception, Child Development physiology, Dancing, Music, Psychomotor Performance physiology
- Abstract
Across cultures, aspects of music and dance contribute to everyday life in a variety of ways that do not depend on artistry, aesthetics, or expertise. In this chapter, we focus on precursors to music and dance that are evident in infancy: the underlying perceptual abilities, parent-infant musical interactions that are motivated by nonmusical goals, the consequences of such interactions for mood regulation and social regulation, and the emergence of rudimentary singing and rhythmic movement to music. These precursors to music and dance lay the groundwork for our informal engagement with music throughout life and its continuing effects on mood regulation, affiliation, and well-being., (© 2018 Elsevier B.V. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
13. Children's and adults' perception of questions and statements from terminal fundamental frequency contours.
- Author
-
Saindon MR, Cirelli LK, Schellenberg EG, van Lieshout P, and Trehub SE
- Subjects
- Acoustic Stimulation, Adult, Age Factors, Audiometry, Speech, Child, Child Behavior, Child Development, Female, Humans, Male, Recognition, Psychology, Young Adult, Cues, Discrimination, Psychological, Pitch Discrimination, Speech Acoustics, Speech Perception, Voice Quality
- Abstract
The present study compared children's and adults' identification and discrimination of declarative questions and statements on the basis of terminal cues alone. Children (8-11 years, n = 41) and adults (n = 21) judged utterances as statements or questions from sentences with natural statement and question endings and with manipulated endings that featured intermediate fundamental frequency (F0) values. The same adults and a different sample of children (n = 22) were also tested on their discrimination of the utterances. Children's judgments shifted more gradually across categories than those of adults, but their category boundaries were comparable. In the discrimination task, adults found cross-boundary comparisons more salient than within-boundary comparisons. Adults' performance on the identification and discrimination tasks is consistent with but not definitive regarding categorical perception of statements and questions. Children, by contrast, discriminated the cross-boundary comparisons no better than other comparisons. The findings indicate age-related sharpening in the perception of statements and questions based on terminal F0 cues and the gradual emergence of distinct perceptual categories. (A toy terminal-contour decision rule follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
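The terminal-F0 cue described in the abstract above can be illustrated with a toy decision rule (an editorial sketch, not the study's analysis; it assumes an already extracted F0 contour, and the 20% terminal window and one-semitone rise threshold are invented for illustration).

```python
# Illustrative only: label an utterance "question" if its terminal F0 region
# rises relative to the preceding material, on a log (semitone) scale.
import numpy as np

def classify_terminal_contour(f0_hz: np.ndarray, terminal_frac: float = 0.2,
                              rise_threshold_semitones: float = 1.0) -> str:
    # Keep only voiced frames (finite, positive F0 estimates).
    voiced = f0_hz[np.isfinite(f0_hz) & (f0_hz > 0)]
    if voiced.size < 4:
        return "undetermined"
    split = int(round(voiced.size * (1.0 - terminal_frac)))
    body, tail = voiced[:split], voiced[split:]
    # Compare medians in semitones, where equal ratios are equal steps.
    rise = 12.0 * np.log2(np.median(tail) / np.median(body))
    return "question" if rise >= rise_threshold_semitones else "statement"

if __name__ == "__main__":
    falling = np.concatenate([np.linspace(220, 200, 40), np.linspace(200, 170, 10)])
    rising = np.concatenate([np.linspace(200, 210, 40), np.linspace(210, 300, 10)])
    print(classify_terminal_contour(falling))  # statement
    print(classify_terminal_contour(rising))   # question
```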
14. Imitation of Non-Speech Oral Gestures by 8-Month-Old Infants.
- Author
-
Diepstra H, Trehub SE, Eriks-Brophy A, and van Lieshout PH
- Subjects
- Acoustic Stimulation, Adult, Female, Humans, Infant, Male, Photic Stimulation, Tongue Habits, Child Language, Gestures, Imitative Behavior, Infant Behavior, Lip physiology, Tongue physiology
- Abstract
This study investigates the oral gestures of 8-month-old infants in response to audiovisual presentation of lip and tongue smacks. Infants exhibited more lip gestures than tongue gestures following adult lip smacks and more tongue gestures than lip gestures following adult tongue smacks. The findings, which are consistent with predictions from Articulatory Phonology, imply that 8-month-old infants are capable of producing goal-directed oral gestures by matching the articulatory organ of an adult model.
- Published
- 2017
- Full Text
- View/download PDF
15. Children's identification of questions from rising terminal pitch.
- Author
-
Saindon MR, Trehub SE, Schellenberg EG, and van Lieshout P
- Subjects
- Adult, Attention, Child, Child, Preschool, Female, Humans, Male, Sound Spectrography, Cues, Language Development, Linguistics, Semantics, Speech Acoustics, Speech Perception
- Abstract
Young children are slow to master conventional intonation patterns in their yes/no questions, which may stem from imperfect understanding of the links between terminal pitch contours and pragmatic intentions. In Experiment 1, five- to ten-year-old children and adults were required to judge utterances as questions or statements on the basis of intonation alone. Children eight years of age or younger performed above chance levels but less accurately than adult listeners. To ascertain whether the verbal content of utterances interfered with young children's attention to the relevant acoustic cues, low-pass filtered versions of the same utterances were presented to children and adults in Experiment 2. Low-pass filtering reduced performance comparably for all age groups, perhaps because such filtering reduced the salience of critical pitch cues. Young children's difficulty in differentiating declarative questions from statements is not attributable to basic perceptual difficulties but rather to absent or unstable intonation categories.
- Published
- 2016
- Full Text
- View/download PDF
16. Pupils dilate for vocal or familiar music.
- Author
-
Weiss MW, Trehub SE, Schellenberg EG, and Habashi P
- Subjects
- Adult, Female, Humans, Male, Young Adult, Auditory Perception physiology, Music, Pupil physiology, Recognition, Psychology physiology, Voice
- Abstract
Previous research reveals that vocal melodies are remembered better than instrumental renditions. Here we explored the possibility that the voice, as a highly salient stimulus, elicits greater arousal than nonvocal stimuli, resulting in greater pupil dilation for vocal than for instrumental melodies. We also explored the possibility that pupil dilation indexes memory for melodies. We tracked pupil dilation during a single exposure to 24 unfamiliar folk melodies (half sung to la la, half piano) and during a subsequent recognition test in which the previously heard melodies were intermixed with 24 novel melodies (half sung, half piano) from the same corpus. Pupil dilation was greater for vocal melodies than for piano melodies in the exposure phase and in the test phase. It was also greater for previously heard melodies than for novel melodies. Our findings provide the first evidence that pupillometry can be used to measure recognition of stimuli that unfold over several seconds. They also provide the first evidence of enhanced arousal to vocal melodies during encoding and retrieval, thereby supporting the more general notion of the voice as a privileged signal. (PsycINFO Database Record, ((c) 2016 APA, all rights reserved).)
- Published
- 2016
- Full Text
- View/download PDF
17. Exaggeration of Language-Specific Rhythms in English and French Children's Songs.
- Author
-
Hannon EE, Lévêque Y, Nave KM, and Trehub SE
- Abstract
The available evidence indicates that the music of a culture reflects the speech rhythm of the prevailing language. The normalized pairwise variability index (nPVI) is a measure of durational contrast between successive events that can be applied to vowels in speech and to notes in music (a short nPVI computation follows this entry). Music-language parallels may have implications for the acquisition of language and music, but it is unclear whether native-language rhythms are reflected in children's songs. In general, children's songs exhibit greater rhythmic regularity than adults' songs, in line with their caregiving goals and frequent coordination with rhythmic movement. Accordingly, one might expect lower nPVI values (i.e., lower variability) for such songs regardless of culture. In addition to their caregiving goals, children's songs may serve an intuitive didactic function by modeling culturally relevant content and structure for music and language. One might therefore expect pronounced rhythmic parallels between children's songs and language of origin. To evaluate these predictions, we analyzed a corpus of 269 English and French songs from folk and children's music anthologies. As in prior work, nPVI values were significantly higher for English than for French children's songs. For folk songs (i.e., songs not for children), the difference in nPVI for English and French songs was small and in the expected direction but non-significant. We subsequently collected ratings from American and French monolingual and bilingual adults, who rated their familiarity with each song, how much they liked it, and whether or not they thought it was a children's song. Listeners gave higher familiarity and liking ratings to songs from their own culture, and they gave higher familiarity and preference ratings to children's songs than to other songs. Although higher child-directedness ratings were given to children's than to folk songs, French listeners drove this effect, and their ratings were uniquely predicted by nPVI. Together, these findings suggest that language-based rhythmic structures are evident in children's songs, and that listeners expect exaggerated language-based rhythms in children's songs. The implications of these findings for enculturation processes and for the acquisition of music and language are discussed.
- Published
- 2016
- Full Text
- View/download PDF
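For reference, the nPVI mentioned in the abstract above is commonly computed as 100/(m-1) times the sum over adjacent durations of |d_k - d_{k+1}| / ((d_k + d_{k+1})/2). The sketch below (an editorial illustration, not the authors' analysis code; the example durations are hypothetical) implements that formula.

```python
# Illustrative only: the normalized pairwise variability index (nPVI)
# for a sequence of durations (vowels in speech, notes in music).
from typing import Sequence

def npvi(durations: Sequence[float]) -> float:
    """nPVI = 100/(m-1) * sum of |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)."""
    if len(durations) < 2:
        raise ValueError("nPVI needs at least two durations")
    pairs = zip(durations[:-1], durations[1:])
    terms = [abs(a - b) / ((a + b) / 2.0) for a, b in pairs]
    return 100.0 * sum(terms) / len(terms)

if __name__ == "__main__":
    # Hypothetical note durations (seconds): contrastive vs. even rhythms.
    print(round(npvi([0.75, 0.25, 0.75, 0.25]), 1))  # 100.0 -> high durational contrast
    print(round(npvi([0.5, 0.5, 0.5, 0.5]), 1))      # 0.0   -> perfectly even
```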
18. Cross-cultural convergence of musical features.
- Author
-
Trehub SE
- Published
- 2015
- Full Text
- View/download PDF
19. Cross-cultural perspectives on music and musicality.
- Author
-
Trehub SE, Becker J, and Morley I
- Subjects
- Human Activities, Humans, Social Behavior, Cross-Cultural Comparison, Music
- Abstract
Musical behaviours are universal across human populations and, at the same time, highly diverse in their structures, roles and cultural interpretations. Although laboratory studies of isolated listeners and music-makers have yielded important insights into sensorimotor and cognitive skills and their neural underpinnings, they have revealed little about the broader significance of music for individuals, peer groups and communities. This review presents a sampling of musical forms and coordinated musical activity across cultures, with the aim of highlighting key similarities and differences. The focus is on scholarly and everyday ideas about music (what it is and where it originates) as well as the antiquity of music and the contribution of musical behaviour to ritual activity, social organization, caregiving and group cohesion. Synchronous arousal, action synchrony and imitative behaviours are among the means by which music facilitates social bonding. The commonalities and differences in musical forms and functions across cultures suggest new directions for ethnomusicology, music cognition and neuroscience, and a pivot away from the predominant scientific focus on instrumental music in the Western European tradition., (© 2015 The Author(s) Published by the Royal Society. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
20. Without it no music: cognition, biology and evolution of musicality.
- Author
-
Honing H, ten Cate C, Peretz I, and Trehub SE
- Subjects
- Adaptation, Physiological, Culture, Humans, Biological Evolution, Cognition physiology, Music
- Abstract
Musicality can be defined as a natural, spontaneously developing trait based on and constrained by biology and cognition. Music, by contrast, can be defined as a social and cultural construct based on that very musicality. One critical challenge is to delineate the constituent elements of musicality. What biological and cognitive mechanisms are essential for perceiving, appreciating and making music? Progress in understanding the evolution of music cognition depends upon adequate characterization of the constituent mechanisms of musicality and the extent to which they are present in non-human species. We argue for the importance of identifying these mechanisms and delineating their functions and developmental course, as well as suggesting effective means of studying them in human and non-human animals. It is virtually impossible to underpin the evolutionary role of musicality as a whole, but a multicomponent perspective on musicality that emphasizes its constituent capacities, development and neural cognitive specificity is an excellent starting point for a research programme aimed at illuminating the origins and evolution of musical behaviour as an autonomous trait., (© 2015 The Author(s) Published by the Royal Society. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
21. Enhanced processing of vocal melodies in childhood.
- Author
-
Weiss MW, Schellenberg EG, Trehub SE, and Dawber EJ
- Subjects
- Acoustic Stimulation methods, Child, Child Development, Child, Preschool, Female, Humans, Male, Recognition, Psychology, Auditory Perception physiology, Music psychology, Voice
- Abstract
Music cognition is typically studied with instrumental stimuli. Adults remember melodies better, however, when they are presented in a biologically significant timbre (i.e., the human voice) than in various instrumental timbres (Weiss, Trehub, & Schellenberg, 2012). We examined the impact of vocal timbre on children's processing of melodies. In Study 1, 9- to 11-year-olds listened to 16 unfamiliar folk melodies (4 each of voice, piano, banjo, or marimba). They subsequently listened to the same melodies and 16 timbre-matched foils, and judged whether each melody was old or new. Vocal melodies were recognized better than instrumental melodies, which did not differ from one another, and the vocal advantage was consistent across age. In Study 2, 5- to 6-year-olds and 7- to 8-year-olds were tested with a simplified design that included only vocal and piano melodies. Both age groups successfully differentiated old from new melodies, but memory was more accurate for the older group. The older children recognized vocal melodies better than piano melodies, whereas the younger children tended to label vocal melodies as old whether they were old or new. The results provide the first evidence of differential processing of vocal and instrumental melodies in childhood., ((PsycINFO Database Record (c) 2015 APA, all rights reserved).)
- Published
- 2015
- Full Text
- View/download PDF
22. Musical affect regulation in infancy.
- Author
-
Trehub SE, Ghazban N, and Corbeil M
- Subjects
- Adult, Attention, Auditory Perception, Cultural Characteristics, Humans, Infant, Infant Behavior, Mothers, Play and Playthings, Speech, Stress, Psychological, Affect, Mother-Child Relations, Music
- Abstract
Adolescents and adults commonly use music for various forms of affect regulation, including relaxation, revitalization, distraction, and elicitation of pleasant memories. Mothers throughout the world also sing to their infants, with affect regulation as the principal goal. To date, the study of maternal singing has focused largely on its acoustic features and its consequences for infant attention. We describe recent laboratory research that explores the consequences of singing for infant affect regulation. Such work reveals that listening to recordings of play songs can maintain 6- to 9-month-old infants in a relatively contented or neutral state considerably longer than recordings of infant-directed or adult-directed speech. When 10-month-old infants fuss or cry and are highly aroused, mothers' multimodal singing is more effective than maternal speech at inducing recovery from such distress. Moreover, play songs are more effective than lullabies at reducing arousal in Western infants. We explore the implications of these findings along with possible practical applications., (© 2014 New York Academy of Sciences.)
- Published
- 2015
- Full Text
- View/download PDF
23. Pianists exhibit enhanced memory for vocal melodies but not piano melodies.
- Author
-
Weiss MW, Vanzella P, Schellenberg EG, and Trehub SE
- Subjects
- Acoustic Stimulation, Adult, Analysis of Variance, Female, Humans, Male, Psychoacoustics, Recognition, Psychology physiology, Young Adult, Auditory Perception physiology, Memory physiology, Music psychology, Paint, Voice
- Abstract
Nonmusicians remember vocal melodies (i.e., sung to la la) better than instrumental melodies. If greater exposure to the voice contributes to those effects, then long-term experience with instrumental timbres should elicit instrument-specific advantages. Here we evaluate this hypothesis by comparing pianists with other musicians and nonmusicians. We also evaluate the possibility that absolute pitch (AP), which involves exceptional memory for isolated pitches, influences melodic memory. Participants heard 24 melodies played in four timbres (voice, piano, banjo, marimba) and were subsequently required to distinguish the melodies heard previously from 24 novel melodies presented in the same timbres. Musicians performed better than nonmusicians, but both groups showed a comparable memory advantage for vocal melodies. Moreover, pianists performed no better on melodies played on piano than on other instruments, and AP musicians performed no differently than non-AP musicians. The findings confirm the robust nature of the voice advantage and rule out explanations based on familiarity, practice, and motor representations.
- Published
- 2015
- Full Text
- View/download PDF
24. Children's identification of familiar songs from pitch and timing cues.
- Author
-
Volkova A, Trehub SE, Schellenberg EG, Papsin BC, and Gordon KA
- Abstract
The goal of the present study was to ascertain whether children with normal hearing and prelingually deaf children with cochlear implants could use pitch or timing cues alone or in combination to identify familiar songs. Children 4-7 years of age were required to identify the theme songs of familiar TV shows in a simple task with excerpts that preserved (1) the relative pitch and timing cues of the melody but not the original instrumentation, (2) the timing cues only (rhythm, meter, and tempo), and (3) the relative pitch cues only (pitch contour and intervals). Children with normal hearing performed at high levels and comparably across the three conditions. The performance of child implant users was well above chance levels when both pitch and timing cues were available, marginally above chance with timing cues only, and at chance with pitch cues only. This is the first demonstration that children can identify familiar songs from monotonic versions (timing cues but no pitch cues) and from isochronous versions (pitch cues but no timing cues). The study also indicates that, in the context of a very simple task, young implant users readily identify songs from melodic versions that preserve pitch and timing cues.
- Published
- 2014
- Full Text
- View/download PDF
25. Revisiting the innate preference for consonance.
- Author
-
Plantinga J and Trehub SE
- Subjects
- Female, Humans, Infant, Infant Behavior psychology, Male, Auditory Perception physiology, Choice Behavior physiology, Music psychology, Recognition, Psychology physiology
- Abstract
The origin of the Western preference for consonance remains unresolved, with some suggesting that the preference is innate. In Experiments 1 and 2 of the present study, 6-month-old infants heard six different consonant/dissonant pairs of stimuli, including those tested in previous research. In contrast to the findings of others, infants in the present study failed to listen longer to consonant stimuli. After 3 minutes of exposure to consonant or dissonant stimuli in Experiment 3, 6-month-old infants listened longer to the familiar stimulus, whether consonant or dissonant. Our findings are inconsistent with innate preferences for consonant stimuli. Instead, the effect of short-term exposure is consistent with the view that familiarity underlies the origin of the Western preference for consonant intervals., (PsycINFO Database Record (c) 2014 APA, all rights reserved.)
- Published
- 2014
- Full Text
- View/download PDF
26. Children's recognition of spectrally degraded cartoon voices.
- Author
-
van Heugten M, Volkova A, Trehub SE, and Schellenberg EG
- Subjects
- Acoustic Stimulation methods, Case-Control Studies, Child, Child, Preschool, Cochlear Implantation, Cochlear Implants, Deafness physiopathology, Humans, Sound Spectrography, Deafness rehabilitation, Pattern Recognition, Physiological physiology, Speech Perception physiology, Voice
- Abstract
Objectives: Although the spectrally degraded input provided by cochlear implants (CIs) is sufficient for speech perception in quiet, it poses problems for talker identification. The present study examined the ability of normally hearing (NH) children and child CI users to recognize cartoon voices while listening to spectrally degraded speech. Design: In Experiment 1, 5- to 6-year-old NH children were required to identify familiar cartoon characters in a three-alternative, forced-choice task without feedback. Children heard sentence-length utterances at six levels of spectral degradation (noise-vocoded utterances with 4, 8, 12, 16, and 24 frequency bands and the original or unprocessed stimuli). In Experiment 2, child CI users 4 to 7 years of age and a control sample of 4- to 5-year-old NH children were required to identify the unprocessed stimuli from Experiment 1. Results: NH children in Experiment 1 identified the voices significantly above chance levels, and they performed more accurately with increasing spectral information. Practice with stimuli that had greater spectral information facilitated performance on subsequent stimuli with lesser spectral information. In Experiment 2, child CI users successfully recognized the cartoon voices with slightly lower accuracy (0.90 proportion correct) than NH peers who listened to unprocessed utterances (0.97 proportion correct). Conclusions: The findings indicate that both NH children and child CI users can identify cartoon voices under conditions of severe spectral degradation. In such circumstances, children may rely on talker-specific phonetic detail to distinguish one talker from another. (A minimal noise-vocoding sketch follows this entry.)
- Published
- 2014
- Full Text
- View/download PDF
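Noise vocoding, the spectral-degradation method named in the abstract above, can be sketched as follows (an editorial illustration under stated assumptions: NumPy/SciPy available, a mono float waveform sampled at 16 kHz or higher, fourth-order Butterworth filters, logarithmic band spacing, and a 30 Hz envelope cutoff; this is not the stimulus-generation code from the study). Each analysis band's slowly varying amplitude envelope is reimposed on band-limited noise, discarding the fine spectral detail.

```python
# Illustrative only: a minimal noise vocoder of the kind used to simulate
# cochlear-implant processing with a chosen number of frequency bands.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x: np.ndarray, sr: int, n_bands: int = 8,
                 f_lo: float = 100.0, f_hi: float = 7000.0,
                 env_cutoff: float = 30.0) -> np.ndarray:
    """Replace each band's fine structure with envelope-modulated noise."""
    # Band edges spaced logarithmically (an assumption; implant simulations
    # often use cochlear-based spacing instead).
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    env_sos = butter(4, env_cutoff, btype="lowpass", fs=sr, output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfiltfilt(band_sos, x)
        envelope = sosfiltfilt(env_sos, np.abs(band))      # rectify + low-pass
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
        out += np.clip(envelope, 0.0, None) * carrier      # envelope on noise
    return out / (np.max(np.abs(out)) + 1e-12)             # avoid clipping
```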
27. Cross-modal signatures in maternal speech and singing.
- Author
-
Trehub SE, Plantinga J, Brcic J, and Nowicki M
- Abstract
We explored the possibility of a unique cross-modal signature in maternal speech and singing that enables adults and infants to link unfamiliar speaking or singing voices with subsequently viewed silent videos of the talkers or singers. In Experiment 1, adults listened to 30-s excerpts of speech followed by successively presented 7-s silent video clips, one from the previously heard speaker (different speech content) and the other from a different speaker. They successfully identified the previously heard speaker. In Experiment 2, adults heard comparable excerpts of singing followed by silent video clips from the previously heard singer (different song) and another singer. They failed to identify the previously heard singer. In Experiment 3, the videos of talkers and singers were blurred to obscure mouth movements. Adults successfully identified the talkers and they also identified the singers from videos of different portions of the song previously heard. In Experiment 4, 6- to 8-month-old infants listened to 30-s excerpts of the same maternal speech or singing followed by exposure to the silent videos on alternating trials. They looked longer at the silent videos of previously heard talkers and singers. The findings confirm the individuality of maternal speech and singing performance as well as adults' and infants' ability to discern the unique cross-modal signatures. The cues that enable cross-modal matching of talker and singer identity remain to be determined.
- Published
- 2013
- Full Text
- View/download PDF
28. Music processing similarities between sleeping newborns and alert adults: cause for celebration or concern?
- Author
-
Trehub SE
- Abstract
[This corrects the article on p. 492 in vol. 4, PMID: 23966962.].
- Published
- 2013
- Full Text
- View/download PDF
29. Do older professional musicians have cognitive advantages?
- Author
-
Amer T, Kalender B, Hasher L, Trehub SE, and Wong Y
- Subjects
- Acoustic Stimulation, Aged, Humans, Middle Aged, Music, Occupations, Photic Stimulation, Reaction Time, Reactive Inhibition, Reading, Transfer, Psychology, Cognition, Memory, Short-Term
- Abstract
The current study investigates whether long-term music training and practice are associated with enhancement of general cognitive abilities in late middle-aged to older adults. Professional musicians and non-musicians who were matched on age, education, vocabulary, and general health were compared on a near-transfer task involving auditory processing and on far-transfer tasks that measured spatial span and aspects of cognitive control. Musicians outperformed non-musicians on the near-transfer task, on most but not all of the far-transfer tasks, and on a composite measure of cognitive control. The results suggest that sustained music training or involvement is associated with improved aspects of cognitive functioning in older adults.
- Published
- 2013
- Full Text
- View/download PDF
30. A novel tool for evaluating children's musical abilities across age and culture.
- Author
-
Peretz I, Gosselin N, Nan Y, Caron-Caplette E, Trehub SE, and Béland R
- Abstract
The present study introduces a novel tool for assessing musical abilities in children: the Montreal Battery of Evaluation of Musical Abilities (MBEMA). The battery, which comprises tests of memory, scale, contour, interval, and rhythm, was administered to 245 children in Montreal and 91 in Beijing (Experiment 1), and an abbreviated version was administered to an additional 85 children in Montreal (in less than 20 min; Experiment 2). All children were 6-8 years of age. Their performance indicated that both versions of the MBEMA are sensitive to individual differences and to musical training. The sensitivity of the tests extends to Mandarin-speaking children despite the fact that they show enhanced performance relative to French-speaking children. Because this Chinese advantage is not limited to musical pitch but extends to rhythm and memory, it is unlikely that it results from early exposure to a tonal language. In both cultures and versions of the tests, amount of musical practice predicts performance. Thus, the MBEMA can serve as an objective, short and up-to-date test of musical abilities in a variety of situations, from the identification of children with musical difficulties to the assessment of the effects of musical training in typically developing children of different cultures.
- Published
- 2013
- Full Text
- View/download PDF
31. Speech vs. singing: infants choose happier sounds.
- Author
-
Corbeil M, Trehub SE, and Peretz I
- Abstract
Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants' attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech vs. hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children's song spoken vs. sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children's song vs. a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.
- Published
- 2013
- Full Text
- View/download PDF
32. Child implant users' imitation of happy- and sad-sounding speech.
- Author
-
Wang DJ, Trehub SE, Volkova A, and van Lieshout P
- Abstract
Cochlear implants have enabled many congenitally or prelingually deaf children to acquire their native language and communicate successfully on the basis of electrical rather than acoustic input. Nevertheless, degraded spectral input provided by the device reduces the ability to perceive emotion in speech. We compared the vocal imitations of 5- to 7-year-old deaf children who were highly successful bilateral implant users with those of a control sample of children who had normal hearing. First, the children imitated several happy and sad sentences produced by a child model. When adults in Experiment 1 rated the similarity of imitated to model utterances, ratings were significantly higher for the hearing children. Both hearing and deaf children produced poorer imitations of happy than sad utterances because of difficulty matching the greater pitch modulation of the happy versions. When adults in Experiment 2 rated electronically filtered versions of the utterances, which obscured the verbal content, ratings of happy and sad utterances were significantly differentiated for deaf as well as hearing children. The ratings of deaf children, however, were significantly less differentiated. Although deaf children's utterances exhibited culturally typical pitch modulation, their pitch modulation was reduced relative to that of hearing children. One practical implication is that therapeutic interventions for deaf children could expand their focus on suprasegmental aspects of speech perception and production, especially intonation patterns.
- Published
- 2013
- Full Text
- View/download PDF
33. Children with bilateral cochlear implants identify emotion in speech and music.
- Author
-
Volkova A, Trehub SE, Schellenberg EG, Papsin BC, and Gordon KA
- Subjects
- Auditory Threshold, Child, Child, Preschool, Female, Humans, Male, Reference Values, Sound Spectrography, Speech Discrimination Tests, Auditory Perception, Cochlear Implants, Deafness rehabilitation, Emotions, Music, Speech Acoustics, Speech Perception
- Abstract
Objectives: This study examined the ability of prelingually deaf children with bilateral implants to identify emotion (i.e. happiness or sadness) in speech and music. Methods: Participants in Experiment 1 were 14 prelingually deaf children from 5-7 years of age who had bilateral implants and 18 normally hearing children from 4-6 years of age. They judged whether linguistically neutral utterances produced by a man and woman sounded happy or sad. Participants in Experiment 2 were 14 bilateral implant users from 4-6 years of age and the same normally hearing children as in Experiment 1. They judged whether synthesized piano excerpts sounded happy or sad. Results: Child implant users' accuracy of identifying happiness and sadness in speech was well above chance levels but significantly below the accuracy achieved by children with normal hearing. Similarly, their accuracy of identifying happiness and sadness in music was well above chance levels but significantly below that of children with normal hearing, who performed at ceiling. For the 12 implant users who participated in both experiments, performance on the speech task correlated significantly with performance on the music task and implant experience was correlated with performance on both tasks. Discussion: Child implant users' accurate identification of emotion in speech exceeded performance in previous studies, which may be attributable to fewer response alternatives and the use of child-directed speech. Moreover, child implant users' successful identification of emotion in music indicates that the relevant cues are accessible at a relatively young age.
- Published
- 2013
- Full Text
- View/download PDF
34. Cross-cultural differences in meter perception.
- Author
-
Kalender B, Trehub SE, and Schellenberg EG
- Subjects
- Adult, Female, Humans, Male, Young Adult, Auditory Perception physiology, Cross-Cultural Comparison, Music psychology, Recognition, Psychology physiology
- Abstract
We examined the influence of incidental exposure to varied metrical patterns from different musical cultures on the perception of complex metrical structures from an unfamiliar musical culture. Adults who were familiar with Western music only (i.e., simple meters) and those who also had limited familiarity with non-Western music were tested on their perception of metrical organization in unfamiliar (Turkish) music with simple and complex meters. Adults who were familiar with Western music detected meter-violating changes in Turkish music with simple meter but not in Turkish music with complex meter. Adults with some exposure to non-Western music that was unmetered or metrically complex detected meter-violating changes in Turkish music with both simple and complex meters, but they performed better on patterns with a simple meter. The implication is that familiarity with varied metrical structures, including those with a non-isochronous tactus, enhances sensitivity to the metrical organization of unfamiliar music.
- Published
- 2013
- Full Text
- View/download PDF
35. Something in the way she sings: enhanced memory for vocal melodies.
- Author
-
Weiss MW, Trehub SE, and Schellenberg EG
- Subjects
- Adult, Female, Humans, Male, Recognition, Psychology physiology, Students psychology, Young Adult, Auditory Perception physiology, Memory physiology, Music psychology, Singing physiology
- Abstract
Across species, there is considerable evidence of preferential processing for biologically significant signals such as conspecific vocalizations and the calls of individual conspecifics. Surprisingly, music cognition in human listeners is typically studied with stimuli that are relatively low in biological significance, such as instrumental sounds. The present study explored the possibility that melodies might be remembered better when presented vocally rather than instrumentally. Adults listened to unfamiliar folk melodies, with some presented in familiar timbres (voice and piano) and others in less familiar timbres (banjo and marimba). They were subsequently tested on recognition of previously heard melodies intermixed with novel melodies. Melodies presented vocally were remembered better than those presented instrumentally even though they were liked less. Factors underlying the advantage for vocal melodies remain to be determined. In line with its biological significance, vocal music may evoke increased vigilance or arousal, which in turn may result in greater depth of processing and enhanced memory for musical details.
- Published
- 2012
- Full Text
- View/download PDF
36. Behavioral methods in infancy: pitfalls of single measures.
- Author
-
Trehub SE
- Subjects
- Attention, Conditioning, Psychological, Humans, Infant, Infant Behavior, Music
- Abstract
This paper outlines the principal behavioral methods used to study music processing in infancy. The advantages of conditioning procedures are offset by high attrition rates and restrictions on the stimuli that can be used. The head-turn preference procedure is more user-friendly but poses greater interpretive challenges. In view of the multidimensional nature of infant attention, no single response measure, whether behavioral, physiological, or neural, can provide unambiguous information about music processing in infancy. Greater use of ecologically valid stimuli is likely to generate increased cooperation from infants and greater generality of the findings., (© 2012 New York Academy of Sciences.)
- Published
- 2012
- Full Text
- View/download PDF
37. Effect of cochlear implants on children's perception and production of speech prosody.
- Author
-
Nakata T, Trehub SE, and Kanda Y
- Subjects
- Adolescent, Analysis of Variance, Child, Child, Preschool, Cochlear Implants, Cues, Deafness congenital, Deafness psychology, Female, Humans, Male, Speech Acoustics, Deafness physiopathology, Emotions physiology, Phonetics, Pitch Discrimination physiology, Speech Perception physiology, Voice Quality physiology
- Abstract
Japanese 5- to 13-yr-olds who used cochlear implants (CIs) and a comparison group of normally hearing (NH) Japanese children were tested on their perception and production of speech prosody. For the perception task, they were required to judge whether semantically neutral utterances that were normalized for amplitude were spoken in a happy, sad, or angry manner. The performance of NH children was error-free. By contrast, child CI users performed well below ceiling but above chance levels on happy- and sad-sounding utterances but not on angry-sounding utterances. For the production task, children were required to imitate stereotyped Japanese utterances expressing disappointment and surprise as well as culturally typical representations of crow and cat sounds. NH 5- and 6-year-olds produced significantly poorer imitations than older hearing children, but age was unrelated to the imitation quality of child CI users. Overall, child CI users' imitations were significantly poorer than those of NH children, but they did not differ significantly from the imitations of the youngest NH group. Moreover, there was a robust correlation between the performance of child CI users on the perception and production tasks; this implies that difficulties with prosodic perception underlie their difficulties with prosodic imitation., (© 2012 Acoustical Society of America)
- Published
- 2012
- Full Text
- View/download PDF
38. Age-related changes in talker recognition with reduced spectral cues.
- Author
-
Vongpaisal T, Trehub SE, Glenn Schellenberg E, and van Lieshout P
- Subjects
- Analysis of Variance, Child, Child, Preschool, Female, Humans, Male, Noise, Recognition, Psychology, Young Adult, Aging physiology, Cues, Speech Acoustics, Speech Perception physiology
- Abstract
Temporal information provided by cochlear implants enables successful speech perception in quiet, but limited spectral information precludes comparable success in voice perception. Talker identification and speech decoding by young hearing children (5-7 yr), older hearing children (10-12 yr), and hearing adults were examined by means of vocoder simulations of cochlear implant processing. In Experiment 1, listeners heard vocoder simulations of sentences from a man, woman, and girl and were required to identify the talker from a closed set. Younger children identified talkers more poorly than older listeners, but all age groups showed similar benefit from increased spectral information. In Experiment 2, children and adults provided verbatim repetition of vocoded sentences from the same talkers. The youngest children had more difficulty than older listeners, but all age groups showed comparable benefit from increasing spectral resolution. At comparable levels of spectral degradation, performance on the open-set task of speech decoding was considerably more accurate than on the closed-set task of talker identification. Hearing children's ability to identify talkers and decode speech from spectrally degraded material sheds light on the difficulty of these domains for child implant users., (© 2012 Acoustical Society of America.)
- Published
- 2012
- Full Text
- View/download PDF
39. A comparison of the McGurk effect for spoken and sung syllables.
- Author
-
Quinto L, Thompson WF, Russo FA, and Trehub SE
- Subjects
- Adolescent, Attention, Female, Humans, Judgment, Male, Phonation, Pitch Perception, Young Adult, Face, Lipreading, Music, Optical Illusions, Pattern Recognition, Visual, Phonetics, Semantics, Speech Perception
- Abstract
The importance of visual cues in speech perception is illustrated by the McGurk effect, whereby a speaker's facial movements affect speech perception. The goal of the present study was to evaluate whether the McGurk effect is also observed for sung syllables. Participants heard and saw sung instances of the syllables /ba/ and /ga/ and then judged the syllable they perceived. Audio-visual stimuli were congruent or incongruent (e.g., auditory /ba/ presented with visual /ga/). The stimuli were presented as spoken, sung in an ascending and descending triad (C E G G E C), and sung in an ascending and descending triad that returned to a semitone above the tonic (C E G G E C#). Results revealed no differences in the proportion of fusion responses between spoken and sung conditions confirming that cross-modal phonemic information is integrated similarly in speech and song.
- Published
- 2010
- Full Text
- View/download PDF
40. Children with cochlear implants recognize their mother's voice.
- Author
-
Vongpaisal T, Trehub SE, Schellenberg EG, van Lieshout P, and Papsin BC
- Subjects
- Adolescent, Adult, Child, Child, Preschool, Cues, Female, Humans, Male, Speech Perception, Time Factors, Cochlear Implants, Deafness psychology, Deafness rehabilitation, Mothers, Recognition, Psychology, Voice
- Abstract
Objectives: The available research indicates that cochlear implant (CI) users have difficulty in differentiating talkers, especially those of the same gender. The goal of this study was to determine whether child CI users could differentiate talkers under favorable stimulus and task conditions. We predicted that the use of a highly familiar voice, full sentences, and a game-like task with feedback would lead to higher performance levels than those achieved in previous studies of talker identification in CI users. Design: In experiment 1, 21 CI users aged 4.8 to 14.3 yrs and 16 normal-hearing (NH) 5-yr-old children were required to differentiate their mother's scripted utterances from those of an unfamiliar man, woman, and girl in a four-alternative forced-choice task with feedback. In one condition, the utterances incorporated natural prosodic variations. In another condition, nonmaternal talkers imitated the prosody of each maternal utterance. In experiment 2, 19 of the child CI users and 11 of the NH children from experiment 1 returned on a subsequent occasion to participate in a task that required them to differentiate their mother's utterances from those of unfamiliar women in a two-alternative forced-choice task with feedback. Again, one condition had natural prosodic variations and another had maternal imitations. Results: Child CI users in experiment 1 succeeded in differentiating their mother's utterances from those of a man, woman, and girl. Their performance was poorer than the performance of younger NH children, which was at ceiling. Child CI users' performance was better in the context of natural prosodic variations than in the context of imitations of maternal prosody. Child CI users in experiment 2 differentiated their mother's utterances from those of other women, and they also performed better on naturally varying samples than on imitations. Conclusions: We attribute child CI users' success on talker differentiation, even on same-gender differentiation, to their use of two types of temporal cues: variations in consonant and vowel articulation and variations in speaking rate. Moreover, we contend that child CI users' differentiation of speakers was facilitated by long-term familiarity with their mother's voice.
- Published
- 2010
- Full Text
- View/download PDF
41. Infants detect cross-modal cues to identity in speech and singing.
- Author
-
Trehub SE, Plantinga J, and Brcic J
- Subjects
- Humans, Infant, Music, Speech, Speech Perception physiology, Visual Perception physiology
- Abstract
Little is known about infants' perception of cross-modal cues to identity, but the importance of recognizing familiar individuals makes it likely that this skill would be evident early in life. Infants 6-8 months of age were tested on their ability to link dynamic cross-modal cues to the identity of unfamiliar speakers and singers. After exposure to speech or singing, infants watched two silent videos, one featuring the previously heard speaker or singer. Infants looked significantly longer at the video of the person heard previously, which indicates that they can match auditory and visual cues to the identity of unfamiliar persons.
- Published
- 2009
- Full Text
- View/download PDF
42. Music in the lives of deaf children with cochlear implants.
- Author
-
Trehub SE, Vongpaisal T, and Nakata T
- Subjects
- Adolescent, Adult, Child, Humans, Auditory Perception physiology, Cochlear Implants, Deafness, Music
- Abstract
Present-day cochlear implants provide good temporal cues and coarse spectral cues. In general, these cues are adequate for perceiving speech in quiet backgrounds and for young children's acquisition of spoken language. They are inadequate, however, for conveying the rich pitch-patterning of music. As a result, many adults who become implant users after losing their hearing find music disappointing or unacceptable. By contrast, child implant users who were born deaf or became deaf as infants or toddlers typically find music interesting and enjoyable. They recognize popular songs that they hear regularly when the test materials match critical features of the original versions. For example, they can identify familiar songs from the original recordings with words and from versions that omit the words but preserve all other cues. They also recognize theme songs from their favorite television programs when presented in original or somewhat altered form. The motivation of children with implants for listening to music or melodious speech is evident well before they understand language. Within months after receiving their implant, they prefer singing to silence. They also prefer speech in the maternal style to typical adult speech and the sounds of their native language-to-be to those of a foreign language. An important task of future research is to ascertain the relative contributions of perceptual and motivational factors to the apparent differences between child and adult implant users.
- Published
- 2009
- Full Text
- View/download PDF
43. Conventional rhythms enhance infants' and adults' perception of musical patterns.
- Author
-
Trehub SE and Hannon EE
- Subjects
- Acoustic Stimulation, Adolescent, Adult, Discrimination, Psychological physiology, Female, Humans, Infant, Male, Psychomotor Performance physiology, Young Adult, Aging physiology, Aging psychology, Auditory Perception physiology, Music psychology
- Abstract
Listeners may favour particular rhythms because of their degree of conformity to culture-specific expectations or because of perceptual constraints that are apparent early in development. In two experiments we examined adults' and 6-month-old infants' detection of subtle rhythmic and melodic changes to two sequences of tones, a conventional rhythm that musically untrained adults rated as rhythmically good and an unconventional rhythm that was rated as poor. Detection of the changes was above chance in all conditions, but adults and infants performed more accurately in the context of the conventional rhythm. Unlike adults, who benefited from rhythmic conventionality only when detecting rhythmic changes, infants benefited when detecting melodic as well as rhythmic changes. The findings point to infant and adult parallels for some aspects of rhythm processing and to integrated perception of rhythm and melody early in life.
- Published
- 2009
- Full Text
- View/download PDF
44. Developmental changes in the perception of pitch contour: distinguishing up from down.
- Author
-
Stalinski SM, Schellenberg EG, and Trehub SE
- Subjects
- Acoustic Stimulation, Adult, Age Factors, Audiometry, Child, Child, Preschool, Female, Humans, Male, Aging physiology, Auditory Pathways growth & development, Child Development, Music, Pitch Discrimination, Pitch Perception
- Abstract
Musically untrained participants in five age groups (5-, 6-, 8-, and 11-year-olds, and adults) heard sequences of three 1 s piano tones in which the first and third tones were identical (A5, or 880 Hz) but the middle tone was displaced upward or downward in pitch. Their task was to identify whether the middle tone was higher or lower than the other two tones. In experiment 1, 5-year-olds successfully identified upward and downward shifts of 4, 2, 1, 0.5, and 0.3 semitones. In experiment 2, older children (6-, 8-, and 11-year-olds) and adults successfully identified the same shifts as well as a smaller shift (0.1 semitone). For all age groups, performance accuracy decreased as the size of the shift decreased. Performance improved from 5 to 8 years of age, reaching adult levels at 8 years. (A worked conversion of these semitone shifts into frequencies appears after this record.)
- Published
- 2008
- Full Text
- View/download PDF
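The semitone shifts in the preceding abstract can be translated into frequencies with the standard equal-tempered relation f = f_ref * 2^(n/12), where f_ref is the 880 Hz reference tone (A5) reported there. The short Python sketch below is illustrative only; it is not part of the study, and the function and variable names are my own. It shows, for instance, that the smallest shift tested (0.1 semitones, i.e., 10 cents) moves the tone by roughly 5 Hz in either direction.

    # Illustrative sketch (not from the study): convert the semitone shifts
    # used in the task into frequencies, assuming equal temperament
    # (f = f_ref * 2 ** (semitones / 12)).
    A5_HZ = 880.0  # reference tone reported in the abstract

    def shifted_frequency(reference_hz: float, semitones: float) -> float:
        """Return the frequency reached by shifting reference_hz by `semitones`."""
        return reference_hz * 2.0 ** (semitones / 12.0)

    if __name__ == "__main__":
        for shift in (4, 2, 1, 0.5, 0.3, 0.1):
            up = shifted_frequency(A5_HZ, shift)
            down = shifted_frequency(A5_HZ, -shift)
            print(f"{shift:>4} semitones: up = {up:7.2f} Hz, down = {down:7.2f} Hz")

Running the script confirms that even the 0.3-semitone shift detected by 5-year-olds amounts to only about 15 Hz at this register.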
45. Cross-cultural perspectives on pitch memory.
- Author
-
Trehub SE, Glenn Schellenberg E, and Nakata T
- Subjects
- Age Factors, Child, Child, Preschool, Female, Humans, Japan, Male, Practice, Psychological, Speech Perception, Cross-Cultural Comparison, Mental Recall, Music, Pitch Perception
- Abstract
We examined effects of age and culture on children's memory for the pitch level of familiar music. Canadian 9- and 10-year-olds distinguished the original pitch level of familiar television theme songs from foils that were pitch-shifted by one semitone, whereas 5- to 8-year-olds failed to do so (Experiment 1). In contrast, Japanese 5- and 6-year-olds distinguished the pitch-shifted foils from the originals, performing significantly better than same-age Canadian children (Experiment 2). Moreover, Japanese 6-year-olds were more accurate than their 5-year-old counterparts. These findings challenge the prevailing view of enhanced pitch memory during early life. We consider factors that may account for Japanese children's superior performance, such as their use of a pitch-accent language (Japanese) rather than a stress-accent language (English) and their experience with musical pitch labels.
- Published
- 2008
- Full Text
- View/download PDF
46. Signature tunes in mothers' speech to infants.
- Author
-
Bergeson TR and Trehub SE
- Subjects
- Adult, Female, Humans, Infant, Mother-Child Relations, Pitch Perception, Speech, Verbal Behavior
- Abstract
The purpose of the present investigation was to determine whether mothers use discernible tunes (i.e., specific interval sequences) in their speech to infants and whether such tunes are individually distinctive. Mothers were recorded speaking with their infants on two occasions separated by 1 week or more. Examination of each mother's recordings revealed discernible tunes that were repeated frequently within and across sessions. Comparisons of utterances with the most common pitch contour (i.e., rising), both within and across mothers, revealed interval patterns that were individually distinctive, or unique. The findings confirm the prominence of tunes and the presence of signature tunes in maternal speech to infants.
- Published
- 2007
- Full Text
- View/download PDF
47. Acquisition of early words from single-word and sentential contexts.
- Author
-
Trehub SE and Shenfield T
- Subjects
- Acoustic Stimulation, Age Factors, Analysis of Variance, Female, Humans, Infant, Male, Photic Stimulation, Sex Factors, Time Factors, Child Language, Verbal Learning physiology, Vocabulary
- Abstract
Toddlers 15 and 18 months of age were exposed to audiovisual recordings of two novel words paired with novel toys. The words were presented in familiar sentence frames or in isolation. Linguistic context had a greater effect on younger than on older infants. Specifically, 15-month-old boys exhibited successful learning only in the context of single words, and 15-month-old girls did so only for words presented in sentences. Older infants acquired the new words from both contexts, and they learned more rapidly than younger infants. Receptive and expressive vocabulary made no independent contribution to performance.
- Published
- 2007
- Full Text
- View/download PDF
48. Infants' memory for musical performances.
- Author
-
Volkova A, Trehub SE, and Schellenberg EG
- Subjects
- Humans, Infant, Retention, Psychology, Child Development, Memory, Mental Recall, Music, Pitch Perception, Psychology, Child
- Abstract
We evaluated 6- and 7-month-olds' preference and memory for expressive recordings of sung lullabies. In Experiment 1, both age groups preferred lower-pitched to higher-pitched renditions of unfamiliar lullabies. In Experiment 2, infants were tested after 2 weeks of daily exposure to a lullaby at one pitch level. Seven-month-olds listened significantly longer to the lullaby at a novel pitch level than at the original pitch level. Six-month-olds showed no such preference, but their earlier preference for lower-pitched renditions was eliminated. We conclude that infants' memory for musical performances is enhanced by the ecological validity of the materials. Moreover, infants' pitch preferences are influenced by their previous exposure and by the nature of the music.
- Published
- 2006
- Full Text
- View/download PDF
49. Song recognition by children and adolescents with cochlear implants.
- Author
-
Vongpaisal T, Trehub SE, and Schellenberg EG
- Subjects
- Adolescent, Adult, Auditory Perception physiology, Case-Control Studies, Child, Child, Preschool, Female, Hearing Loss therapy, Humans, Male, Task Performance and Analysis, Cochlear Implants, Hearing Loss physiopathology, Music, Pitch Perception, Recognition, Psychology physiology
- Abstract
Purpose: To assess song recognition and pitch perception in prelingually deaf individuals with cochlear implants (CIs)., Method: Fifteen hearing children (5-8 years) and 15 adults heard different versions of familiar popular songs (the original version with vocals and instruments, the original instrumental version, and a synthesized melody version) and identified the song in a closed-set task (Experiment 1). Ten CI users (8-18 years) and age-matched hearing listeners performed the same task (Experiment 2). Ten CI users (8-19 years) and 10 hearing 8-year-olds were required to detect pitch changes in repeating-tone contexts (Experiment 3). Finally, 8 CI users (6-19 years) and 13 hearing 5-year-olds were required to detect subtle pitch changes in a more challenging melodic context (Experiment 4)., Results: CI users performed more poorly than hearing listeners in all conditions. They succeeded in identifying the original and instrumental versions of familiar recorded songs, and they evaluated them favorably, but they could not identify the melody versions. Although CI users could detect a 0.5-semitone change in the simple context, they failed to detect a 1-semitone change in the more difficult melodic context., Conclusion: Current implant processors provide insufficient spectral detail for some aspects of music perception, but they do not preclude young implant users' enjoyment of music.
- Published
- 2006
- Full Text
- View/download PDF
50. Infant music perception: domain-general or domain-specific mechanisms?
- Author
-
Trehub SE and Hannon EE
- Subjects
- Humans, Infant, Pitch Perception, Time Perception, Auditory Perception physiology, Child Development, Music psychology, Psychology, Child
- Abstract
We review the literature on infants' perception of pitch and temporal patterns, relating it to comparable research with human adult and non-human listeners. Although there are parallels in relative pitch processing across age and species, there are notable differences. Infants accomplish such tasks with ease, but non-human listeners require extensive training to achieve very modest levels of performance. In general, human listeners process auditory sequences in a holistic manner, and non-human listeners focus on absolute aspects of individual tones. Temporal grouping processes and categorization on the basis of rhythm are evident in non-human listeners and in human infants and adults. Although synchronization to sound patterns is thought to be uniquely human, tapping to music, synchronous firefly flashing, and other cyclic behaviors can be described by similar mathematical principles. We conclude that infants' music perception skills are a product of general perceptual mechanisms that are neither music- nor species-specific. Along with general-purpose mechanisms for the perceptual foundations of music, we suggest unique motivational mechanisms that can account for the perpetuation of musical behavior in all human societies.
- Published
- 2006
- Full Text
- View/download PDF