8 results for "Ramirez, Naja Ferjan"
Search Results
2. How Chatty Are Daddies? An Exploratory Study of Infants' Language Environments
- Author: Shapiro, Naomi Tachikawa; Hippe, Daniel S.; and Ramirez, Naja Ferjan
- Subjects: Father and child -- Psychological aspects; Health
- Abstract
Purpose: Fathers play a critical but underresearched role in their children's cognitive and linguistic development. Focusing on two-parent families with a mother and a father, the present longitudinal study explores the amount of paternal input infants hear during the first 2 years of life, how this input changes over time, and how it relates to child volubility. We devote special attention to parentese, a near-universal style of infant-directed speech distinguished by its higher pitch, slower tempo, and exaggerated intonation. Method: We examined the daylong recordings of the same 23 infants, from English-speaking families, at ages 6, 10, 14, 18, and 24 months. The infants were recorded in the presence of their parents (mother-father dyads), who were predominantly White and ranged from mid to high socioeconomic status (SES). We analyzed the effects of parent gender and child age on adult word counts and parentese, as well as the effects of maternal and paternal word counts and parentese on child vocalizations. Results: On average, the infants were exposed to 46.8% fewer words and 51.9% less parentese from fathers than from mothers, even though paternal parentese grew at a 2.8-times-faster rate as the infants aged. An asymmetry emerged: maternal word counts and paternal parentese predicted child vocalizations, but paternal word counts and maternal parentese did not. Conclusions: While infants may hear less input from their fathers than from their mothers in predominantly White, mid-to-high-SES, English-speaking households, paternal parentese still plays a unique role in their linguistic development. Future research on sources of variability in child language outcomes should thus control for parental differences, since parents' language can differ substantially and differentially predict child language. Sociocultural frameworks have long emphasized child development as a socially mediated process, in which caregivers scaffold their children's cognitive and linguistic development through social interactions (e.g., Bruner, 1981; Kuhl, 2007, [...]
- Published: 2021
- Full Text: View/download PDF
3. Comparison of speech and music input in North American infants' home environment over the first 2 years of life.
- Author: Hippe, Lindsay; Hennessy, Victoria; Ramirez, Naja Ferjan; and Zhao, T. Christina
- Subjects: LANGUAGE acquisition; AUDITORY pathways; INFANT development; SPEECH; ELECTRONIC equipment
- Abstract
Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brain processes sounds in their environment. Across cultures, speech and music are two dominant auditory signals in infants' daily lives. Decades of research have repeatedly shown that both the quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment. This study is the first to compare music input to speech input across infancy by analyzing a longitudinal dataset of daylong audio recordings collected in English-learning infants' home environments at 6, 10, 14, 18, and 24 months of age. Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short snippets (10 s) randomly sampled from the recordings using Zooniverse, an online citizen-science platform. Results show that infants overall receive significantly more speech input than music input, and the gap widens as the infants get older. At every age point, infants were exposed to more music from an electronic device than from an in-person source; this pattern was reversed for speech. The percentage of input intended for infants remained the same over time for music, while that percentage significantly increased for speech. We propose possible explanations for the limited music input compared to speech input observed in the present (North American) dataset and discuss future directions. We also discuss the opportunities and caveats of using a crowdsourcing approach to analyze large audio datasets. A video abstract of this article can be viewed at https://youtu.be/lFj%5fsEaBMN4
Research Highlights:
- This study is the first to compare music input to speech input in infants' natural home environment across infancy.
- We utilized a crowdsourcing approach to annotate a longitudinal dataset of daylong audio recordings collected in North American home environments.
- Our main results show that infants overall receive significantly more speech input than music input. This gap widens as the infants get older.
- Our results also showed that the music input was largely from electronic devices and not intended for the infants, a pattern opposite to speech input. [ABSTRACT FROM AUTHOR]
- Published: 2024
- Full Text: View/download PDF
4. The Brain Science of Bilingualism
- Author: Ramirez, Naja Ferjan and Kuhl, Patricia
- Published: 2017
5. The Initial Stages of First-Language Acquisition Begun in Adolescence: When Late Looks Early
- Author: Ramirez, Naja Ferjan; Lieberman, Amy M.; and Mayberry, Rachel I.
- Abstract
Children typically acquire their native language naturally and spontaneously at a very young age. The emergence of early grammar can be predicted from children's vocabulary size and composition (Bates et al., 1994; Bates, Bretherton & Snyder, 1998; Bates & Goodman, 1997). One central question in language research is understanding what causes the changes in early language acquisition. Some researchers argue that the qualitative and quantitative shifts in word learning simply reflect the changing character of the child's cognitive maturity (for example, Gentner, 1982), while others argue that the trajectory of early language acquisition is driven by the child's growing familiarity with the language (Gillette, Gleitman, Gleitman & Lederer, 1999; Snedeker & Gleitman, 2004). These hypotheses are difficult to adjudicate because language acquisition in virtually all hearing children begins from birth and occurs simultaneously with cognitive development and brain maturation. The acquisition of sign languages, in contrast, is frequently delayed until older ages. In the USA, over 90% of deaf children are born to hearing parents who do not use sign language (Schein, 1989). As a result, deaf children are often exposed to sign language as a first language at a range of ages well beyond infancy (Mayberry, 2007). In rare cases, some deaf individuals are isolated from all linguistic input until adolescence when they start receiving special services and begin to learn sign language through immersion (Morford, 2003). Case studies of language acquisition in such extreme late first-language (L1) learners provide a unique opportunity to investigate first-language learning. The current study investigates three cases of young teens who are in the early stages of acquiring American Sign Language (ASL) as a first language, to determine what first-language acquisition in adolescence looks like.
- Published: 2013
- Full Text: View/download PDF
6. Neural Language Processing in Adolescent First-Language Learners: Longitudinal Case Studies in American Sign Language.
- Author: Ramirez, Naja Ferjan; Leonard, Matthew K.; Davenport, Tristan S.; Torres, Christina; Halgren, Eric; and Mayberry, Rachel I.
- Published: 2016
- Full Text: View/download PDF
7. Neural stages of spoken, written, and signed word processing in beginning second language learners.
- Author: Leonard, Matthew K.; Ramirez, Naja Ferjan; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I.; and Halgren, Eric
- Subjects: WORD recognition; SECOND language acquisition; ENGLISH language education; SIGN language; MAGNETOENCEPHALOGRAPHY; MAGNETIC resonance imaging of the brain
- Abstract
We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language. [ABSTRACT FROM AUTHOR]
- Published: 2013
- Full Text: View/download PDF
8. Signed Words in the Congenitally Deaf Evoke Typical Late Lexicosemantic Responses with No Early Visual Responses in Left Superior Temporal Cortex.
- Author: Leonard, Matthew K.; Ramirez, Naja Ferjan; Torres, Christina; Travis, Katherine E.; Hatrak, Marla; Mayberry, Rachel I.; and Halgren, Eric
- Subjects: GENETICS of deafness; NATIVE language; NEUROLINGUISTICS; BRAIN anatomy; MAGNETIC resonance imaging of the brain; HEARING; AUDITORY cortex
- Abstract
Congenitally deaf individuals receive little or no auditory input, and when raised by deaf parents, they acquire sign as their native and primary language. We asked two questions regarding how the deaf brain in humans adapts to sensory deprivation: (1) is meaning extracted and integrated from signs using the same classical left hemisphere frontotemporal network used for speech in hearing individuals, and (2) in deafness, is superior temporal cortex encompassing primary and secondary auditory regions reorganized to receive and process visual sensory information at short latencies? Using MEG constrained by individual cortical anatomy obtained with MRI, we examined an early time window associated with sensory processing and a late time window associated with lexicosemantic integration. We found that sign in deaf individuals and speech in hearing individuals activate a highly similar left frontotemporal network (including superior temporal regions surrounding auditory cortex) during lexicosemantic processing, but only speech in hearing individuals activates auditory regions during sensory processing. Thus, neural systems dedicated to processing high-level linguistic information are used for processing language regardless of modality or hearing status, and we do not find evidence for rewiring of afferent connections from visual systems to auditory cortex. [ABSTRACT FROM AUTHOR]
- Published: 2012
- Full Text: View/download PDF