Statistical learning models of early phonetic acquisition struggle with child-centered audio data
- Publication Year :
- 2023
- Publisher :
- Center for Open Science, 2023.
-
Abstract
- Infants learn their native language(s) at an amazing speed. Before they even talk, their perception adapts to the language(s) they hear. However, the mechanisms responsible for this perceptual attunement remain unclear. A long tradition in linguistics points to the importance of specialized language mechanisms that would allow us to quickly and effortlessly learn from the language(s) we are exposed to. However, the currently dominant explanation for perceptual attunement posits that infants apply a domain-general learning mechanism, one that extracts statistical regularities from the speech stream they hear and that may operate in learning across domains and across species. Critically, the feasibility of employing purely domain-general statistical learning mechanisms has only been demonstrated with computational models on unrealistic and simplified input. This paper presents the first attempt to study perceptual attunement with 2,000 hours of ecological child-centered recordings in American English and Metropolitan French. We show that, when applied to ecologically valid data, generic learning mechanisms develop a language-relevant perceptual space but fail to show evidence of perceptual attunement. Only when supplemented with domain-specific audio filtering and augmentation mechanisms do computational models show significant attunement to the language they have been exposed to. Hence, we conclude that, when learning from ecological audio, domain-specific mechanisms may be necessary to guide early language learning in the wild, even if the learning itself is done through generic mechanisms. We anticipate our work to be a starting point for ecologically valid computational models of perceptual attunement in other domains and species.
Details
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....0b1363e0a1d253f7f6945bad9b79a776