Intermediate acoustic-to-semantic representations link behavioral and neural responses to natural sounds
- Author
Bruno L. Giordano, Michele Esposito, Giancarlo Valente, Elia Formisano; Institut de Neurosciences de la Timone (INT), Aix Marseille Université (AMU)-Centre National de la Recherche Scientifique (CNRS); Maastricht University. Funding: ANR-21-CE37-0027, SoundBrainSem, Representation of natural sounds in the human brain: transformation from the acoustics to the semantics of sound sources in the environment (2021); ANR-16-CONV-0002, ILCB, Institute of Language Communication and the Brain (2016)
- Subjects
General Neuroscience, [SCCO.NEUR] Cognitive science/Neuroscience
- Abstract
Recognizing sounds requires the cerebral transformation of input waveforms into semantic representations. Although past research identified the superior temporal gyrus (STG) as a crucial cortical region, the computational fingerprint of these cerebral transformations remains poorly characterized. Here, we exploit a model comparison framework and contrast the ability of acoustic, semantic (continuous and categorical) and sound-to-event deep neural network representation models to predict perceived sound dissimilarity and 7 T human auditory cortex functional magnetic resonance imaging responses. We confirm that spectrotemporal modulations predict early auditory cortex (Heschl's gyrus) responses, and that auditory dimensions (for example, loudness, periodicity) predict STG responses and perceived dissimilarity. Sound-to-event deep neural networks predict Heschl's gyrus responses similarly to acoustic models but, notably, they outperform all competing models at predicting both STG responses and perceived dissimilarity. Our findings indicate that STG entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for. These representations are compositional in nature and relevant to behavior.
- Published
- 2023