46 results for "Charest I"
Search Results
2. OR02-2 * BINGE DRINKING INFLUENCES THE CEREBRAL PROCESSING OF AFFECTIVE VOICES IN YOUNG ADULTS
- Author
-
Maurage, P., Bestelmeyer, P. E. G., Rouger, J., Charest, I., and Belin, P.
- Published
- 2014
- Full Text
- View/download PDF
3. Auditory response of the Temporal Voice Areas (TVA) predicts memory performance for voices, but not bells.
- Author
-
Watson, R., Crabbe, F., Quinones, I., Charest, I., Bestelmeyer, P., Latinus, M., and Belin, P.
- Published
- 2009
- Full Text
- View/download PDF
4. Cerebral correlates of vocal attractiveness perception
- Author
-
Bestelmeyer, P E, Latinus, M, Rouger, J, Bruckert, L, Charest, I, Crabbe, F, and Belin, P
- Published
- 2009
- Full Text
- View/download PDF
5. Anatomical connectivity between face- and voice-selective cortex
- Author
-
Quinones, I, Latinus, M, Charest, I, Crabbe, F, Bobes, A M, and Belin, P
- Published
- 2009
- Full Text
- View/download PDF
6. Multivariate pattern analyses of functional MR imaging of the voice neural network
- Author
-
Rouger, J, Charest, I, De Martino, F, Formisano, E, and Belin, P
- Published
- 2009
- Full Text
- View/download PDF
7. Investigating the representation of voice gender using a continuous carry-over fMRI design
- Author
-
Charest, I, Pernet, C, Crabbe, F, and Belin, P
- Published
- 2009
- Full Text
- View/download PDF
8. Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision
- Author
-
Spoerer, C.J., Kietzmann, T.C., Mehrer, J., Charest, I., and Kriegeskorte, N.
- Abstract
Deep feedforward neural network models of vision dominate in both computational neuroscience and engineering. The primate visual system, by contrast, contains abundant recurrent connections. Recurrent signal flow enables recycling of limited computational resources over time, and so might boost the performance of a physically finite brain or model. Here we show: (1) Recurrent convolutional neural network models outperform feedforward convolutional models matched in their number of parameters in large-scale visual recognition tasks on natural images. (2) Setting a confidence threshold, at which recurrent computations terminate and a decision is made, enables flexible trading of speed for accuracy. At a given confidence threshold, the model expends more time and energy on images that are harder to recognise, without requiring additional parameters for deeper computations. (3) The recurrent model's reaction time for an image predicts the human reaction time for the same image better than several parameter-matched and state-of-the-art feedforward models. (4) Across confidence thresholds, the recurrent model emulates the behaviour of feedforward control models in that it achieves the same accuracy at approximately the same computational cost (mean number of floating-point operations). However, the recurrent model can be run longer (higher confidence threshold) and then outperforms parameter-matched feedforward comparison models. These results suggest that recurrent connectivity, a hallmark of biological visual systems, may be essential for understanding the accuracy, flexibility, and dynamics of human visual recognition. Author summary: Deep neural networks provide the best current models of biological vision and achieve the highest performance in computer vision. Inspired by the primate brain, these models transform the image signals through a sequence of stages, leading to recognition. Unlike brains in which outputs of a given computation are fed back into the same com
- Published
- 2020
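Entry 8 above (and its duplicate record, entry 31 below) describes an early-exit mechanism in which recurrent computation stops once readout confidence crosses a threshold. The following is a minimal, illustrative sketch of that idea, not the authors' model or code: the recurrent step is a stand-in tanh update with random weights, and all names, sizes, and the 0.9 threshold are assumptions; the paper uses trained recurrent convolutional networks on natural images.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_with_threshold(x, W_in, W_rec, W_out, threshold=0.9, max_steps=16):
    """Run recurrent updates until readout confidence exceeds `threshold`; return the
    decision, the number of steps used (a proxy for reaction time / compute), and the
    final confidence."""
    h = np.zeros(W_rec.shape[0])
    for t in range(1, max_steps + 1):
        h = np.tanh(W_in @ x + W_rec @ h)       # stand-in recurrent state update
        p = softmax(W_out @ h)                  # class probabilities from the readout
        if p.max() >= threshold:                # confident enough: terminate early
            return int(p.argmax()), t, float(p.max())
    return int(p.argmax()), max_steps, float(p.max())  # forced decision at the cap

# Toy usage with random weights and a random input.
rng = np.random.default_rng(0)
W_in, W_rec, W_out = rng.normal(size=(64, 32)), 0.1 * rng.normal(size=(64, 64)), rng.normal(size=(10, 64))
label, steps, confidence = classify_with_threshold(rng.normal(size=32), W_in, W_rec, W_out)
print(label, steps, round(confidence, 3))
```

Raising the threshold lets the same model run longer and decide more accurately, which is the speed-accuracy trade-off the entry describes.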
9. Animacy Dimensions Ratings and Approach for Decorrelating Stimuli Dimensions
- Author
-
Jozwik, KM, Charest, I, Kriegeskorte, N, and Cichy, RM
- Abstract
The distinction between animate and inanimate objects plays an important role in object recognition. The following 5 dimensions were shown in previous studies to be important for animacy perception independently: “being alive”, “looking like an animal”, “having mobility”, “having agency” and “being unpredictable”. However, it is not known how these dimensions in combination determine how we perceive animacy. To investigate, we created a stimulus set (M = 300) with almost all dimension combinations for which we acquired behavioural ratings on the 5 dimensions. We show that subjects (N = 26) are consistent in animacy ratings (r = 0.6) and that “being alive” and “having agency” dimensions are highly correlated (r = 0.62). To design a stimulus sub-set that is decorrelated on animacy dimensions for future fMRI and EEG experiments we used a genetic algorithm. Our approach proved to be successful in stimulus selection (max r = 0.35, compared to max r = 0.59 when using a random search). In summary, our study systematically investigates animacy dimensions, provides new insights into animacy perception, and presents an approach for decorrelating stimuli dimensions that can be useful for other studies.
- Published
- 2018
- Full Text
- View/download PDF
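Entry 9 above uses a genetic algorithm to pick a stimulus subset whose rating dimensions are decorrelated. The sketch below is an illustrative, simplified version of that kind of procedure (selection plus single-swap mutation, no crossover), not the authors' code; the subset size, population size, and fitness function (worst pairwise |r| across dimensions) are assumptions.

```python
import numpy as np

def max_abs_corr(ratings, idx):
    """Worst-case |r| among all pairs of rating dimensions within the chosen subset."""
    c = np.corrcoef(ratings[idx].T)
    return np.abs(c[np.triu_indices_from(c, k=1)]).max()

def select_subset(ratings, subset_size=128, pop_size=60, generations=200, seed=0):
    """Evolve stimulus subsets that minimise the worst pairwise dimension correlation."""
    rng = np.random.default_rng(seed)
    n = ratings.shape[0]
    pop = [rng.choice(n, subset_size, replace=False) for _ in range(pop_size)]
    for _ in range(generations):
        fitness = np.array([max_abs_corr(ratings, ind) for ind in pop])
        order = np.argsort(fitness)                       # lower (more decorrelated) is better
        survivors = [pop[i] for i in order[: pop_size // 2]]
        children = []
        for parent in survivors:
            child = parent.copy()                         # mutate: swap one stimulus for an unused one
            child[rng.integers(subset_size)] = rng.choice(np.setdiff1d(np.arange(n), parent))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: max_abs_corr(ratings, ind))

# Toy usage: 300 stimuli rated on 5 positively correlated dimensions.
rng = np.random.default_rng(1)
ratings = rng.normal(size=(300, 5)) + 0.5 * rng.normal(size=(300, 1))
best = select_subset(ratings)
print(len(best), round(max_abs_corr(ratings, best), 2))
```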
10. Testing the fast consolidation hypothesis of retrieval-mediated learning
- Author
-
Ferreira, C.S., Charest, I., and Wimber, M.
- Published
- 2018
- Full Text
- View/download PDF
11. Conjoint and independent neural coding of bimodal face/voice identity investigated with fMRI
- Author
-
Latinus, Marianne, Joassin, F, Watson, R, Charest, I, Love, Scott, McAleer, Phil, and Belin, P
- Subjects
- multisensory, fmri, [SDV]Life Sciences [q-bio], ComputingMilieux_MISCELLANEOUS, identity
- Abstract
National audience
- Published
- 2011
12. OR02-2 * BINGE DRINKING INFLUENCES THE CEREBRAL PROCESSING OF AFFECTIVE VOICES IN YOUNG ADULTS
- Author
-
Maurage, P., Bestelmeyer, P. E. G., Rouger, J., Charest, I., and Belin, P.
- Published
- 2014
- Full Text
- View/download PDF
13. Representational geometry measures predict categorisation speed for particular visual objects
- Author
-
Charest, I., Carlson, T. A., and Kriegeskorte, N.
- Published
- 2014
- Full Text
- View/download PDF
14. Cerebral Processing of Voice Gender Studied Using a Continuous Carryover fMRI Design
- Author
-
Charest, I., Pernet, C., Latinus, M., Crabbe, F., and Belin, P.
- Published
- 2012
- Full Text
- View/download PDF
15. Impaired Emotional Facial Expression Decoding in Alcoholism is Also Present for Emotional Prosody and Body Postures
- Author
-
Maurage, P., Campanella, S., Philippot, P., Charest, I., Martin, S., and de Timary, P.
- Published
- 2009
- Full Text
- View/download PDF
16. Differential Roles With Aging of Visual and Proprioceptive Afferent Information for Fine Motor Control
- Author
-
Proteau, L., Charest, I., and Chaput, S.
- Published
- 1994
- Full Text
- View/download PDF
17. Electrophysiological evidence for an early processing of human voices
- Author
-
Fillion-Bilodeau Sarah, Latinus Marianne, Quiñones Ileana, Rousselet Guillaume A, Pernet Cyril R, Charest Ian, Chartrand Jean-Pierre, and Belin Pascal
- Subjects
- Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571, Neurophysiology and neuropsychology, QP351-495
- Abstract
Background: Previous electrophysiological studies have identified a "voice specific response" (VSR) peaking around 320 ms after stimulus onset, a latency markedly longer than the 70 ms needed to discriminate living from non-living sound sources and the 150 ms to 200 ms needed for the processing of voice paralinguistic qualities. In the present study, we investigated whether an early electrophysiological difference between voice and non-voice stimuli could be observed. Results: ERPs were recorded from 32 healthy volunteers who listened to 200 ms long stimuli from three sound categories - voices, bird songs and environmental sounds - whilst performing a pure-tone detection task. ERP analyses revealed voice/non-voice amplitude differences emerging as early as 164 ms post stimulus onset and peaking around 200 ms on fronto-temporal (positivity) and occipital (negativity) electrodes. Conclusion: Our electrophysiological results suggest a rapid brain discrimination of sounds of voice, termed the "fronto-temporal positivity to voices" (FTPV), at latencies comparable to the well-known face-preferential N170.
- Published
- 2009
- Full Text
- View/download PDF
18. Reconstructing Spatiotemporal Trajectories of Visual Object Memories in the Human Brain.
- Author
-
Lifanov-Carr J, Griffiths BJ, Linde-Domingo J, Ferreira CS, Wilson M, Mayhew SD, Charest I, and Wimber M
- Subjects
- Humans, Male, Female, Young Adult, Adult, Visual Perception physiology, Memory physiology, Photic Stimulation methods, Magnetic Resonance Imaging, Mental Recall physiology, Brain Mapping, Brain physiology, Brain diagnostic imaging, Electroencephalography methods
- Abstract
How the human brain reconstructs, step-by-step, the core elements of past experiences is still unclear. Here, we map the spatiotemporal trajectories along which visual object memories are reconstructed during associative recall. Specifically, we inquire whether retrieval reinstates feature representations in a copy-like but reversed direction with respect to the initial perceptual experience, or alternatively, this reconstruction involves format transformations and regions beyond initial perception. Participants from two cohorts studied new associations between verbs and randomly paired object images, and subsequently recalled the objects when presented with the corresponding verb cue. We first analyze multivariate fMRI patterns to map where in the brain high- and low-level object features can be decoded during perception and retrieval, showing that retrieval is dominated by conceptual features, represented in comparatively late visual and parietal areas. A separately acquired EEG dataset is then used to track the temporal evolution of the reactivated patterns using similarity-based EEG-fMRI fusion. This fusion suggests that memory reconstruction proceeds from anterior frontotemporal to posterior occipital and parietal regions, in line with a conceptual-to-perceptual gradient but only partly following the same trajectories as during perception. Specifically, a linear regression statistically confirms that the sequential activation of ventral visual stream regions is reversed between image perception and retrieval. The fusion analysis also suggests an information relay to frontoparietal areas late during retrieval. Together, the results shed light onto the temporal dynamics of memory recall and the transformations that the information undergoes between the initial experience and its later reconstruction from memory., Competing Interests: The authors declare no competing financial interests., (Copyright © 2024 Lifanov-Carr et al.)
- Published
- 2024
- Full Text
- View/download PDF
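Entry 18 above uses similarity-based EEG-fMRI fusion. The sketch below illustrates the basic computation under stated assumptions (it is not the study pipeline): a representational dissimilarity matrix (RDM) is computed at each EEG time point and Spearman-correlated with a region's fMRI RDM, giving a time course of when that region's representational geometry is expressed. The array shapes and the 1 - Pearson-r dissimilarity measure are our choices.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condition x feature patterns -> condensed RDM (1 - Pearson r between conditions)."""
    return pdist(patterns, metric="correlation")

def fusion_timecourse(eeg, fmri_patterns):
    """eeg: (n_times, n_conditions, n_channels); fmri_patterns: (n_conditions, n_voxels).
    Returns the Spearman correlation between the EEG RDM at each time point and the
    region's fMRI RDM."""
    fmri_rdm = rdm(fmri_patterns)
    return np.array([spearmanr(rdm(eeg[t]), fmri_rdm).correlation
                     for t in range(eeg.shape[0])])

# Toy usage with random data: 50 time points, 20 conditions, 64 channels, 500 voxels.
rng = np.random.default_rng(0)
timecourse = fusion_timecourse(rng.normal(size=(50, 20, 64)), rng.normal(size=(20, 500)))
print(timecourse.shape)
```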
19. Neural computations in prosopagnosia.
- Author
-
Faghel-Soubeyrand S, Richoz AR, Waeber D, Woodhams J, Caldara R, Gosselin F, and Charest I
- Subjects
- Humans, Female, Adult, Brain physiopathology, Neural Networks, Computer, Middle Aged, Pattern Recognition, Visual physiology, Male, Models, Neurological, Prosopagnosia physiopathology
- Abstract
We report an investigation of the neural processes involved in the processing of faces and objects of brain-lesioned patient PS, a well-documented case of pure acquired prosopagnosia. We gathered a substantial dataset of high-density electrophysiological recordings from both PS and neurotypicals. Using representational similarity analysis, we produced time-resolved brain representations in a format that facilitates direct comparisons across time points, different individuals, and computational models. To understand how the lesions in PS's ventral stream affect the temporal evolution of her brain representations, we computed the temporal generalization of her brain representations. We uncovered that PS's early brain representations exhibit an unusual similarity to later representations, implying an excessive generalization of early visual patterns. To reveal the underlying computational deficits, we correlated PS' brain representations with those of deep neural networks (DNN). We found that the computations underlying PS' brain activity bore a closer resemblance to early layers of a visual DNN than those of controls. However, the brain representations in neurotypicals became more akin to those of the later layers of the model compared to PS. We confirmed PS's deficits in high-level brain representations by demonstrating that her brain representations exhibited less similarity with those of a DNN of semantics., (© The Author(s) 2024. Published by Oxford University Press.)
- Published
- 2024
- Full Text
- View/download PDF
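Entry 19 above reports temporal generalization of brain representations, with patient PS's early representations resembling later ones. The sketch below illustrates one common way to compute an RDM-based temporal generalization matrix; it is not the published analysis, and the data shapes and use of Spearman correlation are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def temporal_generalization(data):
    """data: (n_times, n_conditions, n_channels). Returns an (n_times, n_times) matrix
    where cell (i, j) is the Spearman correlation between the RDMs at times i and j."""
    rdms = [pdist(data[t], metric="correlation") for t in range(data.shape[0])]
    n = len(rdms)
    tg = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            tg[i, j] = spearmanr(rdms[i], rdms[j]).correlation
    return tg

# Toy usage: 40 time points, 24 conditions, 64 channels of random data.
rng = np.random.default_rng(0)
tg = temporal_generalization(rng.normal(size=(40, 24, 64)))
print(tg.shape)  # off-diagonal cells index how much early geometry resembles later geometry
```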
20. Decoding face recognition abilities in the human brain.
- Author
-
Faghel-Soubeyrand S, Ramon M, Bamps E, Zoia M, Woodhams J, Richoz AR, Caldara R, Gosselin F, and Charest I
- Abstract
Why are some individuals better at recognizing faces? Uncovering the neural mechanisms supporting face recognition ability has proven elusive. To tackle this challenge, we used a multimodal data-driven approach combining neuroimaging, computational modeling, and behavioral tests. We recorded the high-density electroencephalographic brain activity of individuals with extraordinary face recognition abilities-super-recognizers-and typical recognizers in response to diverse visual stimuli. Using multivariate pattern analyses, we decoded face recognition abilities from 1 s of brain activity with up to 80% accuracy. To better understand the mechanisms subtending this decoding, we compared representations in the brains of our participants with those in artificial neural network models of vision and semantics, as well as with those involved in human judgments of shape and meaning similarity. Compared to typical recognizers, we found stronger associations between early brain representations of super-recognizers and midlevel representations of vision models as well as shape similarity judgments. Moreover, we found stronger associations between late brain representations of super-recognizers and representations of the artificial semantic model as well as meaning similarity judgments. Overall, these results indicate that important individual variations in brain processing, including neural computations extending beyond purely visual processes, support differences in face recognition abilities. They provide the first empirical evidence for an association between semantic computations and face recognition abilities. We believe that such multimodal data-driven approaches will likely play a critical role in further revealing the complex nature of idiosyncratic face recognition in the human brain., (© The Author(s) 2024. Published by Oxford University Press on behalf of National Academy of Sciences.)
- Published
- 2024
- Full Text
- View/download PDF
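Entry 20 above decodes face recognition ability from brief segments of EEG using multivariate pattern analysis. The sketch below shows a generic cross-validated decoder of group membership on random stand-in data, not the authors' pipeline; the feature layout (channels x time bins), classifier choice, and fold count are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 128 * 10          # e.g. 128 channels x 10 time bins per 1 s segment
X = rng.normal(size=(n_trials, n_features))   # stand-in for EEG feature vectors
y = rng.integers(0, 2, size=n_trials)         # 0 = typical recognizer, 1 = super-recognizer

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validated decoding accuracy
print("mean decoding accuracy:", round(float(scores.mean()), 2))  # ~0.5 for random data
```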
21. Re-expression of CA1 and entorhinal activity patterns preserves temporal context memory at long timescales.
- Author
-
Zou F, Wanjia G, Allen EJ, Wu Y, Charest I, Naselaris T, Kay K, Kuhl BA, Hutchinson JB, and DuBrow S
- Subjects
- Humans, Temporal Lobe diagnostic imaging, Magnetic Resonance Imaging, Recognition, Psychology, Hippocampus, Entorhinal Cortex
- Abstract
Converging, cross-species evidence indicates that memory for time is supported by hippocampal area CA1 and entorhinal cortex. However, limited evidence characterizes how these regions preserve temporal memories over long timescales (e.g., months). At long timescales, memoranda may be encountered in multiple temporal contexts, potentially creating interference. Here, using 7T fMRI, we measured CA1 and entorhinal activity patterns as human participants viewed thousands of natural scene images distributed, and repeated, across many months. We show that memory for an image's original temporal context was predicted by the degree to which CA1/entorhinal activity patterns from the first encounter with an image were re-expressed during re-encounters occurring minutes to months later. Critically, temporal memory signals were dissociable from predictors of recognition confidence, which were carried by distinct medial temporal lobe expressions. These findings suggest that CA1 and entorhinal cortex preserve temporal memories across long timescales by coding for and reinstating temporal context information., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
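Entry 21 above relates memory for an image's temporal context to how strongly its first-encounter activity pattern is re-expressed at later encounters. The sketch below illustrates that item-wise encoding/re-encounter pattern correlation and its relation to a behavioural score; it is not the study code, and the ROI size, correlation measures, and simulated data are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def reexpression_scores(first_patterns, later_patterns):
    """Item-wise Pearson correlation between the activity pattern from the first
    encounter and the pattern from a later re-encounter ((n_items, n_voxels) arrays)."""
    return np.array([pearsonr(a, b)[0] for a, b in zip(first_patterns, later_patterns)])

# Toy usage: 100 items, 200-voxel ROI patterns, plus a temporal-memory score per item.
rng = np.random.default_rng(0)
first = rng.normal(size=(100, 200))
later = 0.3 * first + rng.normal(size=(100, 200))   # simulated partial re-expression
temporal_memory = rng.normal(size=100)              # stand-in behavioural scores

scores = reexpression_scores(first, later)
rho, p = spearmanr(scores, temporal_memory)         # across-item brain-behaviour relation
print(round(float(scores.mean()), 2), round(float(rho), 2))
```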
22. Improving the accuracy of single-trial fMRI response estimates using GLMsingle.
- Author
-
Prince JS, Charest I, Kurzawski JW, Pyles JA, Tarr MJ, and Kay KN
- Subjects
- Humans, Reproducibility of Results, Neuroimaging, Signal-To-Noise Ratio, Magnetic Resonance Imaging, Artificial Intelligence
- Abstract
Advances in artificial intelligence have inspired a paradigm shift in human neuroscience, yielding large-scale functional magnetic resonance imaging (fMRI) datasets that provide high-resolution brain responses to thousands of naturalistic visual stimuli. Because such experiments necessarily involve brief stimulus durations and few repetitions of each stimulus, achieving sufficient signal-to-noise ratio can be a major challenge. We address this challenge by introducing GLMsingle , a scalable, user-friendly toolbox available in MATLAB and Python that enables accurate estimation of single-trial fMRI responses (glmsingle.org). Requiring only fMRI time-series data and a design matrix as inputs, GLMsingle integrates three techniques for improving the accuracy of trial-wise general linear model (GLM) beta estimates. First, for each voxel, a custom hemodynamic response function (HRF) is identified from a library of candidate functions. Second, cross-validation is used to derive a set of noise regressors from voxels unrelated to the experiment. Third, to improve the stability of beta estimates for closely spaced trials, betas are regularized on a voxel-wise basis using ridge regression. Applying GLMsingle to the Natural Scenes Dataset and BOLD5000, we find that GLMsingle substantially improves the reliability of beta estimates across visually-responsive cortex in all subjects. Comparable improvements in reliability are also observed in a smaller-scale auditory dataset from the StudyForrest experiment. These improvements translate into tangible benefits for higher-level analyses relevant to systems and cognitive neuroscience. We demonstrate that GLMsingle: (i) helps decorrelate response estimates between trials nearby in time; (ii) enhances representational similarity between subjects within and across datasets; and (iii) boosts one-versus-many decoding of visual stimuli. GLMsingle is a publicly available tool that can significantly improve the quality of past, present, and future neuroimaging datasets sampling brain activity across many experimental conditions., Competing Interests: JP, IC, JK, JP, MT, KK No competing interests declared, (© 2022, Prince et al.)
- Published
- 2022
- Full Text
- View/download PDF
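Entry 22 above describes GLMsingle, which (among other techniques) regularises single-trial beta estimates with voxel-wise ridge regression. The sketch below shows only that ridge idea in closed form on toy data; it is not the GLMsingle implementation (see glmsingle.org for the actual toolbox), and the design matrix, noise level, and fixed regularisation value are assumptions (GLMsingle selects the amount of shrinkage per voxel via cross-validation).

```python
import numpy as np

def ridge_betas(X, y, lam):
    """Closed-form ridge solution beta = (X'X + lam*I)^-1 X'y for one voxel."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Toy usage: 300 time points, 40 closely spaced trials with overlapping responses.
rng = np.random.default_rng(0)
X = np.clip(rng.normal(size=(300, 40)), 0, None)    # stand-in for HRF-convolved trial regressors
true_betas = rng.normal(size=40)
y = X @ true_betas + rng.normal(scale=2.0, size=300)

betas_ols = ridge_betas(X, y, lam=0.0)              # ordinary least squares (no shrinkage)
betas_ridge = ridge_betas(X, y, lam=50.0)           # shrunken, more stable estimates
print(round(float(np.corrcoef(true_betas, betas_ols)[0, 1]), 2),
      round(float(np.corrcoef(true_betas, betas_ridge)[0, 1]), 2))
```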
23. Disentangling five dimensions of animacy in human brain and behaviour.
- Author
-
Jozwik KM, Najarro E, van den Bosch JJF, Charest I, Cichy RM, and Kriegeskorte N
- Subjects
- Humans, Brain Mapping, Magnetic Resonance Imaging methods, Judgment physiology, Pattern Recognition, Visual physiology, Brain diagnostic imaging, Brain physiology
- Abstract
Distinguishing animate from inanimate things is of great behavioural importance. Despite distinct brain and behavioural responses to animate and inanimate things, it remains unclear which object properties drive these responses. Here, we investigate the importance of five object dimensions related to animacy ("being alive", "looking like an animal", "having agency", "having mobility", and "being unpredictable") in brain (fMRI, EEG) and behaviour (property and similarity judgements) of 19 participants. We used a stimulus set of 128 images, optimized by a genetic algorithm to disentangle these five dimensions. The five dimensions explained much variance in the similarity judgments. Each dimension explained significant variance in the brain representations (except, surprisingly, "being alive"), however, to a lesser extent than in behaviour. Different brain regions sensitive to animacy may represent distinct dimensions, either as accessible perceptual stepping stones toward detecting whether something is alive or because they are of behavioural importance in their own right., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
24. Researcher perspectives on ethics considerations in epigenetics: an international survey.
- Author
-
Dupras C, Knoppers T, Palmour N, Beauchamp E, Liosi S, Siebert R, Berner AM, Beck S, Charest I, and Joly Y
- Subjects
- Humans, Surveys and Questionnaires, DNA Methylation, Epigenomics
- Abstract
Over the past decade, bioethicists, legal scholars and social scientists have started to investigate the potential implications of epigenetic research and technologies on medicine and society. There is growing literature discussing the most promising opportunities, as well as arising ethical, legal and social issues (ELSI). This paper explores the views of epigenetic researchers about some of these discussions. From January to March 2020, we conducted an online survey of 189 epigenetic researchers working in 31 countries. We questioned them about the scope of their field, opportunities in different areas of specialization, and ELSI in the conduct of research and knowledge translation. We also assessed their level of concern regarding four emerging non-medical applications of epigenetic testing-i.e., in life insurance, forensics, immigration and direct-to-consumer testing. Although there was strong agreement on DNA methylation, histone modifications, 3D structure of chromatin and nucleosomes being integral elements of the field, there was considerable disagreement on transcription factors, RNA interference, RNA splicing and prions. The most prevalent ELSI experienced or witnessed by respondents were in obtaining timely access to epigenetic data in existing databases, and in the communication of epigenetic findings by the media. They expressed high levels of concern regarding non-medical applications of epigenetics, echoing cautionary appraisals in the social sciences and humanities literature., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
25. Sleep spindles track cortical learning patterns for memory consolidation.
- Author
-
Petzka M, Chatburn A, Charest I, Balanos GM, and Staresina BP
- Subjects
- Electroencephalography, Humans, Learning, Polysomnography, Sleep, Memory Consolidation
- Abstract
Memory consolidation-the transformation of labile memory traces into stable long-term representations-is facilitated by post-learning sleep. Computational and biophysical models suggest that sleep spindles may play a key mechanistic role for consolidation, igniting structural changes at cortical sites involved in prior learning. Here, we tested the resulting prediction that spindles are most pronounced over learning-related cortical areas and that the extent of this learning-spindle overlap predicts behavioral measures of memory consolidation. Using high-density scalp electroencephalography (EEG) and polysomnography (PSG) in healthy volunteers, we first identified cortical areas engaged during a temporospatial associative memory task (power decreases in the alpha/beta frequency range, 6-20 Hz). Critically, we found that participant-specific topographies (i.e., spatial distributions) of post-learning sleep spindle amplitude correlated with participant-specific learning topographies. Importantly, the extent to which spindles tracked learning patterns further predicted memory consolidation across participants. Our results provide empirical evidence for a role of post-learning sleep spindles in tracking learning networks, thereby facilitating memory consolidation., Competing Interests: Declaration of interests The authors declare no competing interests., (Copyright © 2022 The Author(s). Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
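Entry 25 above correlates each participant's topography of post-learning spindle amplitude with their topography of learning-related alpha/beta power decreases, and relates that overlap to consolidation. The sketch below illustrates that two-level correlation logic on simulated topographies; it is not the published pipeline, and the electrode count, Spearman correlations, and simulated behaviour are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_subjects, n_electrodes = 20, 64
learning_topo = rng.normal(size=(n_subjects, n_electrodes))   # per-electrode alpha/beta power decreases
spindle_topo = 0.4 * learning_topo + rng.normal(size=(n_subjects, n_electrodes))
consolidation = rng.normal(size=n_subjects)                    # stand-in behavioural consolidation scores

# Within-participant spatial overlap between learning and spindle topographies.
overlap = np.array([spearmanr(learning_topo[s], spindle_topo[s]).correlation
                    for s in range(n_subjects)])
# Across participants, does greater overlap predict better consolidation?
rho, p = spearmanr(overlap, consolidation)
print(round(float(overlap.mean()), 2), round(float(rho), 2))
```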
26. A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence.
- Author
-
Allen EJ, St-Yves G, Wu Y, Breedlove JL, Prince JS, Dowdle LT, Nau M, Caron B, Pestilli F, Charest I, Hutchinson JB, Naselaris T, and Kay K
- Subjects
- Artificial Intelligence, Brain diagnostic imaging, Brain physiology, Brain Mapping methods, Humans, Neural Networks, Computer, Recognition, Psychology, Cognitive Neuroscience, Magnetic Resonance Imaging methods
- Abstract
Extensive sampling of neural activity during rich cognitive phenomena is critical for robust understanding of brain function. Here we present the Natural Scenes Dataset (NSD), in which high-resolution functional magnetic resonance imaging responses to tens of thousands of richly annotated natural scenes were measured while participants performed a continuous recognition task. To optimize data quality, we developed and applied novel estimation and denoising techniques. Simple visual inspections of the NSD data reveal clear representational transformations along the ventral visual pathway. Further exemplifying the inferential power of the dataset, we used NSD to build and train deep neural network models that predict brain activity more accurately than state-of-the-art models from computer vision. NSD also includes substantial resting-state and diffusion data, enabling network neuroscience perspectives to constrain and enhance models of perception and memory. Given its unprecedented scale, quality and breadth, NSD opens new avenues of inquiry in cognitive neuroscience and artificial intelligence., (© 2021. The Author(s), under exclusive licence to Springer Nature America, Inc.)
- Published
- 2022
- Full Text
- View/download PDF
27. The hippocampus as the switchboard between perception and memory.
- Author
-
Treder MS, Charest I, Michelmann S, Martín-Buro MC, Roux F, Carceller-Benito F, Ugalde-Canitrot A, Rollings DT, Sawlani V, Chelvarajah R, Wimber M, Hanslmayr S, and Staresina BP
- Subjects
- Adult, Brain Mapping methods, Case-Control Studies, Electroencephalography, Epilepsy, Female, Humans, Male, Middle Aged, Prefrontal Cortex physiology, Young Adult, Hippocampus physiology, Memory physiology, Visual Perception physiology
- Abstract
Adaptive memory recall requires a rapid and flexible switch from external perceptual reminders to internal mnemonic representations. However, owing to the limited temporal or spatial resolution of brain imaging modalities used in isolation, the hippocampal-cortical dynamics supporting this process remain unknown. We thus employed an object-scene cued recall paradigm across two studies, including intracranial electroencephalography (iEEG) and high-density scalp EEG. First, a sustained increase in hippocampal high gamma power (55 to 110 Hz) emerged 500 ms after cue onset and distinguished successful vs. unsuccessful recall. This increase in gamma power for successful recall was followed by a decrease in hippocampal alpha power (8 to 12 Hz). Intriguingly, the hippocampal gamma power increase marked the moment at which extrahippocampal activation patterns shifted from perceptual cue toward mnemonic target representations. In parallel, source-localized EEG alpha power revealed that the recall signal progresses from hippocampus to posterior parietal cortex and then to medial prefrontal cortex. Together, these results identify the hippocampus as the switchboard between perception and memory and elucidate the ensuing hippocampal-cortical dynamics supporting the recall process., Competing Interests: The authors declare no competing interest., (Copyright © 2021 the Author(s). Published by PNAS.)
- Published
- 2021
- Full Text
- View/download PDF
28. Diagnostic Features for Human Categorisation of Adult and Child Faces.
- Author
-
Faghel-Soubeyrand S, Kloess JA, Gosselin F, Charest I, and Woodhams J
- Abstract
Knowing how humans differentiate children from adults has useful implications in many areas of both forensic and cognitive psychology. Yet, how we extract age from faces has been surprisingly underexplored in both disciplines. Here, we used a novel data-driven experimental technique to objectively measure the facial features human observers use to categorise child and adult faces. Relying on more than 35,000 trials, we used a reverse correlation technique that enabled us to reveal how specific features which are known to be important in face-perception - position, spatial-frequency (SF), and orientation - are associated with accurate child and adult discrimination. This showed that human observers relied on evidence in the nasal bone and eyebrow area for accurate adult categorisation, while they relied on the eye and jawline area to accurately categorise child faces. For orientation structure, only facial information of vertical orientation was linked to face-adult categorisation, while features of horizontal and, to a lesser extent oblique orientations, were more diagnostic of a child face. Finally, we found that SF diagnosticity showed a U-shaped pattern for face-age categorisation, with information in low and high SFs being diagnostic of child faces, and mid SFs being diagnostic of adult faces. Through this first characterisation of the facial features of face-age categorisation, we show that important information found in psychophysical studies of face-perception in general (i.e., the eye area, horizontals, and mid-level SFs) is crucial to the practical context of face-age categorisation, and present data-driven procedures through which face-age classification training could be implemented for real-world challenges., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2021 Faghel-Soubeyrand, Kloess, Gosselin, Charest and Woodhams.)
- Published
- 2021
- Full Text
- View/download PDF
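Entry 28 above uses a reverse correlation technique to reveal which facial features support age categorisation. The sketch below illustrates the generic reverse correlation logic with random visibility masks and a simulated observer (a classification image built by weighting masks by response accuracy); it is not the authors' procedure, which sampled position, spatial frequency, and orientation, and all sizes and the simulated response model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, h, w = 5000, 32, 32
diagnostic = np.zeros((h, w))
diagnostic[8:14, 10:22] = 1.0                        # simulated diagnostic region (e.g. eye/brow area)

masks = rng.random(size=(n_trials, h, w))            # per-trial random visibility masks
# Simulated observer: more likely correct when the diagnostic region is visible.
evidence = (masks * diagnostic).sum(axis=(1, 2))
p_correct = 1.0 / (1.0 + np.exp(-(evidence - evidence.mean()) / evidence.std()))
correct = rng.random(n_trials) < p_correct

# Classification image: masks weighted by accuracy (+1 correct, -1 incorrect), averaged.
weights = np.where(correct, 1.0, -1.0)
classification_image = (weights[:, None, None] * masks).mean(axis=0)
print(round(float(classification_image[diagnostic > 0].mean()), 4),   # elevated in the diagnostic region
      round(float(classification_image[diagnostic == 0].mean()), 4))  # near zero elsewhere
```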
29. Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations.
- Author
-
Lowe MX, Mohsenzadeh Y, Lahner B, Charest I, Oliva A, and Teng S
- Subjects
- Acoustic Stimulation methods, Auditory Perception physiology, Cochlea, Humans, Magnetic Resonance Imaging methods, Magnetoencephalography methods, Brain Mapping methods, Semantics
- Abstract
How does the auditory system categorize natural sounds? Here we apply multimodal neuroimaging to illustrate the progression from acoustic to semantically dominated representations. Combining magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) scans of observers listening to naturalistic sounds, we found superior temporal responses beginning ∼55 ms post-stimulus onset, spreading to extratemporal cortices by ∼100 ms. Early regions were distinguished less by onset/peak latency than by functional properties and overall temporal response profiles. Early acoustically-dominated representations trended systematically toward category dominance over time (after ∼200 ms) and space (beyond primary cortex). Semantic category representation was spatially specific: Vocalizations were preferentially distinguished in frontotemporal voice-selective regions and the fusiform; scenes and objects were distinguished in parahippocampal and medial place areas. Our results are consistent with real-world events coded via an extended auditory processing hierarchy, in which acoustic representations rapidly enter multiple streams specialized by category, including areas typically considered visual cortex.
- Published
- 2021
- Full Text
- View/download PDF
30. Does sleep-dependent consolidation favour weak memories?
- Author
-
Petzka M, Charest I, Balanos GM, and Staresina BP
- Subjects
- Humans, Learning, Sleep, Wakefulness, Memory, Memory Consolidation
- Abstract
Sleep stabilizes newly acquired memories, a process referred to as memory consolidation. According to recent studies, sleep-dependent consolidation processes might be deployed to different extents for different types of memories. In particular, weaker memories might benefit greater from post-learning sleep than stronger memories. However, under standard testing conditions, sleep-dependent consolidation effects for stronger memories might be obscured by ceiling effects. To test this possibility, we devised a new memory paradigm (Memory Arena) in which participants learned temporospatial arrangements of objects. Prior to a delay period spent either awake or asleep, training thresholds were controlled to yield relatively weak or relatively strong memories. After the delay period, retrieval difficulty was controlled via the presence or absence of a retroactive interference task. Under standard testing conditions (no interference), a sleep-dependent consolidation effect was indeed observed for weaker memories only. Critically though, with increased retrieval demands, sleep-dependent consolidation effects were seen for both weaker and stronger memories. These results suggest that all memories are consolidated during sleep, but that memories of different strengths require different testing conditions to unveil their benefit from post-learning sleep., (Copyright © 2020 The Author(s). Published by Elsevier Ltd.. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
31. Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision.
- Author
-
Spoerer CJ, Kietzmann TC, Mehrer J, Charest I, and Kriegeskorte N
- Subjects
- Adult, Computational Biology, Female, Humans, Male, Young Adult, Models, Neurological, Neural Networks, Computer, Reaction Time physiology, Vision, Ocular physiology, Visual Perception physiology
- Abstract
Deep feedforward neural network models of vision dominate in both computational neuroscience and engineering. The primate visual system, by contrast, contains abundant recurrent connections. Recurrent signal flow enables recycling of limited computational resources over time, and so might boost the performance of a physically finite brain or model. Here we show: (1) Recurrent convolutional neural network models outperform feedforward convolutional models matched in their number of parameters in large-scale visual recognition tasks on natural images. (2) Setting a confidence threshold, at which recurrent computations terminate and a decision is made, enables flexible trading of speed for accuracy. At a given confidence threshold, the model expends more time and energy on images that are harder to recognise, without requiring additional parameters for deeper computations. (3) The recurrent model's reaction time for an image predicts the human reaction time for the same image better than several parameter-matched and state-of-the-art feedforward models. (4) Across confidence thresholds, the recurrent model emulates the behaviour of feedforward control models in that it achieves the same accuracy at approximately the same computational cost (mean number of floating-point operations). However, the recurrent model can be run longer (higher confidence threshold) and then outperforms parameter-matched feedforward comparison models. These results suggest that recurrent connectivity, a hallmark of biological visual systems, may be essential for understanding the accuracy, flexibility, and dynamics of human visual recognition., Competing Interests: The authors have declared that no competing interests exist.
- Published
- 2020
- Full Text
- View/download PDF
32. Clinically relevant autistic traits predict greater reliance on detail for image recognition.
- Author
-
Alink A and Charest I
- Subjects
- Adolescent, Autistic Disorder psychology, Female, Humans, Male, Pattern Recognition, Visual physiology, Recognition, Psychology, Vision, Ocular physiology, Visual Perception physiology, Young Adult, Attention physiology, Autism Spectrum Disorder psychology, Cognition physiology
- Abstract
Individuals with an autism spectrum disorder (ASD) diagnosis are often described as having an eye for detail. But it remains to be shown that a detail-focused processing bias is a ubiquitous property of vision in individuals with ASD. To address this question, we investigated whether a greater number of autistic traits in neurotypical subjects is associated with an increased reliance on image details during a natural image recognition task. To this end, we use a novel reverse correlation-based method (feature diagnosticity mapping) for measuring the relative importance of low-level image features for object recognition. The main finding of this study is that image recognition in participants with an above-median number of autistic traits benefited more from the presence of high-spatial frequency image features. Furthermore, we found that this reliance-on-detail effect was best predicted by the presence of the most clinically relevant autistic traits. Therefore, our findings suggest that a greater number of autistic traits in neurotypical individuals is associated with a more detail-oriented visual information processing strategy and that this effect might generalize to a clinical ASD population.
- Published
- 2020
- Full Text
- View/download PDF
33. Alpha/beta power decreases track the fidelity of stimulus-specific information.
- Author
-
Griffiths BJ, Mayhew SD, Mullinger KJ, Jorge J, Charest I, Wimber M, and Hanslmayr S
- Subjects
- Brain Mapping, Cerebral Cortex, Electroencephalography methods, Humans, Magnetic Resonance Imaging methods, Neurons physiology, Visual Cortex physiology, Alpha Rhythm physiology, Auditory Perception physiology, Beta Rhythm physiology, Memory physiology, Visual Perception physiology
- Abstract
Massed synchronised neuronal firing is detrimental to information processing. When networks of task-irrelevant neurons fire in unison, they mask the signal generated by task-critical neurons. On a macroscopic level, such synchronisation can contribute to alpha/beta (8-30 Hz) oscillations. Reducing the amplitude of these oscillations, therefore, may enhance information processing. Here, we test this hypothesis. Twenty-one participants completed an associative memory task while undergoing simultaneous EEG-fMRI recordings. Using representational similarity analysis, we quantified the amount of stimulus-specific information represented within the BOLD signal on every trial. When correlating this metric with concurrently-recorded alpha/beta power, we found a significant negative correlation which indicated that as post-stimulus alpha/beta power decreased, stimulus-specific information increased. Critically, we found this effect in three unique tasks: visual perception, auditory perception, and visual memory retrieval, indicating that this phenomenon transcends both stimulus modality and cognitive task. These results indicate that alpha/beta power decreases parametrically track the fidelity of both externally-presented and internally-generated stimulus-specific information represented within the cortex., Competing Interests: BG, SM, KM, JJ, IC, MW, SH No competing interests declared, (© 2019, Griffiths et al.)
- Published
- 2019
- Full Text
- View/download PDF
34. Conscious perception of natural images is constrained by category-related visual features.
- Author
-
Lindh D, Sligte IG, Assecondi S, Shapiro KL, and Charest I
- Subjects
- Attention physiology, Blinking, Female, Humans, Male, Neural Networks, Computer, Young Adult, Consciousness physiology, Imaging, Three-Dimensional, Visual Cortex physiology, Visual Perception physiology
- Abstract
Conscious perception is crucial for adaptive behaviour yet access to consciousness varies for different types of objects. The visual system comprises regions with widely distributed category information and exemplar-level representations that cluster according to category. Does this categorical organisation in the brain provide insight into object-specific access to consciousness? We address this question using the Attentional Blink approach with visual objects as targets. We find large differences across categories in the attentional blink. We then employ activation patterns extracted from a deep convolutional neural network to reveal that these differences depend on mid- to high-level, rather than low-level, visual features. We further show that these visual features can be used to explain variance in performance across trials. Taken together, our results suggest that the specific organisation of the higher-tier visual system underlies important functions relevant for conscious perception of differing natural images.
- Published
- 2019
- Full Text
- View/download PDF
35. The spatiotemporal neural dynamics underlying perceived similarity for real-world objects.
- Author
-
Cichy RM, Kriegeskorte N, Jozwik KM, van den Bosch JJF, and Charest I
- Subjects
- Adult, Brain Mapping methods, Female, Humans, Male, Pattern Recognition, Visual physiology, Visual Cortex physiology
- Abstract
The degree to which we perceive real-world objects as similar or dissimilar structures our perception and guides categorization behavior. Here, we investigated the neural representations enabling perceived similarity using behavioral judgments, fMRI and MEG. As different object dimensions co-occur and partly correlate, to understand the relationship between perceived similarity and brain activity it is necessary to assess the unique role of multiple object dimensions. We thus behaviorally assessed perceived object similarity in relation to shape, function, color and background. We then used representational similarity analyses to relate these behavioral judgments to brain activity. We observed a link between each object dimension and representations in visual cortex. These representations emerged rapidly within 200 ms of stimulus onset. Assessing the unique role of each object dimension revealed partly overlapping and distributed representations: while color-related representations distinctly preceded shape-related representations both in the processing hierarchy of the ventral visual pathway and in time, several dimensions were linked to high-level ventral visual cortex. Further analysis singled out the shape dimension as neither fully accounted for by supra-category membership, nor a deep neural network trained on object categorization. Together our results comprehensively characterize the relationship between perceived similarity of key object dimensions and neural activity., (Copyright © 2019 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2019
- Full Text
- View/download PDF
36. GLMdenoise improves multivariate pattern analysis of fMRI data.
- Author
-
Charest I, Kriegeskorte N, and Kay KN
- Subjects
- Adult, Cerebral Cortex diagnostic imaging, Humans, Multivariate Analysis, Pattern Recognition, Automated, Psychomotor Performance physiology, Visual Perception physiology, Cerebral Cortex physiology, Functional Neuroimaging methods, Image Interpretation, Computer-Assisted methods, Image Processing, Computer-Assisted methods, Magnetic Resonance Imaging methods
- Abstract
GLMdenoise is a denoising technique for task-based fMRI. In GLMdenoise, estimates of spatially correlated noise (which may be physiological, instrumental, motion-related, or neural in origin) are derived from the data and incorporated as nuisance regressors in a general linear model (GLM) analysis. We previously showed that GLMdenoise outperforms a variety of other denoising techniques in terms of cross-validation accuracy of GLM estimates (Kay et al., 2013a). However, the practical impact of denoising for experimental studies remains unclear. Here we examine whether and to what extent GLMdenoise improves sensitivity in the context of multivariate pattern analysis of fMRI data. On a large number of participants (31 participants across 4 experiments; 3 T, gradient-echo, spatial resolution 2-3.75 mm, temporal resolution 1.3-2 s, number of conditions 32-75), we perform representational similarity analysis (Kriegeskorte et al., 2008a) as well as pattern classification (Haxby et al., 2001). We find that GLMdenoise substantially improves replicability of representational dissimilarity matrices (RDMs) across independent splits of each participant's dataset (average RDM replicability increases from r = 0.46 to r = 0.61). Additionally, we find that GLMdenoise substantially improves pairwise classification accuracy (average classification accuracy increases from 79% correct to 84% correct). We show that GLMdenoise often improves and never degrades performance for individual participants and that GLMdenoise also improves across-participant consistency. We conclude that GLMdenoise is a useful tool that can be routinely used to maximize the amount of information extracted from fMRI activity patterns., (Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
37. Author Correction: Retrieval induces adaptive forgetting of competing memories via cortical pattern suppression.
- Author
-
Wimber M, Alink A, Charest I, Kriegeskorte N, and Anderson MC
- Abstract
In the published version of this article, a detail is missing from the Methods section "Experimental procedure." The following sentence is to be inserted at the end of its fourth paragraph: "If participants failed to respond within 3.5 s, we assumed that they were unable to successfully recognize the item and coded the corresponding trial as an error." The critical behavioral forgetting effect is significant irrespective of whether these timeouts are coded as errors (t(23) = 4.91, P < 0.001) or as missing data (t(23) = 3.31, P < 0.01). The original article has not been corrected.
- Published
- 2018
- Full Text
- View/download PDF
38. The human voice areas: Spatial organization and inter-individual variability in temporal and extra-temporal cortices.
- Author
-
Pernet CR, McAleer P, Latinus M, Gorgolewski KJ, Charest I, Bestelmeyer PE, Watson RH, Fleming D, Crabbe F, Valdes-Sosa M, and Belin P
- Subjects
- Acoustic Stimulation, Adult, Brain Mapping, Dominance, Cerebral, Female, Humans, Magnetic Resonance Imaging, Male, Voice, Young Adult, Auditory Cortex physiology, Individuality, Speech Perception physiology
- Abstract
fMRI studies increasingly examine functions and properties of non-primary areas of human auditory cortex. However there is currently no standardized localization procedure to reliably identify specific areas across individuals such as the standard 'localizers' available in the visual domain. Here we present an fMRI 'voice localizer' scan allowing rapid and reliable localization of the voice-sensitive 'temporal voice areas' (TVA) of human auditory cortex. We describe results obtained using this standardized localizer scan in a large cohort of normal adult subjects. Most participants (94%) showed bilateral patches of significantly greater response to vocal than non-vocal sounds along the superior temporal sulcus/gyrus (STS/STG). Individual activation patterns, although reproducible, showed high inter-individual variability in precise anatomical location. Cluster analysis of individual peaks from the large cohort highlighted three bilateral clusters of voice-sensitivity, or "voice patches" along posterior (TVAp), mid (TVAm) and anterior (TVAa) STS/STG, respectively. A series of extra-temporal areas including bilateral inferior prefrontal cortex and amygdalae showed small, but reliable voice-sensitivity as part of a large-scale cerebral voice network. Stimuli for the voice localizer scan and probabilistic maps in MNI space are available for download., (Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
39. Retrieval induces adaptive forgetting of competing memories via cortical pattern suppression.
- Author
-
Wimber M, Alink A, Charest I, Kriegeskorte N, and Anderson MC
- Subjects
- Adult, Cues, Female, Hippocampus physiology, Humans, Magnetic Resonance Imaging, Male, Visual Cortex physiology, Young Adult, Adaptation, Psychological physiology, Brain Mapping methods, Inhibition, Psychological, Memory physiology, Mental Recall physiology, Prefrontal Cortex physiology
- Abstract
Remembering a past experience can, surprisingly, cause forgetting. Forgetting arises when other competing traces interfere with retrieval and inhibitory control mechanisms are engaged to suppress the distraction they cause. This form of forgetting is considered to be adaptive because it reduces future interference. The effect of this proposed inhibition process on competing memories has, however, never been observed, as behavioral methods are 'blind' to retrieval dynamics and neuroimaging methods have not isolated retrieval of individual memories. We developed a canonical template tracking method to quantify the activation state of individual target memories and competitors during retrieval. This method revealed that repeatedly retrieving target memories suppressed cortical patterns unique to competitors. Pattern suppression was related to engagement of prefrontal regions that have been implicated in resolving retrieval competition and, critically, predicted later forgetting. Thus, our findings demonstrate a cortical pattern suppression mechanism through which remembering adaptively shapes which aspects of our past remain accessible.
- Published
- 2015
- Full Text
- View/download PDF
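Entry 39 above introduces a canonical template tracking method for quantifying the activation state of individual target and competitor memories during retrieval. The sketch below illustrates the core idea, matching a trial's activity pattern against item-specific templates estimated from independent data; it is not the published method, and the voxel count, mixing weights, and use of Pearson correlation are assumptions.

```python
import numpy as np

def pattern_match(pattern, template):
    """Pearson correlation between a trial's activity pattern and a memory template."""
    a, b = pattern - pattern.mean(), template - template.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
n_voxels = 300
target_template = rng.normal(size=n_voxels)          # estimated from independent (perception) data
competitor_template = rng.normal(size=n_voxels)

# A simulated retrieval trial dominated by the target with a weaker competitor trace.
trial_pattern = 0.8 * target_template + 0.2 * competitor_template + rng.normal(size=n_voxels)

print(round(pattern_match(trial_pattern, target_template), 2),
      round(pattern_match(trial_pattern, competitor_template), 2))
```

Tracking the competitor match across repeated retrievals is what allows suppression of competing memories to be observed over time.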
40. Unique semantic space in the brain of each beholder predicts perceived similarity.
- Author
-
Charest I, Kievit RA, Schmitz TW, Deca D, and Kriegeskorte N
- Subjects
- Adolescent, Brain anatomy & histology, Brain Mapping methods, Female, Humans, Judgment, Magnetic Resonance Imaging methods, Male, Photic Stimulation, Psychomotor Performance physiology, Reproducibility of Results, Visual Pathways anatomy & histology, Young Adult, Brain physiology, Pattern Recognition, Visual physiology, Semantics, Visual Pathways physiology
- Abstract
The unique way in which each of us perceives the world must arise from our brain representations. If brain imaging could reveal an individual's unique mental representation, it could help us understand the biological substrate of our individual experiential worlds in mental health and disease. However, imaging studies of object vision have focused on commonalities between individuals rather than individual differences and on category averages rather than representations of particular objects. Here we investigate the individually unique component of brain representations of particular objects with functional MRI (fMRI). Subjects were presented with unfamiliar and personally meaningful object images while we measured their brain activity on two separate days. We characterized the representational geometry by the dissimilarity matrix of activity patterns elicited by particular object images. The representational geometry remained stable across scanning days and was unique in each individual in early visual cortex and human inferior temporal cortex (hIT). The hIT representation predicted perceived similarity as reflected in dissimilarity judgments. Importantly, hIT predicted the individually unique component of the judgments when the objects were personally meaningful. Our results suggest that hIT brain representational idiosyncrasies accessible to fMRI are expressed in an individual's perceptual judgments. The unique way each of us perceives the world thus might reflect the individually unique representation in high-level visual areas.
- Published
- 2014
- Full Text
- View/download PDF
41. Automatic domain-general processing of sound source identity in the left posterior middle frontal gyrus.
- Author
-
Giordano BL, Pernet C, Charest I, Belizaire G, Zatorre RJ, and Belin P
- Subjects
- Acoustic Stimulation, Adult, Brain Mapping methods, Female, Functional Neuroimaging, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Young Adult, Attention physiology, Auditory Cortex physiology, Auditory Perception physiology, Frontal Lobe physiology, Sound Localization physiology
- Abstract
Identifying sound sources is fundamental to developing a stable representation of the environment in the face of variable auditory information. The cortical processes underlying this ability have received little attention. In two fMRI experiments, we investigated passive adaptation to (Exp. 1) and explicit discrimination of (Exp. 2) source identities for different categories of auditory objects (voices, musical instruments, environmental sounds). All cortical effects of source identity were independent of high-level category information, and were accounted for by sound-to-sound differences in low-level structure (e.g., loudness). A conjunction analysis revealed that the left posterior middle frontal gyrus (pMFG) adapted to identity repetitions during both passive listening and active discrimination tasks. These results indicate that the comparison of sound source identities in a stream of auditory stimulation recruits the pMFG in a domain-general way, i.e., independent of the sound category, based on information contained in the low-level acoustical structure. pMFG recruitment during both passive listening and explicit identity comparison tasks also suggests its automatic engagement in sound source identity processing., (Copyright © 2014 Elsevier Ltd. All rights reserved.)
- Published
- 2014
- Full Text
- View/download PDF
42. People-selectivity, audiovisual integration and heteromodality in the superior temporal sulcus.
- Author
-
Watson R, Latinus M, Charest I, Crabbe F, and Belin P
- Subjects
- Adult, Brain Mapping, Face, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Photic Stimulation, Singing, Speech, Voice, Young Adult, Auditory Perception physiology, Recognition, Psychology physiology, Temporal Lobe physiology, Visual Perception physiology
- Abstract
The functional role of the superior temporal sulcus (STS) has been implicated in a number of studies, including those investigating face perception, voice perception, and face-voice integration. However, the nature of the STS preference for these 'social stimuli' remains unclear, as does the location within the STS for specific types of information processing. The aim of this study was to directly examine properties of the STS in terms of selective response to social stimuli. We used functional magnetic resonance imaging (fMRI) to scan participants whilst they were presented with auditory, visual, or audiovisual stimuli of people or objects, with the intention of localising areas preferring both faces and voices (i.e., 'people-selective' regions) and audiovisual regions designed to specifically integrate person-related information. Results highlighted a 'people-selective, heteromodal' region in the trunk of the right STS which was activated by both faces and voices, and a restricted portion of the right posterior STS (pSTS) with an integrative preference for information from people, as compared to objects. These results point towards the dedicated role of the STS as a 'social-information processing' centre., (Copyright © 2013 Elsevier Ltd. All rights reserved.)
- Published
- 2014
- Full Text
- View/download PDF
43. Binge drinking influences the cerebral processing of vocal affective bursts in young adults.
- Author
-
Maurage P, Bestelmeyer PE, Rouger J, Charest I, and Belin P
- Abstract
Binge drinking is now considered a central public health issue and is associated with emotional and interpersonal problems, but the neural implications of these deficits remain unexplored. The present study aimed at offering the first insights into the effects of binge drinking on the neural processing of vocal affect. On the basis of an alcohol-consumption screening phase (204 students), 24 young adults (12 binge drinkers and 12 matched controls, mean age: 23.8 years) were selected and performed an emotional categorisation task on morphed vocal stimuli (drawn from a morphed fear-anger continuum) during fMRI scanning. In comparison to controls, binge drinkers presented (1) worse behavioural performance in emotional affect categorisation; (2) reduced activation of bilateral superior temporal gyrus; and (3) increased activation of right middle frontal gyrus. These results constitute the first evidence of altered cerebral processing of emotional stimuli in binge drinking and confirm that binge drinking leads to marked cerebral changes, which has important implications for research and clinical practice.
- Published
- 2013
- Full Text
- View/download PDF
44. Cerebral processing of voice gender studied using a continuous carryover FMRI design.
- Author
-
Charest I, Pernet C, Latinus M, Crabbe F, and Belin P
- Subjects
- Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Image Processing, Computer-Assisted, Linear Models, Male, Oxygen blood, Psychometrics, Reaction Time physiology, Young Adult, Auditory Perception physiology, Cerebral Cortex blood supply, Cerebral Cortex physiology, Magnetic Resonance Imaging, Sex Characteristics, Voice
- Abstract
Normal listeners effortlessly determine a person's gender by voice, but the cerebral mechanisms underlying this ability remain unclear. Here, we demonstrate 2 stages of cerebral processing during voice gender categorization. Using voice morphing along with an adaptation-optimized functional magnetic resonance imaging design, we found that secondary auditory cortex including the anterior part of the temporal voice areas in the right hemisphere responded primarily to acoustical distance with the previously heard stimulus. In contrast, a network of bilateral regions involving inferior prefrontal and anterior and posterior cingulate cortex reflected perceived stimulus ambiguity. These findings suggest that voice gender recognition involves neuronal populations along the auditory ventral stream responsible for auditory feature extraction, functioning in pair with the prefrontal cortex in voice gender perception.
- Published
- 2013
- Full Text
- View/download PDF
45. Vocal attractiveness increases by averaging.
- Author
-
Bruckert L, Bestelmeyer P, Latinus M, Rouger J, Charest I, Rousselet GA, Kawahara H, and Belin P
- Subjects
- Female, Humans, Male, Sex Factors, Speech
- Abstract
Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception., (Copyright 2010 Elsevier Ltd. All rights reserved.)
- Published
- 2010
- Full Text
- View/download PDF
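Entry 45 above attributes part of the averaging effect to a reduced "distance to mean" in pitch and timbre. The sketch below illustrates how such a distance-to-mean predictor could be computed and related to attractiveness ratings on simulated features; it is not the authors' acoustic analysis, and the feature set, Euclidean distance, and simulated ratings are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_voices = 64
features = rng.normal(size=(n_voices, 6))                 # stand-in pitch/timbre measures per voice
distance_to_mean = np.linalg.norm(features - features.mean(axis=0), axis=1)
# Simulated ratings in which voices closer to the average are rated as more attractive.
attractiveness = -distance_to_mean + rng.normal(scale=0.5, size=n_voices)

rho, p = spearmanr(distance_to_mean, attractiveness)
print(round(float(rho), 2))   # negative: smaller distance to the mean voice, higher attractiveness
```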
46. Human cerebral response to animal affective vocalizations.
- Author
-
Belin P, Fecteau S, Charest I, Nicastro N, Hauser MD, and Armony JL
- Subjects
- Analysis of Variance, Animals, Cats, Humans, Macaca mulatta, Magnetic Resonance Imaging, Species Specificity, Auditory Perception physiology, Cerebrum physiology, Vocalization, Animal
- Abstract
It is presently unknown whether our response to affective vocalizations is specific to those generated by humans or more universal, triggered by emotionally matched vocalizations generated by other species. Here, we used functional magnetic resonance imaging in normal participants to measure cerebral activity during auditory stimulation with affectively valenced animal vocalizations, some familiar (cats) and others not (rhesus monkeys). Positively versus negatively valenced vocalizations from cats and monkeys elicited different cerebral responses despite the participants' inability to differentiate the valence of these animal vocalizations by overt behavioural responses. Moreover, the comparison with human non-speech affective vocalizations revealed a common response to the valence in orbitofrontal cortex, a key component on the limbic system. These findings suggest that the neural mechanisms involved in processing human affective vocalizations may be recruited by heterospecific affective vocalizations at an unconscious level, supporting claims of shared emotional systems across species.
- Published
- 2008
- Full Text
- View/download PDF