112 results for "Schyns PG"
Search Results
2. Channel surfing in the visual brain
- Author
-
Sowden, PT and Schyns, PG
- Abstract
Vision provides us with an ever-changing neural representation of the world from which we must extract stable object categorizations. We argue that visual analysis involves a fundamental interaction between the observer's top-down categorization goals and the incoming stimulation. Specifically, we discuss the information available for categorization from an analysis of different spatial scales by a bank of flexible, interacting spatial-frequency (SF) channels. We contend that the activity of these channels is not determined simply bottom-up by the stimulus. Instead, we argue that, following perceptual learning, a specification of the diagnostic, object-based, SF information dynamically influences the top-down processing of retina-based SF information by these channels. Our analysis of SF processing provides a case study that emphasizes the continuity between higher-level cognition and lower-level perception.
- Published
- 2006
3. Retinotopic sensitisation to spatial scale: Evidence for flexible spatial frequency processing in scene perception
- Author
-
Ozgen, E, Payne, HE, Sowden, PT, and Schyns, PG
- Published
- 2006
4. From Facial Gesture to Social Judgment: A Psychophysical Approach
- Author
-
Gill, D, Garrod, OGB, Jack, RE, and Schyns, PG
- Published
- 2012
- Full Text
- View/download PDF
5. Automatic (bottom-up) and Strategic (top-down) Extraction of Facial Features over the N170 Event Related Potential
- Author
-
Frei, LS, Smith, ML, and Schyns, PG
- Published
- 2009
- Full Text
- View/download PDF
6. Early Visual Sensitivity to Diagnostic Information During the Processing of Facial Expressions
- Author
-
Petro, LS, Smith, FW, Schyns, PG, and Muckli, L
- Published
- 2009
- Full Text
- View/download PDF
7. Visualizing Internal Representations from Behavioral and Brain Imaging Data
- Author
-
Smith, ML, Lestou, V, Gosselin, F, and Schyns, PG
- Published
- 2009
- Full Text
- View/download PDF
8. Pre-frontal cortex guides dimension-reducing transformations in the occipito-ventral pathway for categorization behaviors.
- Author
-
Duan Y, Zhan J, Gross J, Ince RAA, and Schyns PG
- Subjects
- Humans, Male, Female, Adult, Young Adult, Prefrontal Cortex physiology, Magnetoencephalography, Occipital Lobe physiology
- Abstract
To interpret our surroundings, the brain uses a visual categorization process. Current theories and models suggest that this process comprises a hierarchy of different computations that transforms complex, high-dimensional inputs into lower-dimensional representations (i.e., manifolds) in support of multiple categorization behaviors. Here, we tested this hypothesis by analyzing these transformations reflected in dynamic MEG source activity while individual participants actively categorized the same stimuli according to different tasks: face expression, face gender, pedestrian gender, and vehicle type. Results reveal three transformation stages guided by the pre-frontal cortex. At stage 1 (high-dimensional, 50-120 ms), occipital sources represent both task-relevant and task-irrelevant stimulus features; task-relevant features advance into higher ventral/dorsal regions, whereas task-irrelevant features halt at the occipital-temporal junction. At stage 2 (121-150 ms), stimulus feature representations reduce to lower-dimensional manifolds, which then transform into the task-relevant features underlying categorization behavior over stage 3 (161-350 ms). Our findings shed light on how the brain's network mechanisms transform high-dimensional inputs into specific feature manifolds that support multiple categorization behaviors.
- Published
- 2024
- Full Text
- View/download PDF
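A generic illustration of the staged dimensionality reduction described in the abstract above (not the authors' MEG analysis; the simulated data, component counts, and 95% variance criterion are hypothetical choices): PCA counts how many dimensions carry simulated "early" versus "late" activity.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
early = rng.normal(size=(500, 40))               # hypothetical high-dimensional stage-1 activity
late = early[:, :5] @ rng.normal(size=(5, 40))   # later activity confined to ~5 dimensions

def ndim(x, var=0.95):
    """Number of principal components explaining `var` of the variance."""
    ratios = PCA().fit(x).explained_variance_ratio_
    return int(np.searchsorted(np.cumsum(ratios), var) + 1)

print("early:", ndim(early), "dims; late:", ndim(late), "dims")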
9. Social class perception is driven by stereotype-related facial features.
- Author
-
Bjornsdottir RT, Hensel LB, Zhan J, Garrod OGB, Schyns PG, and Jack RE
- Subjects
- Humans, Social Perception, Attitude, Judgment, Social Class, Facial Expression, Trust, Stereotyping, Facial Recognition
- Abstract
Social class is a powerful hierarchy that determines many privileges and disadvantages. People form impressions of others' social class (like other important social attributes) from facial appearance, and these impressions correlate with stereotype judgments. However, what drives these related subjective judgments remains unknown. That is, what makes someone look like they are of higher or lower social class standing (e.g., rich or poor), and how does this relate to harmful or advantageous stereotypes? We addressed these questions using a perception-based data-driven method to model the specific three-dimensional facial features that drive social class judgments and compared them to those of stereotype-related judgments (competence, warmth, dominance, and trustworthiness), based on White Western culture participants and face stimuli. Using a complementary data-reduction analysis and machine learning approach, we show that social class judgments are driven by a unique constellation of facial features that reflect multiple embedded stereotypes: poor-looking (vs. rich-looking) faces are wider, shorter, and flatter with downturned mouths and darker, cooler complexions, mirroring features of incompetent, cold, and untrustworthy-looking (vs. competent, warm, and trustworthy-looking) faces. Our results reveal the specific facial features that underlie the connection between impressions of social class and stereotype-related social traits, with implications for central social perception theories, including understanding the causal links between stereotype knowledge and social class judgments. We anticipate that our results will inform future interventions designed to interrupt biased perception and social inequalities.
- Published
- 2024
- Full Text
- View/download PDF
10. Cultural facial expressions dynamically convey emotion category and intensity information.
- Author
-
Chen C, Messinger DS, Chen C, Yan H, Duan Y, Ince RAA, Garrod OGB, Schyns PG, and Jack RE
- Subjects
- Humans, Anger, Fear, Happiness, Facial Expression, Emotions
- Abstract
Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior [1-3]. For example, attack often follows signals of intense aggression if receivers fail to retreat [4,5]. Humans regularly use facial expressions to communicate such information [6-11]. Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions-"happy," "surprise," "fear," "disgust," "anger," and "sad"-and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers are also more similar across emotions than classifiers are, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions can achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.
- Published
- 2024
- Full Text
- View/download PDF
11. Strength of predicted information content in the brain biases decision behavior.
- Author
-
Yan Y, Zhan J, Garrod O, Cui X, Ince RAA, and Schyns PG
- Subjects
- Humans, Brain Mapping, Photic Stimulation, Brain physiology, Cues
- Abstract
Prediction-for-perception theories suggest that the brain predicts incoming stimuli to facilitate their categorization [1-17]. However, it remains unknown what the information contents of these predictions are, which hinders mechanistic explanations. This is because typical approaches cast predictions as an underconstrained contrast between two categories [18-24]-e.g., faces versus cars, which could lead to predictions of features specific to faces or cars, or features from both categories. Here, to pinpoint the information contents of predictions and thus their mechanistic processing in the brain, we identified the features that enable two different categorical perceptions of the same stimuli. We then trained multivariate classifiers to discern, from dynamic MEG brain responses, the features tied to each perception. With an auditory cueing design, we reveal where, when, and how the brain reactivates visual category features (versus the typical category contrast) before the stimulus is shown. We demonstrate that the predictions of category features have a more direct influence (bias) on subsequent decision behavior in participants than the typical category contrast. Specifically, these predictions are more precisely localized in the brain (lateralized), are more specifically driven by the auditory cues, and their reactivation strength before a stimulus presentation exerts a greater bias on how the individual participant later categorizes this stimulus. By characterizing the specific information contents that the brain predicts and then processes, our findings provide new insights into the brain's mechanisms of prediction for perception.
- Published
- 2023
- Full Text
- View/download PDF
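The multivariate classification step described above can be sketched generically. The code below is a minimal, hypothetical illustration of time-resolved decoding from sensor data, not the authors' pipeline; the array sizes, the injected effect, and the classifier choice are all assumptions made here.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n_trials, n_sensors, n_times = 200, 64, 60
meg = rng.normal(size=(n_trials, n_sensors, n_times))  # hypothetical sensor epochs
labels = rng.integers(0, 2, n_trials)                  # feature A vs. feature B trials
meg[labels == 1, :8, 30:] += 0.4                       # inject a late effect on 8 sensors

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = [cross_val_score(clf, meg[:, :, t], labels, cv=5).mean() for t in range(n_times)]
print("peak decoding accuracy:", round(max(acc), 2))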
12. Network Communications Flexibly Predict Visual Contents That Enhance Representations for Faster Visual Categorization.
- Author
-
Yan Y, Zhan J, Ince RAA, and Schyns PG
- Subjects
- Male, Female, Humans, Occipital Lobe, Brain, Cognition, Photic Stimulation, Visual Perception, Brain Mapping, Magnetic Resonance Imaging
- Abstract
Models of visual cognition generally assume that brain networks predict the contents of a stimulus to facilitate its subsequent categorization. However, understanding prediction and categorization at a network level has remained challenging, partly because we need to reverse engineer their information processing mechanisms from the dynamic neural signals. Here, we used connectivity measures that can isolate the communications of a specific content to reconstruct these network mechanisms in each individual participant (N = 11, both sexes). Each was cued to the spatial location (left vs right) and contents [low spatial frequency (LSF) vs high spatial frequency (HSF)] of a predicted Gabor stimulus that they then categorized. Using each participant's concurrently measured MEG, we reconstructed networks that predict and categorize LSF versus HSF contents for behavior. We found that predicted contents flexibly propagate top down from temporal to lateralized occipital cortex, depending on task demands, under supervisory control of prefrontal cortex. When they reach lateralized occipital cortex, predictions enhance the bottom-up LSF versus HSF representations of the stimulus, all the way from occipital-ventral-parietal to premotor cortex, in turn producing faster categorization behavior. Importantly, content communications are subsets (i.e., 55-75%) of the signal-to-signal communications typically measured between brain regions. Hence, our study isolates functional networks that process the information of cognitive functions. SIGNIFICANCE STATEMENT An enduring cognitive hypothesis states that our perception is partly influenced by the bottom-up sensory input but also by top-down expectations. However, cognitive explanations of the dynamic brain network mechanisms that flexibly predict and categorize the visual input according to task-demands remain elusive. We addressed them in a predictive experimental design by isolating the network communications of cognitive contents from all other communications. Our methods revealed a Prediction Network that flexibly communicates contents from temporal to lateralized occipital cortex, with explicit frontal control, and an occipital-ventral-parietal-frontal Categorization Network that represents more sharply the predicted contents from the shown stimulus, leading to faster behavior. Our framework and results therefore shed a new light of cognitive information processing on dynamic brain activity.
- Published
- 2023
- Full Text
- View/download PDF
13. Stimulus models test hypotheses in brains and DNNs.
- Author
-
Schyns PG, Snoek L, and Daube C
- Subjects
- Humans, Brain, Neural Networks, Computer
- Published
- 2023
- Full Text
- View/download PDF
14. Testing, explaining, and exploring models of facial expressions of emotions.
- Author
-
Snoek L, Jack RE, Schyns PG, Garrod OGB, Mittenbühler M, Chen C, Oosterwijk S, and Scholte HS
- Subjects
- Humans, Face, Movement, Facial Expression, Emotions
- Abstract
Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached in a pervasive question-what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements [action units (AUs)] as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias. Our predictive models also provide a noise ceiling to inform the explanatory power and limitations of different factors (e.g., AUs and individual differences). Thus, our framework provides a new approach to test models of social signals, explain their predictive power, and explore their optimization, with direct implications for theory development.
- Published
- 2023
- Full Text
- View/download PDF
15. Degrees of algorithmic equivalence between the brain and its DNN models.
- Author
-
Schyns PG, Snoek L, and Daube C
- Subjects
- Humans, Algorithms, Cognition, Brain physiology, Neural Networks, Computer
- Abstract
Deep neural networks (DNNs) have become powerful and increasingly ubiquitous tools to model human cognition, and often produce similar behaviors. For example, with their hierarchical, brain-inspired organization of computations, DNNs apparently categorize real-world images in the same way as humans do. Does this imply that their categorization algorithms are also similar? We have framed the question with three embedded degrees that progressively constrain algorithmic similarity evaluations: equivalence of (i) behavioral/brain responses, which is current practice, (ii) the stimulus features that are processed to produce these outcomes, which is more constraining, and (iii) the algorithms that process these shared features, the ultimate goal. To improve DNNs as models of cognition, we develop for each degree an increasingly constrained benchmark that specifies the epistemological conditions for the considered equivalence.
- Published
- 2022
- Full Text
- View/download PDF
16. A Common Neural Account for Social and Nonsocial Decisions.
- Author
-
Arabadzhiyska DH, Garrod OGB, Fouragnan E, De Luca E, Schyns PG, and Philiastides MG
- Subjects
- Young Adult, Male, Female, Humans, Frontal Lobe, Electroencephalography, Decision Making, Magnetic Resonance Imaging
- Abstract
To date, social and nonsocial decisions have been studied largely in isolation. Consequently, the extent to which social and nonsocial forms of decision uncertainty are integrated using shared neurocomputational resources remains elusive. Here, we address this question using simultaneous electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) in healthy human participants (young adults of both sexes) and a task in which decision evidence in social and nonsocial contexts varies along comparable scales. First, we identify time-resolved build-up of activity in the EEG, akin to a process of evidence accumulation (EA), across both contexts. We then use the endogenous trial-by-trial variability in the slopes of these accumulating signals to construct parametric fMRI predictors. We show that a region of the posterior-medial frontal cortex (pMFC) uniquely explains trial-wise variability in the process of evidence accumulation in both social and nonsocial contexts. We further demonstrate a task-dependent coupling between the pMFC and regions of the human valuation system in dorso-medial and ventro-medial prefrontal cortex across both contexts. Finally, we report domain-specific representations in regions known to encode the early decision evidence for each context. These results are suggestive of a domain-general decision-making architecture, whereupon domain-specific information is likely converted into a "common currency" in medial prefrontal cortex and accumulated for the decision in the pMFC. SIGNIFICANCE STATEMENT Little work has directly compared social-versus-nonsocial decisions to investigate whether they share common neurocomputational origins. Here, using combined electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) and computational modeling, we offer a detailed spatiotemporal account of the neural underpinnings of social and nonsocial decisions. Specifically, we identify a comparable mechanism of temporal evidence integration driving both decisions and localize this integration process in posterior-medial frontal cortex (pMFC). We further demonstrate task-dependent coupling between the pMFC and regions of the human valuation system across both contexts. Finally, we report domain-specific representations in regions encoding the early, domain-specific, decision evidence. These results suggest a domain-general decision-making architecture, whereupon domain-specific information is converted into a common representation in the valuation system and integrated for the decision in the pMFC.
- Published
- 2022
- Full Text
- View/download PDF
17. Within-participant statistics for cognitive science.
- Author
-
Ince RAA, Kay JW, and Schyns PG
- Subjects
- Humans, Probability, Cognitive Science
- Abstract
Experimental studies in cognitive science typically focus on the population average effect. An alternative is to test each individual participant and then quantify the proportion of the population that would show the effect: the prevalence, or participant replication probability. We argue that this approach has conceptual and practical advantages.
- Published
- 2022
- Full Text
- View/download PDF
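A minimal sketch of the within-participant approach this abstract argues for, under assumptions made here (simulated data, a one-sided t-test per participant, alpha = 0.05): test each participant, then summarize the proportion showing the effect.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_part, n_trials, alpha = 20, 100, 0.05
data = rng.normal(loc=0.2, scale=1.0, size=(n_part, n_trials))  # per-trial effect scores

# Within-participant NHST: one-sided t-test against zero, per participant.
pvals = np.array([stats.ttest_1samp(d, 0.0, alternative="greater").pvalue for d in data])
k = int(np.sum(pvals < alpha))  # number of participants showing the effect

# Proportion showing the effect, with a 95% Clopper-Pearson interval.
prev = k / n_part
lo = stats.beta.ppf(0.025, k, n_part - k + 1) if k > 0 else 0.0
hi = stats.beta.ppf(0.975, k + 1, n_part - k) if k < n_part else 1.0
print(f"{k}/{n_part} significant; prevalence {prev:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")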
18. Different computations over the same inputs produce selective behavior in algorithmic brain networks.
- Author
-
Jaworska K, Yan Y, van Rijsbergen NJ, Ince RAA, and Schyns PG
- Subjects
- Female, Humans, Male, Photic Stimulation, Temporal Lobe physiology, Brain physiology, Brain Mapping methods, Magnetoencephalography methods, Nerve Net physiology, Neuroimaging methods, Visual Perception physiology
- Abstract
A key challenge in neuroimaging remains to understand where, when, and, now particularly, how human brain networks compute over sensory inputs to achieve behavior. To study such dynamic algorithms from mass neural signals, we recorded the magnetoencephalographic (MEG) activity of participants who resolved the classic XOR, OR, and AND functions as overt behavioral tasks (N = 10 participants/task, N-of-1 replications). Each function requires a different computation over the same inputs to produce the task-specific behavioral outputs. In each task, we found that source-localized MEG activity progresses through four computational stages identified within individual participants: (1) initial contralateral representation of each visual input in occipital cortex, (2) a joint linearly combined representation of both inputs in midline occipital cortex and right fusiform gyrus, followed by (3) nonlinear task-dependent input integration in temporal-parietal cortex, and finally (4) behavioral response representation in postcentral gyrus. We demonstrate the specific dynamics of each computation at the level of individual sources. The spatiotemporal patterns of the first two computations are similar across the three tasks; the last two computations are task specific. Our results therefore reveal where, when, and how dynamic network algorithms perform different computations over the same inputs to produce different behaviors.
- Published
- 2022
- Full Text
- View/download PDF
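The three task functions named in the title and abstract above differ only in the computation applied to the same two inputs, which a truth table makes concrete:

# Same two inputs, three different computations, three different outputs.
for a in (0, 1):
    for b in (0, 1):
        print(f"inputs ({a}, {b}) -> XOR: {a ^ b}  OR: {a | b}  AND: {a & b}")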
19. Facial expressions elicit multiplexed perceptions of emotion categories and dimensions.
- Author
-
Liu M, Duan Y, Ince RAA, Chen C, Garrod OGB, Schyns PG, and Jack RE
- Subjects
- Anger, Arousal, Face, Humans, Emotions, Facial Expression
- Abstract
Human facial expressions are complex, multi-component signals that can communicate rich information about emotions [1-5], including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal" [6-8]. An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information-i.e., specific categories and broader dimensions-via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication [9]. We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them [10-12]. First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions [13] plus 19 complex emotions [3]) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent-i.e., multiplex-categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results-based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms-show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.
- Published
- 2022
- Full Text
- View/download PDF
20. Bayesian inference of population prevalence.
- Author
-
Ince RA, Paton AT, Kay JW, and Schyns PG
- Subjects
- Bayes Theorem, Data Interpretation, Statistical, Humans, Research Design, Biostatistics, Neurosciences statistics & numerical data, Psychology statistics & numerical data
- Abstract
Within neuroscience, psychology, and neuroimaging, the most frequently used statistical approach is null hypothesis significance testing (NHST) of the population mean. An alternative approach is to perform NHST within individual participants and then infer, from the proportion of participants showing an effect, the prevalence of that effect in the population. We propose a novel Bayesian method to estimate such population prevalence that offers several advantages over population mean NHST. This method provides a population-level inference that is currently missing from study designs with small participant numbers, such as in traditional psychophysics and in precision imaging. Bayesian prevalence delivers a quantitative population estimate with associated uncertainty instead of reducing an experiment to a binary inference. Bayesian prevalence is widely applicable to a broad range of studies in neuroscience, psychology, and neuroimaging. Its emphasis on detecting effects within individual participants can also help address replicability issues in these fields.
- Published
- 2021
- Full Text
- View/download PDF
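A grid-approximation sketch of the idea (not the authors' exact estimator; the counts, the alpha = 0.05 false-positive rate, the assumed perfect test sensitivity, and the uniform prior are all assumptions made here for illustration):

import numpy as np
from scipy import stats

k, n = 14, 20    # hypothetical: 14 of 20 participants significant
alpha = 0.05     # false-positive rate of the within-participant test
sens = 1.0       # test sensitivity, assumed perfect here

gamma = np.linspace(0, 1, 1001)             # candidate prevalence values
theta = gamma * sens + (1 - gamma) * alpha  # P(significant result | prevalence)
post = stats.binom.pmf(k, n, theta)         # binomial likelihood x uniform prior
post /= np.trapz(post, gamma)               # normalize to a posterior density

cdf = np.cumsum(post) / np.sum(post)
lo, hi = gamma[np.searchsorted(cdf, 0.025)], gamma[np.searchsorted(cdf, 0.975)]
print(f"MAP prevalence {gamma[np.argmax(post)]:.2f}, 95% credible interval [{lo:.2f}, {hi:.2f}]")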
21. Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of face identity.
- Author
-
Daube C, Xu T, Zhan J, Webb A, Ince RAA, Garrod OGB, and Schyns PG
- Abstract
Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.
- Published
- 2021
- Full Text
- View/download PDF
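The prediction step, schematically: regress human ratings on DNN layer activations with cross-validation. Everything below (data shapes, the RidgeCV model, the synthetic link between activations and ratings) is a hypothetical stand-in for the study's actual stimuli and networks.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_faces, n_units = 200, 512
activations = rng.normal(size=(n_faces, n_units))  # hypothetical DNN layer activations
ratings = activations[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_faces)

model = RidgeCV(alphas=np.logspace(-3, 3, 13))
r2 = cross_val_score(model, activations, ratings, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f}")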
22. Modeling individual preferences reveals that face beauty is not universally perceived across cultures.
- Author
-
Zhan J, Liu M, Garrod OGB, Daube C, Ince RAA, Jack RE, and Schyns PG
- Subjects
- Adult, Asian People, Female, Humans, Male, Sex Characteristics, White People, Beauty, Culture, Face
- Abstract
Facial attractiveness confers considerable advantages in social interactions [1,2], with preferences likely reflecting psychobiological mechanisms shaped by natural selection. Theories of universal beauty propose that attractive faces comprise features that are closer to the population average [3] while optimizing sexual dimorphism [4]. However, emerging evidence questions this model as an accurate representation of facial attractiveness [5-7], including representing the diversity of beauty preferences within and across cultures [8-12]. Here, we demonstrate that Western Europeans (WEs) and East Asians (EAs) evaluate facial beauty using culture-specific features, contradicting theories of universality. With a data-driven method, we modeled, at both the individual and group levels, the attractive face features of young females (25 years old) in two matched groups each of 40 young male WE and EA participants. Specifically, we generated a broad range of same- and other-ethnicity female faces with naturally varying shapes and complexions. Participants rated each on attractiveness. We then reverse correlated the face features that drive perception of attractiveness in each participant. From these individual face models, we reconstructed a facial attractiveness representation space that explains preference variations. We show that facial attractiveness is distinct both from averageness and from sexual dimorphism in both cultures. Finally, we disentangled attractive face features into those shared across cultures, culture specific, and specific to individual participants, thereby revealing their diversity. Our results have direct theoretical and methodological impact for representing diversity in social perception and for the design of culturally and ethnically sensitive socially interactive digital agents.
- Published
- 2021
- Full Text
- View/download PDF
23. Vision: Face-Centered Representations in the Brain.
- Author
-
Schyns PG
- Subjects
- Brain, Corpus Callosum, Humans, Orientation, Spatial, Facial Recognition, Perceptual Distortion
- Abstract
A longstanding debate in the face recognition field concerns the format of face representations in the brain. New face research clarifies some of this mystery by revealing a face-centered format in a patient with a left splenium lesion of the corpus callosum who perceives the right side of faces as 'melted'.
- Published
- 2020
- Full Text
- View/download PDF
24. Revealing the information contents of memory within the stimulus information representation framework.
- Author
-
Schyns PG, Zhan J, Jack RE, and Ince RAA
- Subjects
- Humans, Brain physiology, Cognition physiology, Memory Consolidation physiology
- Abstract
The information contents of memory are the cornerstone of the most influential models in cognition. To illustrate, consider that in predictive coding, a prediction implies that specific information is propagated down from memory through the visual hierarchy. Likewise, recognizing the input implies that sequentially accrued sensory evidence is successfully matched with memorized information (categorical knowledge). Although the existing models of prediction, memory, sensory representation and categorical decision are all implicitly cast within an information processing framework, it remains a challenge to precisely specify what this information is, and therefore where, when and how the architecture of the brain dynamically processes it to produce behaviour. Here, we review a framework that addresses these challenges for the studies of perception and categorization-stimulus information representation (SIR). We illustrate how SIR can reverse engineer the information contents of memory from behavioural and brain measures in the context of specific cognitive tasks that involve memory. We discuss two specific lessons from this approach that generally apply to memory studies: the importance of task, to constrain what the brain does, and of stimulus variations, to identify the specific information contents that are memorized, predicted, recalled and replayed. This article is part of the Theo Murphy meeting issue 'Memory reactivation: replaying events past, present and future'.
- Published
- 2020
- Full Text
- View/download PDF
25. Healthy aging delays the neural processing of face features relevant for behavior by 40 ms.
- Author
-
Jaworska K, Yi F, Ince RAA, van Rijsbergen NJ, Schyns PG, and Rousselet GA
- Subjects
- Adult, Aged, Aged, 80 and over, Aging psychology, Brain Mapping, Electroencephalography, Female, Humans, Magnetic Resonance Imaging, Male, Mental Processes, Middle Aged, Neural Pathways diagnostic imaging, Neural Pathways physiology, Occipital Lobe diagnostic imaging, Occipital Lobe physiology, Social Interaction, Temporal Lobe diagnostic imaging, Temporal Lobe physiology, Young Adult, Face, Healthy Aging, Reaction Time physiology, Visual Perception physiology
- Abstract
Fast and accurate face processing is critical for everyday social interactions, but it declines and becomes delayed with age, as measured by both neural and behavioral responses. Here, we addressed the critical challenge of understanding how aging changes neural information processing mechanisms to delay behavior. Young (20-36 years) and older (60-86 years) adults performed the basic social interaction task of detecting a face versus noise while we recorded their electroencephalogram (EEG). In each participant, using a new information theoretic framework we reconstructed the features supporting face detection behavior, and also where, when and how EEG activity represents them. We found that occipital-temporal pathway activity dynamically represents the eyes of the face images for behavior ~170 ms poststimulus, with a 40 ms delay in older adults that underlies their 200 ms behavioral deficit of slower reaction times. Our results therefore demonstrate how aging can change neural information processing mechanisms that underlie behavioral slow down.
- Published
- 2020
- Full Text
- View/download PDF
26. Modelling face memory reveals task-generalizable representations.
- Author
-
Zhan J, Garrod OGB, van Rijsbergen N, and Schyns PG
- Subjects
- Adult, Female, Form Perception, Generalization, Psychological, Humans, Male, Models, Psychological, Young Adult, Face, Facial Recognition, Memory
- Abstract
Current cognitive theories are cast in terms of information-processing mechanisms that use mental representations [1-4]. For example, people use their mental representations to identify familiar faces under various conditions of pose, illumination and ageing, or to draw resemblance between family members. Yet, the actual information contents of these representations are rarely characterized, which hinders knowledge of the mechanisms that use them. Here, we modelled the three-dimensional representational contents of 4 faces that were familiar to 14 participants as work colleagues. The representational contents were created by reverse-correlating identity information generated on each trial with judgements of the face's similarity to the individual participant's memory of this face. In a second study, testing new participants, we demonstrated the validity of the modelled contents using everyday face tasks that generalize identity judgements to new viewpoints, age and sex. Our work highlights that such models of mental representations are critical to understanding generalization behaviour and its underlying information-processing mechanisms.
- Published
- 2019
- Full Text
- View/download PDF
27. Dynamic Construction of Reduced Representations in the Brain for Perceptual Decision Behavior.
- Author
-
Zhan J, Ince RAA, van Rijsbergen N, and Schyns PG
- Subjects
- Humans, Magnetoencephalography, Photic Stimulation, Occipital Lobe physiology, Pattern Recognition, Visual physiology, Temporal Lobe physiology, Visual Perception physiology
- Abstract
Over the past decade, extensive studies of the brain regions that support face, object, and scene recognition suggest that these regions have a hierarchically organized architecture that spans the occipital and temporal lobes [1-14], where visual categorizations unfold over the first 250 ms of processing [15-19]. This same architecture is flexibly involved in multiple tasks that require task-specific representations-e.g. categorizing the same object as "a car" or "a Porsche." While we partly understand where and when these categorizations happen in the occipito-ventral pathway, the next challenge is to unravel how these categorizations happen. That is, how does high-dimensional input collapse in the occipito-ventral pathway to become low dimensional representations that guide behavior? To address this, we investigated what information the brain processes in a visual perception task and visualized the dynamic representation of this information in brain activity. To do so, we developed stimulus information representation (SIR), an information theoretic framework, to tease apart stimulus information that supports behavior from that which does not. We then tracked the dynamic representations of both in magneto-encephalographic (MEG) activity. Using SIR, we demonstrate that a rapid (∼170 ms) reduction of behaviorally irrelevant information occurs in the occipital cortex and that representations of the information that supports distinct behaviors are constructed in the right fusiform gyrus (rFG). Our results thus highlight how SIR can be used to investigate the component processes of the brain by considering interactions between three variables (stimulus information, brain activity, behavior), rather than just two, as is the current norm.
- Published
- 2019
- Full Text
- View/download PDF
28. Distinct facial expressions represent pain and pleasure across cultures.
- Author
-
Chen C, Crivelli C, Garrod OGB, Schyns PG, Fernández-Dols JM, and Jack RE
- Subjects
- Adult, Cross-Cultural Comparison, Culture, Facial Expression, Female, Humans, Interpersonal Relations, Male, Recognition, Psychology physiology, Young Adult, Emotions physiology, Face physiology, Pain physiopathology, Pain psychology, Pleasure physiology
- Abstract
Real-world studies show that the facial expressions produced during pain and orgasm-two different and intense affective experiences-are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which questions their nondiagnosticity and instead suggests they could be used for communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.
- Published
- 2018
- Full Text
- View/download PDF
29. Representational interactions during audiovisual speech entrainment: Redundancy in left posterior superior temporal gyrus and synergy in left motor cortex.
- Author
-
Park H, Ince RAA, Schyns PG, Thut G, and Gross J
- Subjects
- Acoustic Stimulation, Adolescent, Adult, Auditory Perception, Brain physiology, Brain Mapping methods, Comprehension physiology, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Photic Stimulation, Speech, Visual Perception, Motor Cortex physiology, Speech Perception physiology, Temporal Lobe physiology
- Abstract
Integration of multimodal sensory information is fundamental to many aspects of human behavior, but the neural mechanisms underlying these processes remain mysterious. For example, during face-to-face communication, we know that the brain integrates dynamic auditory and visual inputs, but we do not yet understand where and how such integration mechanisms support speech comprehension. Here, we quantify representational interactions between dynamic audio and visual speech signals and show that different brain regions exhibit different types of representational interaction. With a novel information theoretic measure, we found that theta (3-7 Hz) oscillations in the posterior superior temporal gyrus/sulcus (pSTG/S) represent auditory and visual inputs redundantly (i.e., represent common features of the two), whereas the same oscillations in left motor and inferior temporal cortex represent the inputs synergistically (i.e., the instantaneous relationship between audio and visual inputs is also represented). Importantly, redundant coding in the left pSTG/S and synergistic coding in the left motor cortex predict behavior-i.e., speech comprehension performance. Our findings therefore demonstrate that processes classically described as integration can have different statistical properties and may reflect distinct mechanisms that occur in different brain regions to support audiovisual speech comprehension.
- Published
- 2018
- Full Text
- View/download PDF
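Redundancy versus synergy can be illustrated with interaction information on discrete toy variables (the study uses a more refined information-theoretic measure on continuous speech signals; this sketch only conveys the sign convention: positive means redundant coding, negative means synergistic coding):

import numpy as np

def mi_bits(x, y):
    """Mutual information (bits) between two discrete sequences."""
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px * py)[nz])).sum())

def coinfo(s, a, b):
    """I(S;A) + I(S;B) - I(S;A,B): positive = redundancy, negative = synergy."""
    return mi_bits(s, a) + mi_bits(s, b) - mi_bits(s, 2 * a + b)

rng = np.random.default_rng(2)
s = rng.integers(0, 2, 10000)      # a binary stimulus feature
b_syn = rng.integers(0, 2, 10000)  # an independent second signal
print("redundant:", round(coinfo(s, s, s.copy()), 2))        # two copies of s -> +1
print("synergistic:", round(coinfo(s ^ b_syn, s, b_syn), 2)) # XOR target -> -1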
30. Object Recognition: Complexity of Recognition Strategies.
- Author
-
Schyns PG
- Subjects
- Animals, Brain, Rats, Neural Networks, Computer, Visual Perception
- Abstract
Primate brains and state-of-the-art convolutional neural networks can recognize many faces, objects and scenes, though how they do so is often mysterious. New research unveils some of the mystery, revealing unexpected complexity in the recognition strategies of rodents.
- Published
- 2018
- Full Text
- View/download PDF
31. Functional Smiles: Tools for Love, Sympathy, and War.
- Author
-
Rychlowska M, Jack RE, Garrod OGB, Schyns PG, Martin JD, and Niedenthal PM
- Subjects
- Adolescent, Adult, Female, Humans, Male, Young Adult, Interpersonal Relations, Object Attachment, Reward, Smiling psychology, Social Dominance, Social Perception
- Abstract
A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.
- Published
- 2017
- Full Text
- View/download PDF
32. Contributions of local speech encoding and functional connectivity to audio-visual speech perception.
- Author
-
Giordano BL, Ince RAA, Gross J, Schyns PG, Panzeri S, and Kayser C
- Subjects
- Adolescent, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult, Auditory Perception, Frontal Lobe physiology, Speech Perception, Temporal Lobe physiology, Visual Perception
- Abstract
Seeing a speaker's face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker's face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments.
- Published
- 2017
- Full Text
- View/download PDF
33. Personal familiarity enhances sensitivity to horizontal structure during processing of face identity.
- Author
-
Pachai MV, Sekuler AB, Bennett PJ, Schyns PG, and Ramon M
- Subjects
- Adult, Face physiology, Female, Humans, Male, Middle Aged, Facial Recognition physiology, Pattern Recognition, Visual physiology, Recognition, Psychology physiology
- Abstract
What makes identification of familiar faces seemingly effortless? Recent studies using unfamiliar face stimuli suggest that selective processing of information conveyed by horizontally oriented spatial frequency components supports accurate performance in a variety of tasks involving matching of facial identity. Here, we studied upright and inverted face discrimination using stimuli with which observers were either unfamiliar or personally familiar (i.e., friends and colleagues). Our results reveal increased sensitivity to horizontal spatial frequency structure in personally familiar faces, further implicating the selective processing of this information in the face processing expertise exhibited by human observers throughout their daily lives.
- Published
- 2017
- Full Text
- View/download PDF
34. A statistical framework for neuroimaging data analysis based on mutual information estimated via a Gaussian copula.
- Author
-
Ince RA, Giordano BL, Kayser C, Rousselet GA, Gross J, and Schyns PG
- Subjects
- Computer Simulation, Electroencephalography, Entropy, Humans, Sensitivity and Specificity, Brain diagnostic imaging, Brain physiology, Brain Mapping, Information Theory, Neuroimaging methods, Normal Distribution
- Abstract
We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article.
- Published
- 2017
- Full Text
- View/download PDF
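The core estimator, in miniature: transform each variable through its empirical copula (rank, then Gaussian quantile) and apply the closed-form Gaussian mutual information. This univariate sketch omits the paper's multivariate treatment and bias correction:

import numpy as np
from scipy import stats

def copnorm(x):
    """Map samples to standard-normal quantiles of their ranks (empirical copula)."""
    return stats.norm.ppf(stats.rankdata(x) / (len(x) + 1))

def gcmi_1d(x, y):
    """Gaussian-copula mutual information estimate (bits) for two 1-D variables."""
    r = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
    return -0.5 * np.log2(1 - r**2)  # closed-form Gaussian MI from correlation

rng = np.random.default_rng(3)
x = rng.normal(size=5000)
y = x**3 + rng.normal(scale=0.5, size=5000)  # nonlinear but monotonic relationship
print(f"GCMI(x, y) = {gcmi_1d(x, y):.2f} bits")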
35. Toward a Social Psychophysics of Face Communication.
- Author
-
Jack RE and Schyns PG
- Subjects
- Humans, Psychophysics, Facial Expression, Nonverbal Communication psychology, Social Perception
- Abstract
As a highly social species, humans are equipped with a powerful tool for social communication-the face. Although seemingly simple, the human face can elicit multiple social perceptions due to the rich variations of its movements, morphology, and complexion. Consequently, identifying precisely what face information elicits different social perceptions is a complex empirical challenge that has largely remained beyond the reach of traditional methods. In the past decade, the emerging field of social psychophysics has developed new methods to address this challenge, with the potential to transfer psychophysical laws of social perception to the digital economy via avatars and social robots. At this exciting juncture, it is timely to review these new methodological developments. In this article, we introduce and review the foundational methodological developments of social psychophysics, present work done in the past decade that has advanced understanding of the face as a tool for social communication, and discuss the major challenges that lie ahead.
- Published
- 2017
- Full Text
- View/download PDF
36. The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres.
- Author
-
Ince RAA, Jaworska K, Gross J, Panzeri S, van Rijsbergen NJ, Rousselet GA, and Schyns PG
- Abstract
A key to understanding visual cognition is to determine "where", "when", and "how" brain responses reflect the processing of the specific visual features that modulate categorization behavior-the "what". The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features.
- Published
- 2016
- Full Text
- View/download PDF
37. Four not six: Revealing culturally common facial expressions of emotion.
- Author
-
Jack RE, Sun W, Delis I, Garrod OG, and Schyns PG
- Subjects
- Adult, Cross-Cultural Comparison, Female, Humans, Male, Young Adult, Culture, Emotions physiology, Facial Expression, Language
- Abstract
As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics.
- Published
- 2016
- Full Text
- View/download PDF
38. Space-by-time decomposition for single-trial decoding of M/EEG activity.
- Author
-
Delis I, Onken A, Schyns PG, Panzeri S, and Philiastides MG
- Subjects
- Algorithms, Female, Humans, Male, Reproducibility of Results, Sensitivity and Specificity, Young Adult, Brain Mapping methods, Electroencephalography methods, Image Interpretation, Computer-Assisted methods, Magnetoencephalography methods, Pattern Recognition, Visual physiology, Spatio-Temporal Analysis, Visual Cortex physiology
- Abstract
We develop a novel methodology for the single-trial analysis of multichannel time-varying neuroimaging signals. We introduce the space-by-time M/EEG decomposition, based on Non-negative Matrix Factorization (NMF), which describes single-trial M/EEG signals using a set of non-negative spatial and temporal components that are linearly combined with signed scalar activation coefficients. We illustrate the effectiveness of the proposed approach on an EEG dataset recorded during the performance of a visual categorization task. Our method extracts three temporal and two spatial functional components achieving a compact yet full representation of the underlying structure, which validates and summarizes succinctly results from previous studies. Furthermore, we introduce a decoding analysis that allows determining the distinct functional role of each component and relating them to experimental conditions and task parameters. In particular, we demonstrate that the presented stimulus and the task difficulty of each trial can be reliably decoded using specific combinations of components from the identified space-by-time representation. When comparing with a sliding-window linear discriminant algorithm, we show that our approach yields more robust decoding performance across participants. Overall, our findings suggest that the proposed space-by-time decomposition is a meaningful low-dimensional representation that carries the relevant information of single-trial M/EEG signals.
- Published
- 2016
- Full Text
- View/download PDF
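A rough sketch of extracting non-negative temporal and spatial modules from trial data (the paper introduces a dedicated space-by-time tri-factorization; the two independent NMF unfoldings below, and all sizes, are simplifying assumptions for illustration):

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
n_trials, n_time, n_chan = 120, 50, 32
X = rng.random((n_trials, n_time, n_chan))  # hypothetical non-negative M/EEG power

# Temporal modules: NMF on the (trials x channels) x time unfolding.
Ht = NMF(n_components=3, init="nndsvda", max_iter=500).fit(
    X.transpose(0, 2, 1).reshape(-1, n_time)).components_  # shape (3, time)

# Spatial modules: NMF on the (trials x time) x channels unfolding.
Hs = NMF(n_components=2, init="nndsvda", max_iter=500).fit(
    X.reshape(-1, n_chan)).components_                      # shape (2, channels)

# Per-trial activation coefficients for each (temporal, spatial) module pair
# would then be fitted by projecting each trial onto Ht and Hs.
print(Ht.shape, Hs.shape)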
39. Space-by-time manifold representation of dynamic facial expressions for emotion categorization.
- Author
-
Delis I, Chen C, Jack RE, Garrod OG, Panzeri S, and Schyns PG
- Subjects
- Environment, Fear physiology, Female, Happiness, Humans, Male, Young Adult, Emotions physiology, Facial Expression, Movement physiology, Space Perception physiology, Time Perception physiology
- Abstract
Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism-termed space-by-time manifold decomposition-that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected "other." Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions.
- Published
- 2016
- Full Text
- View/download PDF
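The categorization step in the record above can be sketched as decoding emotion labels from the per-trial activation coefficients of the Action Unit and temporal components. The toy data below are invented (six artificially separable classes), and linear discriminant analysis is an assumed stand-in for the observers' categorization manifold.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 600
emotions = rng.integers(0, 6, n_trials)       # happy..sad, coded 0-5
# Toy activation coefficients on (AU component x temporal component) pairs;
# each emotion shifts a different coefficient so that trials are separable.
coeffs = rng.standard_normal((n_trials, 6)) + np.eye(6)[emotions] * 2.0

acc = cross_val_score(LinearDiscriminantAnalysis(), coeffs, emotions, cv=5)
print(f"mean decoding accuracy: {acc.mean():.2f}")  # well above chance (1/6)
```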
40. Stimulus features coded by single neurons of a macaque body category selective patch.
- Author
-
Popivanov ID, Schyns PG, and Vogels R
- Subjects
- Animals, Brain Mapping, Functional Neuroimaging, Human Body, Macaca mulatta physiology, Magnetic Resonance Imaging, Male, Pattern Recognition, Visual physiology, Photic Stimulation, Neurons physiology, Temporal Lobe physiology
- Abstract
Body category-selective regions of the primate temporal cortex respond to images of bodies, but it is unclear which fragments of such images drive single neurons' responses in these regions. Here we applied the Bubbles technique to the responses of single macaque middle superior temporal sulcus (midSTS) body patch neurons to reveal the image fragments the neurons respond to. We found that local image fragments such as extremities (limbs), curved boundaries, and parts of the torso drove the large majority of neurons. Bubbles revealed the whole body in only a few neurons. Neurons coded the features in a manner that was tolerant to translation and scale changes. Most image fragments were excitatory but for a few neurons both inhibitory and excitatory fragments (opponent coding) were present in the same image. The fragments we reveal here in the body patch with Bubbles differ from those suggested in previous studies of face-selective neurons in face patches. Together, our data indicate that the majority of body patch neurons respond to local image fragments that occur frequently, but not exclusively, in bodies, with a coding that is tolerant to translation and scale. Overall, the data suggest that the body category selectivity of the midSTS body patch depends more on the feature statistics of bodies (e.g., extremities occur more frequently in bodies) than on semantics (bodies as an abstract category).
- Published
- 2016
- Full Text
- View/download PDF
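A hedged sketch of the Bubbles logic used in the record above: generate random Gaussian-aperture masks, record a response to each masked stimulus, and correlate mask visibility with the response to reveal the driving fragments. Mask sizes, bubble counts, and the simulated neuron are all invented for illustration.

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Sum of Gaussian apertures at random positions, clipped to [0, 1]."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape)
    for cy, cx in zip(rng.integers(0, shape[0], n_bubbles),
                      rng.integers(0, shape[1], n_bubbles)):
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

rng = np.random.default_rng(0)
n_trials, shape = 500, (128, 128)
masks = np.stack([bubbles_mask(shape, 10, 8.0, rng) for _ in range(n_trials)])
# Toy spike counts driven by the visibility of one image fragment.
rates = rng.poisson(5 + 20 * masks[:, 40, 64])

# Response-weighted classification image: fragments whose visibility covaries
# with firing reveal what the simulated neuron responds to.
z = (rates - rates.mean()) / rates.std()
ci = np.tensordot(z, masks, axes=(0, 0)) / n_trials
```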
41. Facial Expression Aftereffect Revealed by Adaptation to Emotion-Invisible Dynamic Bubbled Faces.
- Author
-
Luo C, Wang Q, Schyns PG, Kingdom FA, and Xu H
- Subjects
- Adaptation, Physiological, Adult, Emotions physiology, Female, Humans, Male, Models, Neurological, Models, Psychological, Neural Pathways physiology, Pattern Recognition, Visual physiology, Visual Perception physiology, Facial Expression, Figural Aftereffect physiology
- Abstract
Visual adaptation is a powerful tool to probe the short-term plasticity of the visual system. Adapting to local features such as oriented lines can distort our judgment of subsequently presented lines, a phenomenon known as the tilt aftereffect. The tilt aftereffect is believed to be processed at low levels of the visual cortex, such as V1. Adaptation to faces, on the other hand, can produce significant aftereffects in high-level traits such as identity, expression, and ethnicity. However, whether face adaptation necessitates awareness of face features is debatable. In the current study, we investigated whether facial expression aftereffects (FEAE) can be generated by partially visible faces. We first generated partially visible faces using the bubbles technique, in which the face was seen through randomly positioned circular apertures, and selected the bubbled faces for which the subjects were unable to identify happy or sad expressions. When the subjects adapted to static displays of these partial faces, no significant FEAE was found. However, when the subjects adapted to a dynamic video display of a series of different partial faces, a significant FEAE was observed. In both conditions, subjects could not identify facial expression in the individual adapting faces. These results suggest that our visual system is able to integrate unrecognizable partial faces over a short period of time and that the integrated percept affects our judgment of subsequently presented faces. We conclude that FEAE can be generated by partial faces carrying few facial expression cues, implying that our cognitive system fills in the missing parts during adaptation, or that subcortical structures are activated by the bubbled faces without conscious recognition of emotion during adaptation.
- Published
- 2015
- Full Text
- View/download PDF
42. Tracing the Flow of Perceptual Features in an Algorithmic Brain Network.
- Author
-
Ince RA, van Rijsbergen NJ, Thut G, Rousselet GA, Gross J, Panzeri S, and Schyns PG
- Subjects
- Algorithms, Brain Mapping, Humans, Models, Neurological, Brain physiology, Cognition physiology, Nerve Net physiology, Perception physiology
- Abstract
The model of the brain as an information processing machine is a profound hypothesis in which neuroscience, psychology and theory of computation are now deeply rooted. Modern neuroscience aims to model the brain as a network of densely interconnected functional nodes. However, to model the dynamic information processing mechanisms of perception and cognition, it is imperative to understand brain networks at an algorithmic level, i.e., as the information flow that network nodes code and communicate. Here, using innovative methods (Directed Feature Information), we reconstructed examples of possible algorithmic brain networks that code and communicate the specific features underlying two distinct perceptions of the same ambiguous picture. In each observer, we identified a network architecture comprising one occipito-temporal hub where the features underlying both perceptual decisions dynamically converge. Our focus on detailed information flow represents an important step towards a new brain algorithmics to model the mechanisms of perception and cognition.
- Published
- 2015
- Full Text
- View/download PDF
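Directed Feature Information can be sketched, under the assumption that it is the feature-specific part of transfer entropy (TE minus TE conditioned on the stimulus feature), with a plug-in estimator on discretized variables. This toy construction is not the paper's estimator, and the variable names and data are invented.

```python
import numpy as np

def entropy(*vars_):
    """Plug-in joint entropy (bits) of discrete 1-D arrays."""
    joint = np.stack(vars_, axis=1)
    _, counts = np.unique(joint, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def cond_mi(a, b, *c):
    """I(A;B|C) for discrete arrays, via the chain rule on entropies."""
    return entropy(a, *c) + entropy(b, *c) - entropy(a, b, *c) - entropy(*c)

rng = np.random.default_rng(0)
n = 20000
feature = rng.integers(0, 2, n)             # e.g., a diagnostic face feature
x_past = feature ^ (rng.random(n) < 0.1)    # sender node codes the feature
y_now = x_past ^ (rng.random(n) < 0.1)      # receiver inherits it after a delay
y_past = rng.integers(0, 2, n)              # receiver's own past (independent)

te = cond_mi(y_now, x_past, y_past)                    # transfer entropy X -> Y
te_given_f = cond_mi(y_now, x_past, y_past, feature)   # TE with feature removed
dfi = te - te_given_f                                  # feature-specific transfer
print(f"TE = {te:.3f} bits, DFI = {dfi:.3f} bits")
```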
43. The Human Face as a Dynamic Tool for Social Communication.
- Author
-
Jack RE and Schyns PG
- Subjects
- Female, Humans, Male, Communication, Face anatomy & histology, Face physiology
- Abstract
As a highly social species, humans frequently exchange social information to support almost all facets of life. One of the richest and most powerful tools in social communication is the face, from which observers can quickly and easily make a number of inferences: about identity, gender, sex, age, race, ethnicity, sexual orientation, physical health, attractiveness, emotional state, personality traits, pain or physical pleasure, deception, and even social status. With the advent of the digital economy, increasing globalization and cultural integration, understanding precisely which face information supports social communication and which produces misunderstanding is central to the evolving needs of modern society (for example, in the design of socially interactive digital avatars and companion robots). Doing so is challenging, however, because the face can be thought of as comprising a high-dimensional, dynamic information space, and this impacts cognitive science and neuroimaging, and their broader applications in the digital economy. New opportunities to address this challenge are arising from the development of new methods and technologies, coupled with the emergence of a modern scientific culture that embraces cross-disciplinary approaches. Here, we briefly review one such approach that combines state-of-the-art computer graphics, psychophysics and vision science, cultural psychology and social cognition, and highlight the main knowledge advances it has generated. In the light of current developments, we provide a vision of the future directions in the field of human facial communication within and across cultures., (Copyright © 2015 Elsevier Ltd. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
44. Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners.
- Author
-
Park H, Ince RA, Schyns PG, Thut G, and Gross J
- Subjects
- Acoustic Stimulation, Brain Mapping, Humans, Magnetoencephalography, Speech Perception, Auditory Cortex physiology, Speech
- Abstract
Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception., (Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
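Speech-brain coupling of the kind quantified in the record above is commonly measured as phase locking between the band-limited speech envelope and auditory-cortex activity; the directed (top-down) component additionally requires transfer entropy, which is omitted here. A toy phase-locking sketch with invented signals and parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 200.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
# Toy signals: a 4 Hz "speech envelope" and an auditory-cortex trace that
# partially tracks it (names and parameters are illustrative assumptions).
speech_env = np.abs(np.sin(2 * np.pi * 4 * t)) + 0.2 * rng.standard_normal(t.size)
meg = np.sin(2 * np.pi * 4 * t + 0.5) + rng.standard_normal(t.size)

def band_phase(x, lo, hi, fs):
    """Instantaneous phase in a band, via band-pass filter + Hilbert transform."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return np.angle(hilbert(filtfilt(b, a, x)))

# Phase-locking value in the delta/theta band (2-8 Hz).
dphi = band_phase(speech_env, 2, 8, fs) - band_phase(meg, 2, 8, fs)
plv = np.abs(np.mean(np.exp(1j * dphi)))
print(f"speech-brain PLV: {plv:.2f}")
```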
45. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.
- Author
-
Richoz AR, Jack RE, Garrod OG, Schyns PG, and Caldara R
- Subjects
- Brain Mapping, Cerebral Cortex physiopathology, Discrimination, Psychological, Female, Humans, Male, Middle Aged, Models, Psychological, Prosopagnosia diagnosis, Emotions physiology, Face, Facial Expression, Prosopagnosia psychology, Visual Perception physiology
- Abstract
The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula, and the posterior superior temporal sulcus, pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights into the healthy and impaired face systems, question evidence of deficits obtained from patients by using static images of facial expressions, and offer novel routes for patient rehabilitation., (Copyright © 2014 Elsevier Ltd. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
46. With age comes representational wisdom in social signals.
- Author
-
van Rijsbergen N, Jaworska K, Rousselet GA, and Schyns PG
- Subjects
- Adolescent, Adult, Aged, Face, Female, Humans, Image Processing, Computer-Assisted, Middle Aged, Nontherapeutic Human Experimentation, Psychological Tests, Young Adult, Aging psychology, Perception physiology
- Abstract
In an increasingly aging society, age has become a foundational dimension of social grouping broadly targeted by advertising and governmental policies. However, the perception of old age mainly induces strong negative social biases. To characterize their cognitive and perceptual foundations, we modeled the mental representations of faces associated with three age groups (young age, middle age, and old age), in younger and older participants. We then validated the accuracy of each mental representation of age with independent validators. Using statistical image processing, we identified the features of mental representations that predict perceived age. Here, we show that whereas younger people mentally dichotomize aging into two groups, themselves (younger) and others (older), older participants faithfully represent the features of young age, middle age, and old age, with richer representations of all considered ages. Our results demonstrate that, contrary to popular public belief, older minds depict socially relevant information more accurately than their younger counterparts., (Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2014
- Full Text
- View/download PDF
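The mental-representation modeling in the record above rests on noise-based reverse correlation: averaging the noise fields by response reveals the pixels a participant associates with a judgment. A minimal sketch with invented toy data (the paper's stimuli, responses, and statistics differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, h, w = 2000, 64, 64
noise = rng.standard_normal((n_trials, h, w))   # noise superimposed on a base face
# Toy responses: True if the participant judged the noisy face "old".
judged_old = rng.integers(0, 2, n_trials).astype(bool)

# Classification image: noise pixels that push judgments toward "old".
ci = noise[judged_old].mean(axis=0) - noise[~judged_old].mean(axis=0)
# In practice, smoothing and z-scoring ci against a permutation null follows.
```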
47. Eye coding mechanisms in early human face event-related potentials.
- Author
-
Rousselet GA, Ince RA, van Rijsbergen NJ, and Schyns PG
- Subjects
- Attention, Electroencephalography, Female, Fixation, Ocular physiology, Humans, Male, Reaction Time, Young Adult, Evoked Potentials, Visual physiology, Face, Pattern Recognition, Visual physiology
- Abstract
In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time-window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye., (© 2014 ARVO.)
- Published
- 2014
- Full Text
- View/download PDF
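The information-theoretic step in the record above can be sketched as mutual information between the visibility of the contralateral eye and single-trial N170 latency, here with a simple histogram estimator and invented data (the study samples faces with Gaussian apertures and uses a different estimator):

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Plug-in MI (bits) between two continuous variables via 2-D histograms."""
    cxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = cxy / cxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n = 1000
eye_visibility = rng.random(n)   # toy: aperture energy over the eye region
# Toy latencies: more visible eye -> earlier N170 peak, plus noise (in ms).
n170_latency = 150 - 20 * eye_visibility + 5 * rng.standard_normal(n)
print(f"MI(eye visibility; N170 latency) = "
      f"{mutual_info(eye_visibility, n170_latency):.3f} bits")
```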
48. Facial movements strategically camouflage involuntary social signals of face morphology.
- Author
-
Gill D, Garrod OG, Jack RE, and Schyns PG
- Subjects
- Adolescent, Adult, Female, Humans, Male, Perception, Social Perception, Young Adult, Emotions physiology, Face anatomy & histology, Facial Expression, Sociological Factors
- Abstract
Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.
- Published
- 2014
- Full Text
- View/download PDF
49. Beyond gist: strategic and incremental information accumulation for scene categorization.
- Author
-
Malcolm GL, Nuthmann A, and Schyns PG
- Subjects
- Eye Movements physiology, Fixation, Ocular physiology, Humans, Judgment physiology, Reaction Time physiology, Form Perception physiology, Pattern Recognition, Visual physiology
- Abstract
Research on scene categorization generally concentrates on gist processing, particularly the speed and minimal features with which the "story" of a scene can be extracted. However, this focus has led to a paucity of research into how scenes are categorized at specific hierarchical levels (e.g., a scene could be a road or more specifically a highway); consequently, research has disregarded a potential diagnostically driven feedback process. We presented participants with scenes that were low-pass filtered so only their gist was revealed, while a gaze-contingent window provided the fovea with full-resolution details. By recording where in a scene participants fixated prior to making a basic- or subordinate-level judgment, we identified the scene information accrued when participants made either categorization. We observed a feedback process, dependent on categorization level, that systematically accrues sufficient and detailed diagnostic information from the same scene. Our results demonstrate that during scene processing, a diagnostically driven bidirectional interplay between top-down and bottom-up information facilitates relevant category processing.
- Published
- 2014
- Full Text
- View/download PDF
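The stimulus paradigm in the record above (a gist-only scene plus a full-resolution gaze-contingent window) amounts to compositing a low-pass filtered image with the original inside a Gaussian aperture at fixation. A sketch with invented filter and aperture parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
scene = rng.random((480, 640))              # stand-in for a scene image
gist = gaussian_filter(scene, sigma=8)      # low-pass: only coarse gist survives

# Gaze-contingent window: full resolution inside a Gaussian aperture centered
# on the current fixation (toy coordinates and aperture width).
fy, fx = 240, 320
ys, xs = np.mgrid[0:480, 0:640]
aperture = np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * 40.0 ** 2))
display = aperture * scene + (1 - aperture) * gist
```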
50. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.
- Author
-
Jack RE, Garrod OGB, and Schyns PG
- Subjects
- Brain physiology, Humans, Reaction Time, Emotions, Face physiology, Facial Expression
- Abstract
Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamical, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories, and instead suggesting four., (Copyright © 2014 Elsevier Ltd. All rights reserved.)
- Published
- 2014
- Full Text
- View/download PDF
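The evolving-hierarchy claim in the record above can be sketched as time-resolved decoding: a Bayesian classifier trained on binary Action Unit activations at each time point should sit near chance early on (when only coarse signals are present) and become accurate late. The generative toy data below, including which AUs become diagnostic when, are pure invention for illustration.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_times, n_aus = 600, 10, 12
emotions = rng.integers(0, 6, n_trials)
# Toy AU activations: early frames carry only noise-like coarse signals;
# later frames switch on one emotion-specific AU per category.
X = rng.random((n_trials, n_times, n_aus)) < 0.2
X[:, n_times // 2:, :6] |= (emotions[:, None, None] == np.arange(6)[None, None, :])

for t in range(n_times):
    acc = cross_val_score(BernoulliNB(), X[:, t, :], emotions, cv=5).mean()
    print(f"time {t}: decoding accuracy {acc:.2f}")   # rises in late frames
```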