36 results for "Giordano BL"
Search Results
2. Bridging auditory perception and natural language processing with semantically informed deep neural networks.
- Author
-
Esposito M, Valente G, Plasencia-Calaña Y, Dumontier M, Giordano BL, and Formisano E
- Subjects
- Humans, Deep Learning, Sound, Auditory Perception physiology, Semantics, Natural Language Processing, Neural Networks, Computer
- Abstract
Sound recognition is effortless for humans but poses a significant challenge for artificial hearing systems. Deep neural networks (DNNs), especially convolutional neural networks (CNNs), have recently surpassed traditional machine learning in sound classification. However, current DNNs map sounds to labels using binary categorical variables, neglecting the semantic relations between labels. Cognitive neuroscience research suggests that human listeners exploit such semantic information besides acoustic cues. Hence, our hypothesis is that incorporating semantic information improves DNNs' sound recognition performance, emulating human behaviour. In our approach, sound recognition is framed as a regression problem, with CNNs trained to map spectrograms to continuous semantic representations from NLP models (Word2Vec, BERT, and CLAP text encoder). Two DNN types were trained: semDNN with continuous embeddings and catDNN with categorical labels, both with a dataset extracted from a collection of 388,211 sounds enriched with semantic descriptions. Evaluations across four external datasets confirmed the superiority of semantic labeling from semDNN compared to catDNN, preserving higher-level relations. Importantly, an analysis of human similarity ratings for natural sounds showed that semDNN approximated human listener behaviour better than catDNN, other DNNs, and NLP models. Our work contributes to understanding the role of semantics in sound recognition, bridging the gap between artificial systems and human auditory perception., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
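The regression framing described in this abstract (mapping sounds to continuous embeddings rather than one-hot labels) implies a retrieval step at test time: the embedding predicted from a spectrogram is matched to the nearest label embedding. A minimal sketch of that step, with invented toy vectors standing in for Word2Vec/BERT/CLAP embeddings (not the authors' code or data):

```python
import numpy as np

# Hypothetical toy label embeddings; the study used NLP-model vectors instead.
labels = {
    "dog bark": np.array([0.9, 0.1, 0.0]),
    "car horn": np.array([0.1, 0.9, 0.2]),
    "applause": np.array([0.0, 0.2, 0.9]),
}

def nearest_label(pred, labels):
    """Map a regressed embedding to the closest label by cosine similarity."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(labels, key=lambda name: cos(pred, labels[name]))

pred = np.array([0.8, 0.2, 0.1])    # embedding predicted from a spectrogram
print(nearest_label(pred, labels))  # -> dog bark
```

Because the target space is continuous, errors degrade gracefully: a misclassification tends to land on a semantically related label rather than an arbitrary one.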
3. Intermediate acoustic-to-semantic representations link behavioral and neural responses to natural sounds.
- Author
-
Giordano BL, Esposito M, Valente G, and Formisano E
- Subjects
- Humans, Acoustic Stimulation methods, Acoustics, Magnetic Resonance Imaging, Auditory Perception physiology, Brain Mapping methods, Semantics, Auditory Cortex physiology
- Abstract
Recognizing sounds implicates the cerebral transformation of input waveforms into semantic representations. Although past research identified the superior temporal gyrus (STG) as a crucial cortical region, the computational fingerprint of these cerebral transformations remains poorly characterized. Here, we exploit a model comparison framework and contrasted the ability of acoustic, semantic (continuous and categorical) and sound-to-event deep neural network representation models to predict perceived sound dissimilarity and 7 T human auditory cortex functional magnetic resonance imaging responses. We confirm that spectrotemporal modulations predict early auditory cortex (Heschl's gyrus) responses, and that auditory dimensions (for example, loudness, periodicity) predict STG responses and perceived dissimilarity. Sound-to-event deep neural networks predict Heschl's gyrus responses similar to acoustic models but, notably, they outperform all competing models at predicting both STG responses and perceived dissimilarity. Our findings indicate that STG entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for. These representations are compositional in nature and relevant to behavior., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
4. Decoding sounds depicting hand-object interactions in primary somatosensory cortex.
- Author
-
Bailey KM, Giordano BL, Kaas AL, and Smith FW
- Subjects
- Animals, Touch physiology, Neurons physiology, Magnetic Resonance Imaging, Brain Mapping, Somatosensory Cortex diagnostic imaging, Somatosensory Cortex physiology, Hand
- Abstract
Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influences from both within and across modality connections. Recent work has shown that primary sensory areas can respond to and, in some cases, discriminate stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand-object interactions (e.g. bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from 3 categories: hand-object interactions, and control categories of pure tones and animal vocalizations, while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand-object interaction sounds within SI, but not for either control category. Crucially, in the hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand-object interactions compared to pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand-object interactions elicits different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities even to primary sensory areas., (© The Author(s) 2022. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.)
- Published
- 2023
- Full Text
- View/download PDF
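The multivoxel pattern analysis described above can be illustrated with a toy decoder. This is a minimal sketch using simulated "voxel" patterns and a nearest-centroid classifier under leave-one-out cross-validation; the data, dimensions, and classifier are invented for illustration and are not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical voxel patterns: 20 trials x 10 voxels per sound category,
# with a small mean difference between the two categories.
A = rng.normal(0.0, 1.0, (20, 10))
B = rng.normal(0.6, 1.0, (20, 10))
X = np.vstack([A, B])
y = np.array([0] * 20 + [1] * 20)

def loo_nearest_centroid(X, y):
    """Leave-one-out nearest-centroid decoding accuracy."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)  # centroid of class 0, test trial held out
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(y)

print(loo_nearest_centroid(X, y))  # well above the 0.5 chance level
```

Above-chance cross-validated accuracy is the evidence that the region's activity patterns carry category information, which is the logic behind the SI decoding result.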
5. A national train-the-trainer program to enhance police response to veterans with mental health issues: Primary officer and trainer outcomes and material dissemination.
- Author
-
Weaver CM, Rosenthal J, Giordano BL, Schmidt C, Wickham RE, Mok C, Pettis T, and Clark S
- Subjects
- Humans, Mental Health, Risk Factors, Police, Veterans
- Abstract
Law enforcement officers (LEOs) may play the most important role in directing people in mental health crises into treatment versus incarceration. While most military veterans will never experience a crisis interaction with LEOs, they represent an important at-risk target group for whom to enhance LEO response. The evidence supporting LEO crisis training models includes important limitations that stem from jurisdiction-limited studies and from an emphasis on LEOs who volunteer for mental health training. The current study reports the primary outcomes of a national (U.S.) large-scale mandated train-the-trainer program to enhance VA LEO response to military veterans with mental health issues. Multidisciplinary teams composed of VA LEOs, Veterans Justice Outreach Specialists, and mental health professionals (n = 245) were trained in two nested waves. Both trainers and endpoint LEOs (n = 1,284) improved from pretest to posttest on knowledge and skills in identifying psychological services and related treatment referral resources and cross-discipline collaboration, the latter of which showed some retention at 3-month follow-up. The findings support the potential for LEOs mandated to attend training to improve in important prerequisites to diverting people with mental health issues into care, and away from the criminal justice system. Such results may require professional trainers of LEOs who have themselves received relevant specific training. Potential cautions of such an approach, including inter-team differences and potential for publication bias in extant literature, are also elucidated by the current methodology. The links to all of the collaboratively-developed curriculum materials from the current study are provided for use by qualified professionals. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
- Published
- 2022
- Full Text
- View/download PDF
6. What do we mean with sound semantics, exactly? A survey of taxonomies and ontologies of everyday sounds.
- Author
-
Giordano BL, de Miranda Azevedo R, Plasencia-Calaña Y, Formisano E, and Dumontier M
- Abstract
Taxonomies and ontologies for the characterization of everyday sounds have been developed in several research fields, including auditory cognition, soundscape research, artificial hearing, sound design, and medicine. Here, we surveyed 36 such knowledge organization systems, which we identified through a systematic literature search. To evaluate the semantic domains covered by these systems within a homogeneous framework, we introduced a comprehensive set of verbal sound descriptors (sound source properties; attributes of sensation; sound signal descriptors; onomatopoeias; music genres), which we used to manually label the surveyed descriptor classes. We reveal that most taxonomies and ontologies were developed to characterize higher-level semantic relations between sound sources in terms of the sound-generating objects and actions involved (what/how), or in terms of the environmental context (where). This indicates the current lack of a comprehensive ontology of everyday sounds that simultaneously covers all semantic aspects of the relation between sounds. Such an ontology may have a wide range of applications and purposes, ranging from extending our scientific knowledge of auditory processes in the real world, to developing artificial hearing systems., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2022 Giordano, de Miranda Azevedo, Plasencia-Calaña, Formisano and Dumontier.)
- Published
- 2022
- Full Text
- View/download PDF
7. A functional magnetic resonance imaging examination of audiovisual observation of a point-light string quartet using intersubject correlation and physical feature analysis.
- Author
-
Lillywhite A, Nijhof D, Glowinski D, Giordano BL, Camurri A, Cross I, and Pollick FE
- Abstract
We use functional Magnetic Resonance Imaging (fMRI) to explore synchronized neural responses between observers of audiovisual presentation of a string quartet performance during free viewing. Audio presentation was accompanied by visual presentation of the string quartet as stick figures observed from a static viewpoint. Brain data from 18 musical novices were obtained during audiovisual presentation of a 116 s performance of the allegro of String Quartet, No. 14 in D minor by Schubert played by the 'Quartetto di Cremona.' These data were analyzed using intersubject correlation (ISC). Results showed extensive ISC in auditory and visual areas as well as parietal cortex, frontal cortex and subcortical areas including the medial geniculate and basal ganglia (putamen). These results from a single fixed viewpoint of multiple musicians are greater than previous reports of ISC from unstructured group activity but are broadly consistent with related research that used ISC to explore listening to music or watching solo dance. A feature analysis examining the relationship between brain activity and physical features of the auditory and visual signals yielded findings of a large proportion of activity related to auditory and visual processing, particularly in the superior temporal gyrus (STG) as well as midbrain areas. Motor areas were also involved, potentially as a result of watching motion from the stick figure display of musicians in the string quartet. These results reveal involvement of areas such as the putamen in processing complex musical performance and highlight the potential of using brief naturalistic stimuli to localize distinct brain areas and elucidate potential mechanisms underlying multisensory integration., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2022 Lillywhite, Nijhof, Glowinski, Giordano, Camurri, Cross and Pollick.)
- Published
- 2022
- Full Text
- View/download PDF
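The intersubject correlation measure used in this study has a simple leave-one-out form: each subject's regional timecourse is correlated with the average timecourse of the remaining subjects, and the correlations are averaged. A toy sketch with simulated data (subject count, signal, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 6 subjects x 200 timepoints for one region,
# each a noisy copy of a shared stimulus-driven signal.
shared = np.sin(np.linspace(0, 8 * np.pi, 200))
data = shared + 0.8 * rng.normal(size=(6, 200))

def isc(data):
    """Leave-one-out intersubject correlation: each subject vs. mean of the rest."""
    n = data.shape[0]
    rs = []
    for i in range(n):
        others = data[np.arange(n) != i].mean(axis=0)
        rs.append(np.corrcoef(data[i], others)[0, 1])
    return float(np.mean(rs))

print(isc(data))  # positive, because all subjects share the stimulus-locked signal
```

Regions with no stimulus-locked response would show ISC near zero, which is what makes the measure suitable for naturalistic, free-viewing paradigms with no trial structure.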
8. Group-level inference of information-based measures for the analyses of cognitive brain networks from neurophysiological data.
- Author
-
Combrisson E, Allegra M, Basanisi R, Ince RAA, Giordano BL, Bastin J, and Brovelli A
- Subjects
- Cognition, Humans, Neuroimaging methods, Reproducibility of Results, Brain physiology, Brain Mapping methods
- Abstract
The reproducibility crisis in neuroimaging, in particular in the case of underpowered studies, has introduced doubts on our ability to reproduce, replicate and generalize findings. As a response, we have seen the emergence of suggested guidelines and principles for neuroscientists known as Good Scientific Practice for conducting more reliable research. Still, every study remains almost unique in its combination of analytical and statistical approaches. While this is understandable considering the diversity of designs and brain data recording, it also represents a striking point against reproducibility. Here, we propose a non-parametric permutation-based statistical framework, primarily designed for neurophysiological data, in order to perform group-level inferences on non-negative measures of information encompassing metrics from information-theory, machine-learning or measures of distances. The framework supports both fixed- and random-effect models to adapt to inter-individual and inter-session variability. Using numerical simulations, we compared the accuracy of both group models in retrieving the ground truth, together with test- and cluster-wise corrections for multiple comparisons. We then reproduced and extended existing results using both spatially uniform MEG and non-uniform intracranial neurophysiological data. We showed how the framework can be used to extract stereotypical task- and behavior-related effects across the population covering scales from the local level of brain regions, inter-areal functional connectivity to measures summarizing network properties. We also present an open-source Python toolbox called Frites
that includes the proposed statistical pipeline using information-theoretic metrics such as single-trial functional connectivity estimations for the extraction of cognitive brain networks. Taken together, we believe that this framework deserves careful attention as its robustness and flexibility could be the starting point toward the uniformization of statistical approaches., Competing Interests: Declaration of Competing Interest The authors declare that there is no conflict of interests regarding the publication of this paper., (Copyright © 2022 The Author(s). Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
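The permutation logic for group-level inference on information measures can be sketched in a few lines. This is a toy sign-flip test on per-subject values, not the Frites implementation; the subject values are invented, and (as the abstract notes for non-negative measures) they are assumed already bias-corrected so that zero is a meaningful null:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical bias-corrected per-subject information values (e.g., MI in bits).
mi = np.array([0.12, 0.08, 0.15, 0.11, 0.09, 0.13, 0.10, 0.14])

def sign_flip_pvalue(values, n_perm=10000):
    """Non-parametric group-level test of mean > 0 via random sign flips.

    Under the null, each subject's effect is symmetric around zero, so
    randomly flipping signs generates a null distribution of group means.
    """
    obs = values.mean()
    flips = rng.choice([-1.0, 1.0], size=(n_perm, values.size))
    null = (flips * values).mean(axis=1)
    return float((np.sum(null >= obs) + 1) / (n_perm + 1))

print(sign_flip_pvalue(mi))  # small: every subject shows a positive effect
```

Cluster-wise correction, as used in the paper, applies the same permutation scheme to cluster statistics rather than single values.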
9. Functionally homologous representation of vocalizations in the auditory cortex of humans and macaques.
- Author
-
Bodin C, Trapeau R, Nazarian B, Sein J, Degiovanni X, Baurberg J, Rapha E, Renaud L, Giordano BL, and Belin P
- Subjects
- Acoustic Stimulation, Animals, Auditory Perception physiology, Brain Mapping, Humans, Macaca, Magnetic Resonance Imaging, Primates, Vocalization, Animal physiology, Auditory Cortex physiology
- Abstract
How the evolution of speech has transformed the human auditory cortex compared to other primates remains largely unknown. While primary auditory cortex is organized largely similarly in humans and macaques, the picture is much less clear at higher levels of the anterior auditory pathway, particularly regarding the processing of conspecific vocalizations (CVs). A "voice region" similar to the human voice-selective areas has been identified in the macaque right anterior temporal lobe with functional MRI; however, its anatomical localization, seemingly inconsistent with that of the human temporal voice areas (TVAs), has suggested a "repositioning of the voice area" in recent human evolution. Here we report a functional homology in the cerebral processing of vocalizations by macaques and humans, using comparative fMRI and a condition-rich auditory stimulation paradigm. We find that the anterior temporal lobe of both species possesses cortical voice areas that are bilateral and not only prefer conspecific vocalizations but also implement a representational geometry categorizing them apart from all other sounds in a species-specific but homologous manner. These results reveal a more similar functional organization of higher-level auditory cortex in macaques and humans than currently known., Competing Interests: Declaration of interests The authors declare no competing interest., (Copyright © 2021 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
10. The representational dynamics of perceived voice emotions evolve from categories to dimensions.
- Author
-
Giordano BL, Whiting C, Kriegeskorte N, Kotz SA, Gross J, and Belin P
- Subjects
- Acoustic Stimulation methods, Anger, Humans, Voice physiology, Arousal physiology, Emotions physiology, Speech Perception physiology
- Abstract
Long-standing affective science theories conceive the perception of emotional stimuli either as discrete categories (for example, an angry voice) or continuous dimensional attributes (for example, an intense and negative vocal emotion). Which position provides a better account is still widely debated. Here we contrast the positions to account for acoustics-independent perceptual and cerebral representational geometry of perceived voice emotions. We combined multimodal imaging of the cerebral response to heard vocal stimuli (using functional magnetic resonance imaging and magneto-encephalography) with post-scanning behavioural assessment of voice emotion perception. By using representational similarity analysis, we find that categories prevail in perceptual and early (less than 200 ms) frontotemporal cerebral representational geometries and that dimensions impinge predominantly on a later limbic-temporal network (at 240 ms and after 500 ms). These results reconcile the two opposing views by reframing the perception of emotions as the interplay of cerebral networks with different representational dynamics that emphasize either categories or dimensions., (© 2021. The Author(s), under exclusive licence to Springer Nature Limited.)
- Published
- 2021
- Full Text
- View/download PDF
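The representational similarity analysis used in this study compares the geometry of dissimilarities: each candidate model's representational dissimilarity matrix (RDM) is rank-correlated with the behavioral or cerebral RDM, and the better-correlated model wins. A minimal sketch with invented toy RDM vectors (upper triangles for 4 stimuli); the rank transform below assumes no tied values:

```python
import numpy as np

def rank(x):
    """Rank transform (no tie handling; the toy vectors below have no ties)."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(len(x), dtype=float)
    return r

def spearman(a, b):
    """Spearman correlation as Pearson correlation of ranks."""
    return float(np.corrcoef(rank(a), rank(b))[0, 1])

# Hypothetical dissimilarities (upper triangle of each 4x4 RDM, 6 pairs).
behav   = np.array([0.1, 0.8, 0.7, 0.9, 0.6, 0.2])  # perceived dissimilarity
model_a = np.array([0.2, 0.9, 0.6, 0.8, 0.7, 0.1])  # model matching behavior
model_b = np.array([0.9, 0.1, 0.3, 0.2, 0.4, 0.8])  # model with inverted geometry
print(spearman(behav, model_a), spearman(behav, model_b))
```

Applying this comparison within time windows of the MEG response is what lets the study ask whether categorical or dimensional geometries dominate at each latency.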
11. Gender and Generational Differences in the Internalized Homophobia Questionnaire: An Alignment IRT Analysis.
- Author
-
Wickham RE, Gutierrez R, Giordano BL, Rostosky SS, and Riggle EDB
- Subjects
- Bisexuality, Female, Gender Identity, Humans, Male, Surveys and Questionnaires, United States, Homophobia, Sexual and Gender Minorities
- Abstract
Internalized homophobia (IH) refers to negative attitudes and stereotypes that a lesbian, gay, or bisexual (LGB) person may hold regarding their own sexual identity. Recent sociocultural changes in attitudes and policies affecting LGB people generally reflect broader acceptance of sexual minorities, and may influence the manner in which LGB people experience IH. These experiences should be reflected in the measurement properties of instruments designed to assess IH. This study utilized data from three different samples (N = 3,522) of LGB individuals residing in the United States to examine the invariance of a common self-report IH measure by gender identity (Female, Male) and age cohort (Boomers, Generation X, and Millennials). Multigroup item response theory-differential item functioning analysis using the alignment method revealed that 6 of the 9 Internalized Homophobia Scale items exhibited differential functioning across gender and generation. Latent scores based on the invariant items suggested that Male and Female Boomers exhibited the lowest level of latent IH, relative to the other cohorts.
- Published
- 2021
- Full Text
- View/download PDF
12. The perception of caricatured emotion in voice.
- Author
-
Whiting CM, Kotz SA, Gross J, Giordano BL, and Belin P
- Subjects
- Anger, Emotions, Facial Expression, Humans, Social Perception, Voice
- Abstract
Affective vocalisations such as screams and laughs can convey strong emotional content without verbal information. Previous research using morphed vocalisations (e.g. 25% fear/75% anger) has revealed categorical perception of emotion in voices, showing sudden shifts at emotion category boundaries. However, it is currently unknown how further modulation of vocalisations beyond the veridical emotion (e.g. 125% fear) affects perception. Caricatured facial expressions produce emotions that are perceived as more intense and distinctive, with faster recognition relative to the original and anti-caricatured (e.g. 75% fear) emotions, but a similar effect using vocal caricatures has not been previously examined. Furthermore, caricatures can play a key role in assessing how distinctiveness is identified, in particular by evaluating accounts of emotion perception with reference to prototypes (distance from the central stimulus) and exemplars (density of the stimulus space). Stimuli consisted of four emotions (anger, disgust, fear, and pleasure) morphed at 25% intervals between a neutral expression and each emotion from 25% to 125%, and between each pair of emotions. Emotion perception was assessed using emotion intensity ratings, valence and arousal ratings, speeded categorisation and paired similarity ratings. We report two key findings: 1) across tasks, there was a strongly linear effect of caricaturing, with caricatured emotions (125%) perceived as higher in emotion intensity and arousal, and recognised faster compared to the original emotion (100%) and anti-caricatures (25%-75%); 2) our results reveal evidence for a unique contribution of a prototype-based account in emotion recognition. We show for the first time that vocal caricature effects are comparable to those found previously with facial caricatures. The set of caricatured vocalisations provided opens a promising line of research for investigating vocal affect perception and emotion processing deficits in clinical populations., (Copyright © 2020 The Authors. Published by Elsevier B.V. All rights reserved.)
- Published
- 2020
- Full Text
- View/download PDF
13. Causal Inference in the Multisensory Brain.
- Author
-
Cao Y, Summerfield C, Park H, Giordano BL, and Kayser C
- Subjects
- Acoustic Stimulation, Adult, Bayes Theorem, Female, Humans, Magnetoencephalography, Male, Models, Neurological, Models, Theoretical, Photic Stimulation, Prefrontal Cortex physiology, Young Adult, Auditory Perception physiology, Decision Making physiology, Frontal Lobe physiology, Parietal Lobe physiology, Temporal Lobe physiology, Visual Perception physiology
- Abstract
When combining information across different senses, humans need to flexibly select cues of a common origin while avoiding distraction from irrelevant inputs. The brain could solve this challenge using a hierarchical principle by deriving rapidly a fused sensory estimate for computational expediency and, later and if required, filtering out irrelevant signals based on the inferred sensory cause(s). Analyzing time- and source-resolved human magnetoencephalographic data, we unveil a systematic spatiotemporal cascade of the relevant computations, starting with early segregated unisensory representations, continuing with sensory fusion in parietal-temporal regions, and culminating as causal inference in the frontal lobe. Our results reconcile previous computational accounts of multisensory perception by showing that prefrontal cortex guides flexible integrative behavior based on candidate representations established in sensory and association cortices, thereby framing multisensory integration in the generalized context of adaptive behavior., (Copyright © 2019 Elsevier Inc. All rights reserved.)
- Published
- 2019
- Full Text
- View/download PDF
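The causal-inference computation this study localizes to frontal cortex can be illustrated with the textbook Gaussian observer (in the spirit of Körding et al.'s causal-inference model, not the exact model fitted here; all parameters below are illustrative). The observer weighs two hypotheses: both cues arise from one source, or each cue has its own source:

```python
import numpy as np

def p_common(xa, xv, sa=1.0, sv=1.0, sp=10.0, prior=0.5):
    """Posterior probability that auditory cue xa and visual cue xv share one cause.

    sa, sv: cue noise SDs; sp: SD of the Gaussian prior over source locations.
    """
    # Likelihood under a common cause: the shared source is integrated out.
    var_c = sa**2 * sv**2 + sa**2 * sp**2 + sv**2 * sp**2
    like_c = np.exp(-((xa - xv)**2 * sp**2 + xa**2 * sv**2 + xv**2 * sa**2)
                    / (2 * var_c)) / (2 * np.pi * np.sqrt(var_c))
    # Likelihood under independent causes: one source per cue.
    like_i = (np.exp(-xa**2 / (2 * (sa**2 + sp**2)))
              / np.sqrt(2 * np.pi * (sa**2 + sp**2))
              * np.exp(-xv**2 / (2 * (sv**2 + sp**2)))
              / np.sqrt(2 * np.pi * (sv**2 + sp**2)))
    return float(prior * like_c / (prior * like_c + (1 - prior) * like_i))

print(p_common(0.0, 0.5))  # nearby cues -> common cause likely, so fuse them
print(p_common(0.0, 8.0))  # discrepant cues -> segregate instead
```

The hierarchy in the abstract maps onto this: fusion corresponds to the common-cause estimate computed first, and the later inferred cause(s) determine whether the fused estimate is kept or discarded.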
14. Contributions of local speech encoding and functional connectivity to audio-visual speech perception.
- Author
-
Giordano BL, Ince RAA, Gross J, Schyns PG, Panzeri S, and Kayser C
- Subjects
- Adolescent, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult, Auditory Perception, Frontal Lobe physiology, Speech Perception, Temporal Lobe physiology, Visual Perception
- Abstract
Seeing a speaker's face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker's face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments.
- Published
- 2017
- Full Text
- View/download PDF
15. A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula.
- Author
-
Ince RA, Giordano BL, Kayser C, Rousselet GA, Gross J, and Schyns PG
- Subjects
- Computer Simulation, Electroencephalography, Entropy, Humans, Sensitivity and Specificity, Brain diagnostic imaging, Brain physiology, Brain Mapping, Information Theory, Neuroimaging methods, Normal Distribution
- Abstract
We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541-1573, 2017. © 2016 Wiley Periodicals, Inc., (2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.)
- Published
- 2017
- Full Text
- View/download PDF
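The core of the Gaussian-copula estimator can be sketched in a few lines for the one-dimensional case: rank each variable, map the ranks through the inverse normal CDF, then use the closed-form Gaussian result I = -0.5 * log2(1 - r^2). This toy version omits the multivariate, discrete-variable, and bias-correction machinery of the published open-source code:

```python
import numpy as np
from statistics import NormalDist

def copnorm(x):
    """Copula transform: rank the data, map ranks through the inverse normal CDF."""
    n = len(x)
    ranks = np.empty(n)
    ranks[np.argsort(x)] = np.arange(1, n + 1)
    u = ranks / (n + 1)  # empirical CDF values strictly inside (0, 1)
    inv = NormalDist().inv_cdf
    return np.array([inv(p) for p in u])

def gcmi(x, y):
    """Gaussian-copula mutual information estimate (bits) for two 1-D variables.

    A lower bound on the true MI: only the Gaussian-copula dependence is captured.
    """
    cx, cy = copnorm(x), copnorm(y)
    r = np.corrcoef(cx, cy)[0, 1]
    return float(-0.5 * np.log2(1 - r**2))

rng = np.random.default_rng(3)
x = rng.normal(size=2000)
y = x + rng.normal(size=2000)  # dependent on x (true MI = 0.5 bit)
z = rng.normal(size=2000)      # independent of x
print(gcmi(x, y), gcmi(x, z))
```

Because the transform is rank-based, the estimate is robust to monotonic distortions and outliers in the marginals, which is one reason it is attractive for noisy neuroimaging signals.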
16. Vibrotactile Sensitivity in Active Touch: Effect of Pressing Force.
- Author
-
Papetti S, Järveläinen H, Giordano BL, Schiesser S, and Fröhlich M
- Subjects
- Humans, Mechanical Phenomena, Vibration, Fingers physiology, Sensory Thresholds physiology, Touch physiology
- Abstract
An experiment was conducted to study the effects of force produced by active touch on vibrotactile perceptual thresholds. The task consisted of pressing the fingertip against a flat rigid surface that provided either sinusoidal or broadband vibration. Three force levels were considered, ranging from light touch to hard press. Finger contact areas were measured during the experiment, showing positive correlation with the respective applied forces. Significant effects on thresholds were found for vibration type and force level. Moreover, possibly due to the concurrent effect of large (unconstrained) finger contact areas, active pressing forces, and long duration stimuli, the measured perceptual thresholds are considerably lower than previously reported in the literature.
- Published
- 2017
- Full Text
- View/download PDF
17. The dominance of haptics over audition in controlling wrist velocity during striking movements.
- Author
-
Cao Y, Giordano BL, Avanzini F, and McAdams S
- Subjects
- Adolescent, Adult, Female, Humans, Male, Physical Stimulation methods, Young Adult, Acoustic Stimulation methods, Auditory Perception physiology, Movement physiology, Wrist physiology
- Abstract
Skilled interactions with sounding objects, such as drumming, rely on resolving the uncertainty in the acoustical and tactual feedback signals generated by vibrating objects. Uncertainty may arise from mis-estimation of the objects' geometry-independent mechanical properties, such as surface stiffness. How multisensory information feeds back into the fine-tuning of sound-generating actions remains unexplored. Participants (percussionists, non-percussion musicians, or non-musicians) held a stylus and learned to control their wrist velocity while repeatedly striking a virtual sounding object whose surface stiffness was under computer control. Sensory feedback was manipulated by perturbing the surface stiffness specified by audition and haptics in a congruent or incongruent manner. The compensatory changes in striking velocity were measured as the motor effects of the sensory perturbations, and sensory dominance was quantified by the asymmetry of congruency effects across audition and haptics. A pronounced dominance of haptics over audition suggested a superior utility of somatosensation developed through long-term experience with object exploration. Large interindividual differences in the motor effects of haptic perturbation potentially arose from a differential reliance on the type of tactual prediction error for which participants tend to compensate: vibrotactile force versus object deformation. Musical experience did not have much of an effect beyond a slightly greater reliance on object deformation in mallet percussionists. The bias toward haptics in the presence of crossmodal perturbations was greater when participants appeared to rely on object deformation feedback, suggesting a weaker association between haptically sensed object deformation and the acoustical structure of concomitant sound during everyday experience of actions upon objects.
- Published
- 2016
- Full Text
- View/download PDF
18. Predicting the timing of dynamic events through sound: Bouncing balls.
- Author
-
Gygi B, Giordano BL, Shafiro V, Kharkhurin A, and Zhang PX
- Subjects
- Acoustic Stimulation, Acoustics, Adolescent, Cues, Female, Humans, Male, Models, Psychological, Motion, Sound Localization, Sports Equipment, Time Factors, Young Adult, Anticipation, Psychological physiology, Pattern Recognition, Physiological physiology, Sound, Time Perception physiology
- Abstract
Dynamic information in acoustical signals produced by bouncing objects is often used by listeners to predict the objects' future behavior (e.g., hitting a ball). This study examined factors that affect the accuracy of motor responses to sounds of real-world dynamic events. In experiment 1, listeners heard 2-5 bounces from a tennis ball, ping-pong ball, basketball, or wiffle ball, and would tap to indicate the time of the next bounce in a series. Across ball types and number of bounces, listeners were extremely accurate in predicting the correct bounce time (CT) with a mean prediction error of only 2.58% of the CT. Prediction based on a physical model of bouncing events indicated that listeners relied primarily on temporal cues when estimating the timing of the next bounce, and to a lesser extent on the loudness and spectral cues. In experiment 2, the timing of each bounce pattern was altered to correspond to the bounce timing pattern of another ball, producing stimuli with contradictory acoustic cues. Nevertheless, listeners remained highly accurate in their estimates of bounce timing. This suggests that listeners can adapt their estimates of bouncing-object timing based on acoustic cues that provide the most veridical information about dynamic aspects of object behavior.
- Published
- 2015
- Full Text
- View/download PDF
19. The production and perception of emotionally expressive walking sounds: similarities between musical performance and everyday motor activity.
- Author
-
Giordano BL, Egermann H, and Bresin R
- Subjects
- Adult, Female, Humans, Male, Middle Aged, Auditory Perception, Emotions, Motor Activity, Music, Walking psychology
- Abstract
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo, and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.
- Published
- 2014
- Full Text
- View/download PDF
20. A language-familiarity effect for speaker discrimination without comprehension.
- Author
-
Fleming D, Giordano BL, Caldara R, and Belin P
- Subjects
- Adult, Female, Humans, Male, Comprehension physiology, Language, Speech Intelligibility physiology, Speech Perception physiology
- Abstract
The influence of language familiarity upon speaker identification is well established, to such an extent that it has been argued that "Human voice recognition depends on language ability" [Perrachione TK, Del Tufo SN, Gabrieli JDE (2011) Science 333(6042):595]. However, 7-mo-old infants discriminate speakers of their mother tongue better than they do foreign speakers [Johnson EK, Westrek E, Nazzi T, Cutler A (2011) Dev Sci 14(5):1002-1011] despite their limited speech comprehension abilities, suggesting that speaker discrimination may rely on familiarity with the sound structure of one's native language rather than the ability to comprehend speech. To test this hypothesis, we asked Chinese and English adult participants to rate speaker dissimilarity in pairs of sentences in English or Mandarin that were first time-reversed to render them unintelligible. Even in these conditions a language-familiarity effect was observed: Both Chinese and English listeners rated pairs of native-language speakers as more dissimilar than foreign-language speakers, despite their inability to understand the material. Our data indicate that the language familiarity effect is not based on comprehension but rather on familiarity with the phonology of one's native language. This effect may stem from a mechanism analogous to the "other-race" effect in face recognition.
- Published
- 2014
- Full Text
- View/download PDF
21. Automatic domain-general processing of sound source identity in the left posterior middle frontal gyrus.
- Author
-
Giordano BL, Pernet C, Charest I, Belizaire G, Zatorre RJ, and Belin P
- Subjects
- Acoustic Stimulation, Adult, Brain Mapping methods, Female, Functional Neuroimaging, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Young Adult, Attention physiology, Auditory Cortex physiology, Auditory Perception physiology, Frontal Lobe physiology, Sound Localization physiology
- Abstract
Identifying sound sources is fundamental to developing a stable representation of the environment in the face of variable auditory information. The cortical processes underlying this ability have received little attention. In two fMRI experiments, we investigated passive adaptation to (Exp. 1) and explicit discrimination of (Exp. 2) source identities for different categories of auditory objects (voices, musical instruments, environmental sounds). All cortical effects of source identity were independent of high-level category information, and were accounted for by sound-to-sound differences in low-level structure (e.g., loudness). A conjunction analysis revealed that the left posterior middle frontal gyrus (pMFG) adapted to identity repetitions during both passive listening and active discrimination tasks. These results indicate that the comparison of sound source identities in a stream of auditory stimulation recruits the pMFG in a domain-general way, i.e., independent of the sound category, based on information contained in the low-level acoustical structure. pMFG recruitment during both passive listening and explicit identity comparison tasks also suggests its automatic engagement in sound source identity processing., (Copyright © 2014 Elsevier Ltd. All rights reserved.)
- Published
- 2014
- Full Text
- View/download PDF
22. Gender differences in the temporal voice areas.
- Author
-
Ahrens MM, Awwad Shiekh Hasan B, Giordano BL, and Belin P
- Abstract
There is evidence not only for behavioral differences in voice perception between female and male listeners, but also recent suggestions of differences in neural correlates between genders. The fMRI functional voice localizer (comprising a univariate analysis contrasting stimulation with vocal vs. non-vocal sounds) is known to give robust estimates of the temporal voice areas (TVAs). However, there is growing interest in employing multivariate analysis approaches to fMRI data (e.g., multivariate pattern analysis; MVPA). The aim of the current study was to localize voice-related areas in both female and male listeners and to investigate whether brain maps may differ depending on the gender of the listener. After a univariate analysis, a random effects analysis was performed on female (n = 149) and male (n = 123) listeners and contrasts between them were computed. In addition, MVPA with a whole-brain searchlight approach was implemented and classification maps were entered into second-level permutation-based random effects models using statistical non-parametric mapping (SnPM; Nichols and Holmes, 2002). Gender differences were found only in the MVPA. Identified regions were located in the middle part of the middle temporal gyrus (bilateral) and the middle superior temporal gyrus (right hemisphere). Our results suggest differences in classifier performance between genders in response to the voice localizer, with higher classification accuracy from local BOLD signal patterns in several temporal-lobe regions in female listeners.
- Published
- 2014
- Full Text
- View/download PDF
23. Abstract encoding of auditory objects in cortical activity patterns.
- Author
-
Giordano BL, McAdams S, Zatorre RJ, Kriegeskorte N, and Belin P
- Subjects
- Acoustic Stimulation, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Temporal Lobe physiology, Young Adult, Auditory Perception physiology, Cerebral Cortex physiology
- Abstract
The human brain is thought to process auditory objects along a hierarchical temporal "what" stream that progressively abstracts object information from the low-level structure (e.g., loudness) as processing proceeds along the middle-to-anterior direction. Empirical demonstrations of abstract object encoding, independent of low-level structure, have relied on speech stimuli, and non-speech studies of object-category encoding (e.g., human vocalizations) often lack a systematic assessment of low-level information (e.g., vocalizations are highly harmonic). It is currently unknown whether abstract encoding constitutes a general functional principle that operates for auditory objects other than speech. We combined multivariate analyses of functional imaging data with an accurate analysis of the low-level acoustical information to examine the abstract encoding of non-speech categories. We observed abstract encoding of the living and human-action sound categories in the fine-grained spatial distribution of activity in the middle-to-posterior temporal cortex (e.g., planum temporale). Abstract encoding of auditory objects appears to extend to non-speech biological sounds and to operate in regions other than the anterior temporal lobe. Neural processes for the abstract encoding of auditory objects might have facilitated the emergence of speech categories in our ancestors.
- Published
- 2013
- Full Text
- View/download PDF
24. Singing numbers…in cognitive space--a dual-task study of the link between pitch, space, and numbers.
- Author
-
Fischer MH, Riello M, Giordano BL, and Rusconi E
- Subjects
- Adult, Female, Humans, Male, Reaction Time physiology, Young Adult, Cognition physiology, Mathematical Concepts, Pitch Perception physiology, Singing physiology, Space Perception physiology
- Abstract
We assessed the automaticity of spatial-numerical and spatial-musical associations by testing their intentionality and load sensitivity in a dual-task paradigm. In separate sessions, 16 healthy adults performed magnitude and pitch comparisons on sung numbers with variable pitch. Stimuli and response alternatives were identical, but the relevant stimulus attribute (pitch or number) differed between tasks. Concomitant tasks required retention of either color or location information. Results show that spatial associations of both magnitude and pitch are load sensitive and that the spatial association for pitch is more powerful than that for magnitude. These findings argue against the automaticity of spatial mappings in either stimulus dimension., (Copyright © 2013 Cognitive Science Society, Inc.)
- Published
- 2013
- Full Text
- View/download PDF
25. Perceptual evaluation of violins: a quantitative analysis of preference judgments by experienced players.
- Author
-
Saitis C, Giordano BL, Fritz C, and Scavone GP
- Subjects
- Acoustic Stimulation, Acoustics, Adult, Aged, Female, Humans, Male, Middle Aged, Observer Variation, Reproducibility of Results, Sound, Vibration, Young Adult, Auditory Perception, Judgment, Music
- Abstract
The overall goal of the research presented here is to better understand how players evaluate violins within the wider context of finding relationships between measurable vibrational properties of instruments and their perceived qualities. In this study, the reliability of skilled musicians to evaluate the qualities of a violin was examined. In a first experiment, violinists were allowed to freely play a set of different violins and were then asked to rank the instruments by preference. Results showed that players were self-consistent, but a large amount of inter-individual variability was present. A second experiment was then conducted to investigate the origin of inter-individual differences in the preference for violins and to measure the extent to which different attributes of the instrument influence preference. Again, results showed large inter-individual variations in the preference for violins, as well as in assessing various characteristics of the instruments. Despite the significant lack of agreement in preference and the variability in how different criteria are evaluated between individuals, violin players tend to agree on the relevance of sound "richness" and, to a lesser extent, "dynamic range" for determining preference.
- Published
- 2012
- Full Text
- View/download PDF
26. Identification of walked-upon materials in auditory, kinesthetic, haptic, and audio-haptic conditions.
- Author
-
Giordano BL, Visell Y, Yao HY, Hayward V, Cooperstock JR, and McAdams S
- Subjects
- Acoustic Stimulation, Adult, Bias, Construction Materials, Humans, Male, Perceptual Masking physiology, Photic Stimulation, Vibration, Wood, Auditory Perception physiology, Discrimination, Psychological physiology, Touch Perception physiology, Walking physiology
- Abstract
Locomotion generates multisensory information about walked-upon objects. How perceptual systems use such information to get to know the environment remains unexplored. The ability to identify solid (e.g., marble) and aggregate (e.g., gravel) walked-upon materials was investigated in auditory, haptic, or audio-haptic conditions, and in a kinesthetic condition where tactile information was perturbed with a vibromechanical noise. Overall, identification performance was better than chance in all experimental conditions, for both solids and the better-identified aggregates. Despite large mechanical differences between the response of solids and aggregates to locomotion, for both material categories discrimination was at its worst in the auditory and kinesthetic conditions and at its best in the haptic and audio-haptic conditions. An analysis of the dominance of sensory information in the audio-haptic context supported a focus on the most accurate modality, haptics, but only for the identification of solid materials. When identifying aggregates, response biases appeared to produce a focus on the least accurate modality--kinesthesia. When walking on loose materials such as gravel, individuals do not perceive surfaces by focusing on the most accurate modality, but by focusing on the modality that would most promptly signal postural instabilities.
- Published
- 2012
- Full Text
- View/download PDF
27. The Timbre Toolbox: extracting audio descriptors from musical signals.
- Author
-
Peeters G, Giordano BL, Susini P, Misdariis N, and McAdams S
- Subjects
- Cluster Analysis, Fourier Analysis, Programming Languages, Sound Spectrography, Time Factors, Acoustics, Models, Theoretical, Music, Signal Processing, Computer-Assisted, Software
- Abstract
The analysis of musical signals to extract audio descriptors that can potentially characterize their timbre has been disparate and often too focused on a particular small set of sounds. The Timbre Toolbox provides a comprehensive set of descriptors that can be useful in perceptual research, as well as in music information retrieval and machine-learning approaches to content-based retrieval in large sound databases. Sound events are first analyzed in terms of various input representations (short-term Fourier transform, harmonic sinusoidal components, an auditory model based on the equivalent rectangular bandwidth concept, the energy envelope). A large number of audio descriptors are then derived from each of these representations to capture temporal, spectral, spectrotemporal, and energetic properties of the sound events. Some descriptors are global, providing a single value for the whole sound event, whereas others are time-varying. Robust descriptive statistics are used to characterize the time-varying descriptors. To examine the information redundancy across audio descriptors, correlational analysis followed by hierarchical clustering is performed. This analysis suggests ten classes of relatively independent audio descriptors, showing that the Timbre Toolbox is a multidimensional instrument for the measurement of the acoustical structure of complex sound signals.
- Published
- 2011
- Full Text
- View/download PDF
28. Comparison of Methods for Collecting and Modeling Dissimilarity Data: Applications to Complex Sound Stimuli.
- Author
-
Giordano BL, Guastavino C, Murphy E, Ogg M, Smith BK, and McAdams S
- Abstract
Sorting procedures are frequently adopted as an alternative to dissimilarity ratings to measure the dissimilarity of large sets of stimuli in a comparatively short time. However, systematic empirical research on the consequences of this experiment-design choice is lacking. We carried out a behavioral experiment to assess the extent to which sorting procedures compare to dissimilarity ratings in terms of efficiency, reliability, and accuracy, and the extent to which data from different data-collection methods are redundant and are better fit by different distance models. Participants estimated the dissimilarity of either semantically charged environmental sounds or semantically neutral synthetic sounds. We considered free and hierarchical sorting and derived indications concerning the properties of constrained and truncated hierarchical sorting methods from hierarchical sorting data. Results show that the higher efficiency of sorting methods comes at a considerable cost in terms of data reliability and accuracy. This loss appears to be minimized with truncated hierarchical sorting methods that start from a relatively low number of groups of stimuli. Finally, variations in data-collection method differentially affect the fit of various distance models at the group-average and individual levels. On the basis of these results, we suggest adopting sorting as an alternative to dissimilarity-rating methods only when strictly necessary. We also suggest analyzing the raw behavioral dissimilarities, and avoiding modeling them with one single distance model.
- Published
- 2011
- Full Text
- View/download PDF
29. Vibration influences haptic perception of surface compliance during walking.
- Author
-
Visell Y, Giordano BL, Millet G, and Cooperstock JR
- Subjects
- Adult, Biomechanical Phenomena physiology, Calibration, Compliance, Feedback, Physiological, Humans, Physical Stimulation, Psychomotor Performance, Surface Properties, Perception physiology, Touch physiology, Vibration, Walking physiology
- Abstract
Background: The haptic perception of ground compliance is used for stable regulation of dynamic posture and the control of locomotion in diverse natural environments. Although rarely investigated in relation to walking, vibrotactile sensory channels are known to be active in the discrimination of material properties of objects and surfaces through touch. This study investigated how the perception of ground surface compliance is altered by plantar vibration feedback., Methodology/Principal Findings: Subjects walked in shoes over a rigid floor plate that provided plantar vibration feedback, and responded indicating how compliant it felt, either in subjective magnitude or via pairwise comparisons. In one experiment, the compliance of the floor plate was also varied. Results showed that perceived compliance of the plate increased monotonically with vibration feedback intensity, and depended to a lesser extent on the temporal or frequency distribution of the feedback. When both plate stiffness (inverse compliance) and vibration amplitude were manipulated, the effect persisted, with both factors contributing to compliance perception. A significant influence of vibration was observed even for amplitudes close to psychophysical detection thresholds., Conclusions/Significance: These findings reveal that vibrotactile sensory channels are highly salient to the perception of surface compliance, and suggest that correlations between vibrotactile sensory information and motor activity may be of broader significance for the control of human locomotion than has been previously acknowledged.
- Published
- 2011
- Full Text
- View/download PDF
30. Perceiving musical individuality: performer identification is dependent on performer expertise and expressiveness, but not on listener expertise.
- Author
-
Gingras B, Lagrandeur-Ponce T, Giordano BL, and McAdams S
- Subjects
- Adult, Concept Formation, Cues, Female, Humans, Male, Pitch Perception, Practice, Psychological, Time Perception, Young Adult, Auditory Perception, Discrimination Learning, Individuality, Music, Nonverbal Communication, Professional Competence, Recognition, Psychology
- Abstract
Can listeners distinguish unfamiliar performers playing the same piece on the same instrument? Professional performers recorded two expressive and two inexpressive interpretations of a short organ piece. Nonmusicians and musicians listened to these recordings and grouped together excerpts they thought had been played by the same performer. Both musicians and nonmusicians performed significantly above chance. Expressive interpretations were sorted more accurately than inexpressive ones, indicating that musical individuality is communicated more efficiently through expressive performances. Furthermore, individual performers' consistency and distinctiveness with respect to expressive patterns were shown to be excellent predictors of categorisation accuracy. Categorisation accuracy was superior for prize-winning performers compared to non-winners, suggesting a link between performer competence and the communication of musical individuality. Finally, results indicate that temporal information is sufficient to enable performer recognition, a finding that has broader implications for research on the detection of identity cues.
- Published
- 2011
- Full Text
- View/download PDF
31. The psychomechanics of simulated sound sources: material properties of impacted thin plates.
- Author
-
McAdams S, Roussarie V, Chaigne A, and Giordano BL
- Subjects
- Acoustic Stimulation, Adult, Elasticity, Equipment Design, Female, Humans, Male, Models, Theoretical, Motion, Pressure, Sound Spectrography, Time Factors, Vibration, Viscosity, Young Adult, Acoustics instrumentation, Auditory Perception, Cues, Psychoacoustics, Sound
- Abstract
Sounds convey information about the materials composing an object. Stimuli were synthesized using a computer model of impacted plates that varied their material properties: viscoelastic and thermoelastic damping and wave velocity (related to elasticity and mass density). The range of damping properties represented a continuum between materials with predominant viscoelastic and thermoelastic damping (glass and aluminum, respectively). The perceptual structure of the sounds was inferred from multidimensional scaling of dissimilarity judgments and from their categorization as glass or aluminum. Dissimilarity ratings revealed dimensions that were closely related to mechanical properties: a wave-velocity-related dimension associated with pitch and a damping-related dimension associated with timbre and duration. When asked to categorize sounds, however, listeners ignored the cues related to wave velocity and focused on cues related to damping. In both dissimilarity-rating and identification experiments, the results were independent of the material of the mallet striking the plate (rubber or wood). Listeners thus appear to select acoustical information that is reliable for a given perceptual task. Because the frequency changes responsible for detecting changes in wave velocity can also be due to changes in geometry, they are not as reliable for material identification as are damping cues.
- Published
- 2010
- Full Text
- View/download PDF
32. When ears drive hands: the influence of contact sound on reaching to grasp.
- Author
-
Castiello U, Giordano BL, Begliomini C, Ansuini C, and Grassi M
- Subjects
- Adult, Biomechanical Phenomena, Female, Fingers physiology, Humans, Male, Photic Stimulation, Ear, Hand physiology, Hand Strength physiology, Sound
- Abstract
Background: Most research on the roles of auditory information and its interaction with vision has focused on perceptual performance. Little is known about the effects of sound cues on visually guided hand movements., Methodology/Principal Findings: We recorded the sound produced by the fingers upon contact as participants grasped stimulus objects which were covered with different materials. Then, in a further session, the pre-recorded contact sounds were delivered to participants via headphones before or following the initiation of reach-to-grasp movements towards the stimulus objects. Reach-to-grasp movement kinematics were measured under the following conditions: (i) congruent, in which the presented contact sound and the contact sound elicited by the to-be-grasped stimulus corresponded; (ii) incongruent, in which the presented contact sound was different to that generated by the stimulus upon contact; (iii) control, in which a synthetic sound, not associated with a real event, was presented. Facilitation effects were found for congruent trials; interference effects were found for incongruent trials. In a second experiment, the upper and the lower parts of the stimulus were covered with different materials. The presented sound was always congruent with the material covering either the upper or the lower half of the stimulus. Participants consistently placed their fingers on the half of the stimulus that corresponded to the presented contact sound., Conclusions/Significance: Altogether these findings offer a substantial contribution to the current debate about the type of object representations elicited by auditory stimuli and on the multisensory nature of the sensorimotor transformations underlying action.
- Published
- 2010
- Full Text
- View/download PDF
33. Hearing living symbols and nonliving icons: category specificities in the cognitive processing of environmental sounds.
- Author
-
Giordano BL, McDonnell J, and McAdams S
- Subjects
- Acoustic Stimulation, Classification, Female, Humans, Male, Psycholinguistics, Reference Values, Set, Psychology, Young Adult, Cognition, Language, Pattern Recognition, Physiological, Recognition, Psychology, Sound
- Abstract
The neurocognitive processing of environmental sounds and linguistic stimuli shares common semantic resources and can lead to the activation of motor programs for the generation of the passively heard sound or speech. We investigated the extent to which the cognition of environmental sounds, like that of language, relies on symbolic mental representations independent of the acoustic input. In a hierarchical sorting task, we found that evaluation of nonliving sounds is consistently biased toward a focus on acoustical information. However, the evaluation of living sounds focuses spontaneously on sound-independent semantic information, but can rely on acoustical information after exposure to a context consisting of nonliving sounds. We interpret these results as support for a robust iconic processing strategy for nonliving sounds and a flexible symbolic processing strategy for living sounds., (Copyright 2010 Elsevier Inc. All rights reserved.)
- Published
- 2010
- Full Text
- View/download PDF
34. Integration of acoustical information in the perception of impacted sound sources: the role of information accuracy and exploitability.
- Author
-
Giordano BL, Rocchesso D, and McAdams S
- Subjects
- Acoustic Stimulation, Acoustics, Adolescent, Adult, Auditory Threshold, Differential Threshold, Discrimination, Psychological, Female, Humans, Male, Middle Aged, Sound, Sound Localization, Young Adult, Auditory Perception
- Abstract
Sound sources are perceived by integrating information from multiple acoustical features. The factors influencing the integration of information are largely unknown. We measured how the perceptual weighting of different features varies with the accuracy of information and with a listener's ability to exploit it. Participants judged the hardness of two objects whose interaction generates an impact sound: a hammer and a sounding object. In a first discrimination experiment, trained listeners focused on the most accurate information, although with greater difficulty when perceiving the hammer. We inferred a limited exploitability for the most accurate hammer-hardness information. In a second rating experiment, listeners focused on the most accurate information only when estimating sounding-object hardness. In a third rating experiment, we synthesized sounds by independently manipulating source properties that covaried in Experiments 1 and 2: sounding-object hardness and impact properties. Sounding-object hardness perception relied on the most accurate acoustical information, whereas impact properties more strongly influenced hammer-hardness perception. Overall, perceptual weight increased with the accuracy of acoustical information, although information that was not easily exploited was perceptually secondary, even if accurate., (Copyright 2010 APA, all rights reserved.)
- Published
- 2010
- Full Text
- View/download PDF
35. Spatial representation of pitch height: the SMARC effect.
- Author
-
Rusconi E, Kwan B, Giordano BL, Umiltà C, and Butterworth B
- Subjects
- Adult, Cognition, Female, Humans, Male, Mathematics, Reaction Time, Visual Perception, Pitch Perception, Space Perception
- Abstract
Through the preferential pairing of response positions to pitch, here we show that the internal representation of pitch height is spatial in nature and affects performance, especially in musically trained participants, when response alternatives are either vertically or horizontally aligned. The finding that our cognitive system maps pitch height onto an internal representation of space, which in turn affects motor performance even when this perceptual attribute is irrelevant to the task, extends previous studies on auditory perception and suggests an interesting analogy between music perception and mathematical cognition. Both the basic elements of mathematical cognition (i.e., numbers) and the basic elements of musical cognition (i.e., pitches) appear to be mapped onto a mental spatial representation in a way that affects motor performance.
- Published
- 2006
- Full Text
- View/download PDF
36. Material identification of real impact sounds: effects of size variation in steel, glass, wood, and plexiglass plates.
- Author
-
Giordano BL and McAdams S
- Subjects
- Adult, Female, Humans, Male, Middle Aged, Auditory Perception physiology, Glass, Polymethyl Methacrylate, Sound, Steel, Wood
- Abstract
Identification of the material of struck objects of variable size was investigated. Previous studies on this issue assumed recognition to be based on acoustical measures of damping. This assumption was tested by comparing the power of a damping measure in explaining identification data with that of several other acoustical descriptors. Listeners' performance was perfect with respect to gross material categories (steel-glass and wood-plexiglass) comprising materials of vastly different mechanical properties. Impaired performance was observed for materials within the same gross category, identification being based on the size of the objects alone. The damping descriptor accounted for the identification of the gross categories. However, other descriptors such as signal duration explained the results equally well. Materials within the same gross category were identified mainly on the basis of signal frequency. Overall, poor support for the relevance of damping to material perception was found. An analysis of the acoustical support for perfect material identification was carried out. Sufficient acoustical information for perfect performance was found. Thus, procedural biases for the origin of the effects of size could be discarded, pointing toward their cognitive, rather than methodological, nature. Identification performance was explained in terms of the regularities of the everyday acoustical environment.
- Published
- 2006
- Full Text
- View/download PDF