19 results for "Giordano BL"
Search Results
2. Bridging auditory perception and natural language processing with semantically informed deep neural networks.
- Author
Esposito M, Valente G, Plasencia-Calaña Y, Dumontier M, Giordano BL, and Formisano E
- Subjects
- Humans, Deep Learning, Sound, Auditory Perception physiology, Semantics, Natural Language Processing, Neural Networks, Computer
- Abstract
Sound recognition is effortless for humans but poses a significant challenge for artificial hearing systems. Deep neural networks (DNNs), especially convolutional neural networks (CNNs), have recently surpassed traditional machine learning in sound classification. However, current DNNs map sounds to labels using binary categorical variables, neglecting the semantic relations between labels. Cognitive neuroscience research suggests that human listeners exploit such semantic information besides acoustic cues. Hence, our hypothesis is that incorporating semantic information improves DNNs' sound recognition performance, emulating human behaviour. In our approach, sound recognition is framed as a regression problem, with CNNs trained to map spectrograms to continuous semantic representations from NLP models (Word2Vec, BERT, and CLAP text encoder). Two DNN types were trained: semDNN with continuous embeddings and catDNN with categorical labels, both with a dataset extracted from a collection of 388,211 sounds enriched with semantic descriptions. Evaluations across four external datasets confirmed the superiority of the semantic labeling of semDNN over catDNN, preserving higher-level relations. Importantly, an analysis of human similarity ratings for natural sounds showed that semDNN approximated human listener behaviour better than catDNN, other DNNs, and NLP models. Our work contributes to understanding the role of semantics in sound recognition, bridging the gap between artificial systems and human auditory perception. (An illustrative sketch of the regression setup follows this record.) (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
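The regression framing described in the abstract above can be illustrated compactly. The sketch below is a minimal, hypothetical stand-in, not the authors' architecture or training setup: random "spectrograms", a toy CNN, and 300-dimensional targets (a Word2Vec-like dimensionality assumed here) mapped by cosine-similarity decoding to a small lexicon of label embeddings.

```python
# Minimal sketch (assumptions throughout): train a small CNN to regress
# spectrograms onto continuous word embeddings, then decode a label by
# cosine similarity -- the regression framing the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM = 300  # Word2Vec-like dimensionality; an assumption, not from the paper

class SemDNNSketch(nn.Module):
    def __init__(self, emb_dim: int = EMB_DIM):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
        self.head = nn.Linear(16 * 8 * 8, emb_dim)

    def forward(self, spec):  # spec: (batch, 1, freq, time)
        return self.head(self.conv(spec).flatten(1))

torch.manual_seed(0)
label_emb = F.normalize(torch.randn(4, EMB_DIM), dim=1)  # stand-in label lexicon
specs = torch.randn(32, 1, 64, 64)                       # stand-in spectrograms
targets = label_emb[torch.randint(0, 4, (32,))]

model = SemDNNSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):  # a few steps only; an illustration, not real training
    opt.zero_grad()
    loss = 1 - F.cosine_similarity(model(specs), targets).mean()
    loss.backward()
    opt.step()

# Decode: nearest label embedding by cosine similarity.
pred = F.normalize(model(specs[:1]), dim=1)
print("predicted label index:", (pred @ label_emb.T).argmax(dim=1).item())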
3. Intermediate acoustic-to-semantic representations link behavioral and neural responses to natural sounds.
- Author
Giordano BL, Esposito M, Valente G, and Formisano E
- Subjects
- Humans, Acoustic Stimulation methods, Acoustics, Magnetic Resonance Imaging, Auditory Perception physiology, Brain Mapping methods, Semantics, Auditory Cortex physiology
- Abstract
Recognizing sounds entails the cerebral transformation of input waveforms into semantic representations. Although past research identified the superior temporal gyrus (STG) as a crucial cortical region, the computational fingerprint of these cerebral transformations remains poorly characterized. Here, we exploit a model comparison framework and contrast the ability of acoustic, semantic (continuous and categorical) and sound-to-event deep neural network representation models to predict perceived sound dissimilarity and 7 T human auditory cortex functional magnetic resonance imaging responses. We confirm that spectrotemporal modulations predict early auditory cortex (Heschl's gyrus) responses, and that auditory dimensions (for example, loudness, periodicity) predict STG responses and perceived dissimilarity. Sound-to-event deep neural networks predict Heschl's gyrus responses similarly to acoustic models but, notably, they outperform all competing models at predicting both STG responses and perceived dissimilarity. Our findings indicate that STG entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for. These representations are compositional in nature and relevant to behavior. (An RSA-style model-comparison sketch follows this record.) (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
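The model-comparison logic in the preceding abstract can be sketched as representational similarity analysis: build a representational dissimilarity matrix (RDM) per candidate model and rank-correlate each with a behavioral RDM. Everything below (feature dimensionalities, random data) is a made-up stand-in, not the study's models or data.

```python
# Minimal RSA-style model comparison sketch (illustrative only): correlate
# each candidate model's RDM with a behavioral dissimilarity matrix.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sounds = 20
behavior_rdm = pdist(rng.random((n_sounds, 5)))   # stand-in perceived dissimilarity
models = {
    "acoustic": rng.random((n_sounds, 40)),        # e.g. modulation features
    "semantic": rng.random((n_sounds, 300)),       # e.g. word embeddings
    "dnn_layer": rng.random((n_sounds, 128)),      # e.g. sound-to-event DNN layer
}
for name, feats in models.items():
    rho, _ = spearmanr(pdist(feats), behavior_rdm)  # rank correlation of RDMs
    print(f"{name}: Spearman rho = {rho:.3f}")
```

The same comparison can be run against fMRI searchlight or ROI RDMs in place of the behavioral one.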
4. Decoding sounds depicting hand-object interactions in primary somatosensory cortex.
- Author
Bailey KM, Giordano BL, Kaas AL, and Smith FW
- Subjects
- Animals, Touch physiology, Neurons physiology, Magnetic Resonance Imaging, Brain Mapping, Somatosensory Cortex diagnostic imaging, Somatosensory Cortex physiology, Hand
- Abstract
Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influence from both within- and across-modality connections. Recent work has shown that primary sensory areas can respond to, and in some cases discriminate, stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand-object interactions (e.g. bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from three categories: hand-object interactions, and the control categories of pure tones and animal vocalizations, while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand-object interaction sounds within SI, but not of either control category. Crucially, in hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand-object interactions than for pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand-object interactions elicits different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities, even to primary sensory areas. (A cross-validated decoding sketch follows this record.) (© The Author(s) 2022. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.)
- Published
- 2023
- Full Text
- View/download PDF
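A hedged sketch of the multivoxel pattern analysis named above: cross-validated linear classification of sound category from trial-wise voxel patterns in a region of interest. The data shapes, labels, and classifier choice are invented placeholders; the study's actual pipeline may differ.

```python
# Illustrative MVPA sketch: cross-validated linear classification of sound
# category from stand-in voxel patterns in an ROI.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 60, 200                  # stand-in SI ROI patterns
X = rng.normal(size=(n_trials, n_voxels))     # one pattern per trial
y = rng.integers(0, 2, size=n_trials)         # hand-object vs control sounds
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")  # chance here is 0.5
```

With random patterns like these, accuracy hovers around chance; a real effect shows up as reliably above-chance cross-validated accuracy.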
5. A national train-the-trainer program to enhance police response to veterans with mental health issues: Primary officer and trainer outcomes and material dissemination.
- Author
Weaver CM, Rosenthal J, Giordano BL, Schmidt C, Wickham RE, Mok C, Pettis T, and Clark S
- Subjects
- Humans, Mental Health, Risk Factors, Police, Veterans
- Abstract
Law enforcement officers (LEOs) may play the most important role in directing people in mental health crises into treatment rather than incarceration. While most military veterans will never experience a crisis interaction with LEOs, they represent an important at-risk target group for whom to enhance LEO response. The evidence supporting LEO crisis training models has important limitations: it stems from jurisdiction-limited studies and emphasizes LEOs who volunteer for mental health training. The current study reports the primary outcomes of a national (U.S.) large-scale mandated train-the-trainer program to enhance VA LEO response to military veterans with mental health issues. Multidisciplinary teams composed of VA LEOs, Veterans Justice Outreach Specialists, and mental health professionals (n = 245) were trained in two nested waves. Both trainers and endpoint LEOs (n = 1,284) improved from pretest to posttest in knowledge and skills in identifying psychological services and related treatment referral resources, and in cross-discipline collaboration, the latter of which showed some retention at 3-month follow-up. The findings support the potential for LEOs mandated to attend training to improve on important prerequisites to diverting people with mental health issues into care, and away from the criminal justice system. Such results may require professional trainers of LEOs who have themselves received relevant specific training. Potential cautions of such an approach, including inter-team differences and the potential for publication bias in the extant literature, are also elucidated by the current methodology. Links to all of the collaboratively developed curriculum materials from the current study are provided for use by qualified professionals. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
- Published
- 2022
- Full Text
- View/download PDF
6. What do we mean with sound semantics, exactly? A survey of taxonomies and ontologies of everyday sounds.
- Author
Giordano BL, de Miranda Azevedo R, Plasencia-Calaña Y, Formisano E, and Dumontier M
- Abstract
Taxonomies and ontologies for the characterization of everyday sounds have been developed in several research fields, including auditory cognition, soundscape research, artificial hearing, sound design, and medicine. Here, we surveyed 36 such knowledge organization systems, identified through a systematic literature search. To evaluate the semantic domains covered by these systems within a homogeneous framework, we introduced a comprehensive set of verbal sound descriptors (sound source properties; attributes of sensation; sound signal descriptors; onomatopoeias; music genres), which we used to manually label the surveyed descriptor classes. We reveal that most taxonomies and ontologies were developed to characterize higher-level semantic relations between sound sources in terms of the sound-generating objects and actions involved (what/how), or in terms of the environmental context (where). This indicates the current lack of a comprehensive ontology of everyday sounds that simultaneously covers all of these semantic aspects. Such an ontology would have a wide range of applications, from extending our scientific knowledge of auditory processes in the real world to developing artificial hearing systems., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2022 Giordano, de Miranda Azevedo, Plasencia-Calaña, Formisano and Dumontier.)
- Published
- 2022
- Full Text
- View/download PDF
7. A functional magnetic resonance imaging examination of audiovisual observation of a point-light string quartet using intersubject correlation and physical feature analysis.
- Author
Lillywhite A, Nijhof D, Glowinski D, Giordano BL, Camurri A, Cross I, and Pollick FE
- Abstract
We used functional magnetic resonance imaging (fMRI) to explore synchronized neural responses between observers of an audiovisual presentation of a string quartet performance during free viewing. Audio presentation was accompanied by visual presentation of the string quartet as stick figures observed from a static viewpoint. Brain data from 18 musical novices were obtained during audiovisual presentation of a 116 s performance of the allegro of String Quartet No. 14 in D minor by Schubert, played by the 'Quartetto di Cremona.' These data were analyzed using intersubject correlation (ISC). Results showed extensive ISC in auditory and visual areas as well as parietal cortex, frontal cortex and subcortical areas including the medial geniculate and basal ganglia (putamen). These results, obtained from a single fixed viewpoint of multiple musicians, show greater ISC than previous reports from unstructured group activity but are broadly consistent with related research that used ISC to explore listening to music or watching solo dance. A feature analysis examining the relationship between brain activity and physical features of the auditory and visual signals revealed that a large proportion of activity was related to auditory and visual processing, particularly in the superior temporal gyrus (STG), as well as in midbrain areas. Motor areas were also involved, potentially as a result of watching motion from the stick-figure display of musicians in the string quartet. These results reveal involvement of areas such as the putamen in processing complex musical performance and highlight the potential of using brief naturalistic stimuli to localize distinct brain areas and elucidate potential mechanisms underlying multisensory integration. (A leave-one-out ISC sketch follows this record.), Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2022 Lillywhite, Nijhof, Glowinski, Giordano, Camurri, Cross and Pollick.)
- Published
- 2022
- Full Text
- View/download PDF
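Intersubject correlation, as used above, reduces to a simple per-region computation: correlate each subject's response time course with the average time course of all remaining subjects. The sketch below assumes one stand-in regional time course per subject; real ISC analyses run this voxel- or parcel-wise with appropriate statistics.

```python
# Leave-one-out intersubject correlation (ISC) sketch for one region:
# correlate each subject's time course with the mean of all others.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_tp = 18, 116                    # 18 novices; 116 samples as a stand-in
ts = rng.normal(size=(n_subj, n_tp))      # stand-in regional time courses

isc = []
for s in range(n_subj):
    others = ts[np.arange(n_subj) != s].mean(axis=0)
    isc.append(np.corrcoef(ts[s], others)[0, 1])
print(f"mean leave-one-out ISC: {np.mean(isc):.3f}")
```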
8. Group-level inference of information-based measures for the analyses of cognitive brain networks from neurophysiological data.
- Author
Combrisson E, Allegra M, Basanisi R, Ince RAA, Giordano BL, Bastin J, and Brovelli A
- Subjects
- Cognition, Humans, Neuroimaging methods, Reproducibility of Results, Brain physiology, Brain Mapping methods
- Abstract
The reproducibility crisis in neuroimaging, and in particular in the case of underpowered studies, has introduced doubts about our ability to reproduce, replicate and generalize findings. As a response, we have seen the emergence of suggested guidelines and principles for neuroscientists, known as Good Scientific Practice, for conducting more reliable research. Still, every study remains almost unique in its combination of analytical and statistical approaches. While this is understandable considering the diversity of designs and brain data recordings, it also represents a striking point against reproducibility. Here, we propose a non-parametric permutation-based statistical framework, primarily designed for neurophysiological data, in order to perform group-level inferences on non-negative measures of information, encompassing metrics from information theory, machine learning or measures of distance. The framework supports both fixed- and random-effect models to adapt to inter-individual and inter-session variability. Using numerical simulations, we compared the accuracy of both group models in retrieving ground truth, as well as test- and cluster-wise corrections for multiple comparisons. We then reproduced and extended existing results using both spatially uniform MEG and non-uniform intracranial neurophysiological data. We showed how the framework can be used to extract stereotypical task- and behavior-related effects across the population, covering scales from the local level of brain regions, through inter-areal functional connectivity, to measures summarizing network properties. We also present an open-source Python toolbox called Frites that includes the proposed statistical pipeline, using information-theoretic metrics such as single-trial functional connectivity estimations for the extraction of cognitive brain networks. Taken together, we believe that this framework deserves careful attention, as its robustness and flexibility could be the starting point toward the uniformization of statistical approaches. (A sketch of such a permutation test follows this record.), Competing Interests: Declaration of Competing Interest The authors declare that there is no conflict of interests regarding the publication of this paper., (Copyright © 2022 The Author(s). Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
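A minimal sketch of the permutation logic described above, using a squared correlation as a stand-in for a non-negative information measure: shuffle the stimulus within each subject to build a null distribution, then control the family-wise error rate with the maximum statistic across regions. This mirrors the spirit of the framework, not the Frites implementation.

```python
# Group-level permutation test sketch on a non-negative effect measure.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_trials, n_rois, n_perm = 12, 100, 8, 200
stim = rng.normal(size=(n_subj, n_trials))
data = rng.normal(size=(n_subj, n_trials, n_rois))
data[:, :, 0] += 0.8 * stim                        # ground-truth effect in ROI 0

def effect(x, y):
    # non-negative effect size: squared Pearson correlation (a stand-in for MI)
    return np.corrcoef(x, y)[0, 1] ** 2

def group_effect(stimuli):
    # fixed-effect group statistic: mean effect across subjects, per ROI
    return np.mean([[effect(stimuli[s], data[s, :, r]) for r in range(n_rois)]
                    for s in range(n_subj)], axis=0)

obs = group_effect(stim)
null_max = np.empty(n_perm)
for p in range(n_perm):
    shuffled = np.stack([rng.permutation(stim[s]) for s in range(n_subj)])
    null_max[p] = group_effect(shuffled).max()     # max across ROIs -> FWER control

pvals = (null_max[None, :] >= obs[:, None]).mean(axis=1)
print("corrected p per ROI:", np.round(pvals, 3))
```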
9. Functionally homologous representation of vocalizations in the auditory cortex of humans and macaques.
- Author
Bodin C, Trapeau R, Nazarian B, Sein J, Degiovanni X, Baurberg J, Rapha E, Renaud L, Giordano BL, and Belin P
- Subjects
- Acoustic Stimulation, Animals, Auditory Perception physiology, Brain Mapping, Humans, Macaca, Magnetic Resonance Imaging, Primates, Vocalization, Animal physiology, Auditory Cortex physiology
- Abstract
How the evolution of speech has transformed the human auditory cortex compared to that of other primates remains largely unknown. While primary auditory cortex is organized largely similarly in humans and macaques,1 the picture is much less clear at higher levels of the anterior auditory pathway,2 particularly regarding the processing of conspecific vocalizations (CVs). A "voice region" similar to the human voice-selective areas3,4 has been identified in the macaque right anterior temporal lobe with functional MRI;5 however, its anatomical localization, seemingly inconsistent with that of the human temporal voice areas (TVAs), has suggested a "repositioning of the voice area" in recent human evolution.6 Here we report a functional homology in the cerebral processing of vocalizations by macaques and humans, using comparative fMRI and a condition-rich auditory stimulation paradigm. We find that the anterior temporal lobe of both species possesses cortical voice areas that are bilateral and not only prefer conspecific vocalizations but also implement a representational geometry categorizing them apart from all other sounds in a species-specific but homologous manner. These results reveal a more similar functional organization of higher-level auditory cortex in macaques and humans than currently known. (A category-separation sketch follows this record.), Competing Interests: Declaration of interests The authors declare no competing interests., (Copyright © 2021 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
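The "representational geometry categorizing them apart from all other sounds" can be quantified with a simple index: mean between-category minus mean within-category dissimilarity in an RDM. The sketch below uses invented response patterns; the study's actual analyses are more elaborate.

```python
# Sketch of a category-separation index for conspecific vocalizations (CVs).
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
n_cv, n_other = 12, 24
patterns = np.vstack([rng.normal(1.0, 1, (n_cv, 50)),      # stand-in CV responses
                      rng.normal(0.0, 1, (n_other, 50))])  # other sounds
rdm = squareform(pdist(patterns))
is_cv = np.arange(n_cv + n_other) < n_cv

within = rdm[np.ix_(is_cv, is_cv)]
between = rdm[np.ix_(is_cv, ~is_cv)]
idx = between.mean() - within[np.triu_indices(n_cv, k=1)].mean()
print(f"CV category-separation index: {idx:.3f}")  # > 0 means CVs cluster apart
```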
10. The representational dynamics of perceived voice emotions evolve from categories to dimensions.
- Author
Giordano BL, Whiting C, Kriegeskorte N, Kotz SA, Gross J, and Belin P
- Subjects
- Acoustic Stimulation methods, Anger, Humans, Voice physiology, Arousal physiology, Emotions physiology, Speech Perception physiology
- Abstract
Long-standing affective science theories conceive the perception of emotional stimuli either as discrete categories (for example, an angry voice) or continuous dimensional attributes (for example, an intense and negative vocal emotion). Which position provides a better account is still widely debated. Here we contrast the positions to account for acoustics-independent perceptual and cerebral representational geometry of perceived voice emotions. We combined multimodal imaging of the cerebral response to heard vocal stimuli (using functional magnetic resonance imaging and magneto-encephalography) with post-scanning behavioural assessment of voice emotion perception. By using representational similarity analysis, we find that categories prevail in perceptual and early (less than 200 ms) frontotemporal cerebral representational geometries and that dimensions impinge predominantly on a later limbic-temporal network (at 240 ms and after 500 ms). These results reconcile the two opposing views by reframing the perception of emotions as the interplay of cerebral networks with different representational dynamics that emphasize either categories or dimensions., (© 2021. The Author(s), under exclusive licence to Springer Nature Limited.)
- Published
- 2021
- Full Text
- View/download PDF
11. Gender and Generational Differences in the Internalized Homophobia Questionnaire: An Alignment IRT Analysis.
- Author
Wickham RE, Gutierrez R, Giordano BL, Rostosky SS, and Riggle EDB
- Subjects
- Bisexuality, Female, Gender Identity, Humans, Male, Surveys and Questionnaires, United States, Homophobia, Sexual and Gender Minorities
- Abstract
Internalized homophobia (IH) refers to negative attitudes and stereotypes that a lesbian, gay, or bisexual (LGB) person may hold regarding their own sexual identity. Recent sociocultural changes in attitudes and policies affecting LGB people generally reflect broader acceptance of sexual minorities, and may influence the manner in which LGB people experience IH. These experiences should be reflected in the measurement properties of instruments designed to assess IH. This study utilized data from three different samples (N = 3,522) of LGB individuals residing in the United States to examine the invariance of a common self-report IH measure by gender identity (Female, Male) and age cohort (Boomers, Generation X, and Millennials). Multigroup item response theory-differential item functioning (IRT-DIF) analysis using the alignment method revealed that 6 of the 9 Internalized Homophobia Scale items exhibited differential functioning across gender and generation. Latent scores based on the invariant items suggested that Male and Female Boomers exhibited the lowest level of latent IH, relative to the other cohorts.
- Published
- 2021
- Full Text
- View/download PDF
12. The perception of caricatured emotion in voice.
- Author
Whiting CM, Kotz SA, Gross J, Giordano BL, and Belin P
- Subjects
- Anger, Emotions, Facial Expression, Humans, Social Perception, Voice
- Abstract
Affective vocalisations such as screams and laughs can convey strong emotional content without verbal information. Previous research using morphed vocalisations (e.g. 25% fear/75% anger) has revealed categorical perception of emotion in voices, showing sudden shifts at emotion category boundaries. However, it is currently unknown how further modulation of vocalisations beyond the veridical emotion (e.g. 125% fear) affects perception. Caricatured facial expressions produce emotions that are perceived as more intense and distinctive, with faster recognition relative to the original and anti-caricatured (e.g. 75% fear) emotions, but a similar effect using vocal caricatures has not been previously examined. Furthermore, caricatures can play a key role in assessing how distinctiveness is identified, in particular by evaluating accounts of emotion perception with reference to prototypes (distance from the central stimulus) and exemplars (density of the stimulus space). Stimuli consisted of four emotions (anger, disgust, fear, and pleasure) morphed at 25% intervals between a neutral expression and each emotion from 25% to 125%, and between each pair of emotions. Emotion perception was assessed using emotion intensity ratings, valence and arousal ratings, speeded categorisation and paired similarity ratings. We report two key findings: 1) across tasks, there was a strongly linear effect of caricaturing, with caricatured emotions (125%) perceived as higher in emotion intensity and arousal, and recognised faster compared to the original emotion (100%) and anti-caricatures (25%-75%); 2) our results reveal evidence for a unique contribution of a prototype-based account in emotion recognition. We show for the first time that vocal caricature effects are comparable to those found previously with facial caricatures. The set of caricatured vocalisations provided here opens a promising line of research for investigating vocal affect perception and emotion-processing deficits in clinical populations. (The morphing arithmetic is sketched after this record.), (Copyright © 2020 The Authors. Published by Elsevier B.V. All rights reserved.)
- Published
- 2020
- Full Text
- View/download PDF
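Morphing and caricaturing, as used above, amount to linear interpolation and extrapolation along the neutral-to-emotion axis in some feature space. A toy sketch with invented feature values (the real stimuli were voice morphs generated with dedicated software):

```python
# Caricaturing as linear extrapolation in feature space:
# morph(alpha) = neutral + alpha * (emotion - neutral); alpha > 1 caricatures.
import numpy as np

neutral = np.array([0.2, 0.5, 0.1])  # stand-in acoustic features (e.g. f0, rate)
fear = np.array([0.9, 0.8, 0.6])

for alpha in [0.25, 0.75, 1.00, 1.25]:  # anti-caricature ... caricature
    morph = neutral + alpha * (fear - neutral)
    print(f"{int(alpha * 100):>3}% fear:", np.round(morph, 2))
```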
13. Causal Inference in the Multisensory Brain.
- Author
Cao Y, Summerfield C, Park H, Giordano BL, and Kayser C
- Subjects
- Acoustic Stimulation, Adult, Bayes Theorem, Female, Humans, Magnetoencephalography, Male, Models, Neurological, Models, Theoretical, Photic Stimulation, Prefrontal Cortex physiology, Young Adult, Auditory Perception physiology, Decision Making physiology, Frontal Lobe physiology, Parietal Lobe physiology, Temporal Lobe physiology, Visual Perception physiology
- Abstract
When combining information across different senses, humans need to flexibly select cues of a common origin while avoiding distraction from irrelevant inputs. The brain could solve this challenge using a hierarchical principle: rapidly deriving a fused sensory estimate for computational expediency and, later and if required, filtering out irrelevant signals based on the inferred sensory cause(s). Analyzing time- and source-resolved human magnetoencephalographic data, we unveil a systematic spatiotemporal cascade of the relevant computations, starting with early segregated unisensory representations, continuing with sensory fusion in parietal-temporal regions, and culminating as causal inference in the frontal lobe. Our results reconcile previous computational accounts of multisensory perception by showing that prefrontal cortex guides flexible integrative behavior based on candidate representations established in sensory and association cortices, thereby framing multisensory integration in the generalized context of adaptive behavior. (The normative causal-inference computation is sketched after this record.), (Copyright © 2019 Elsevier Inc. All rights reserved.)
- Published
- 2019
- Full Text
- View/download PDF
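The hierarchical computation described above follows the normative Bayesian causal-inference model for cue combination (Körding et al., 2007): infer the posterior probability of a common cause, then average the fused and segregated estimates accordingly. The numbers below are illustrative parameters, not values from this study.

```python
# Bayesian causal inference sketch for one audiovisual trial.
import numpy as np

xa, xv = 1.0, 0.4             # noisy auditory / visual measurements
va, vv, vp = 0.10, 0.05, 1.0  # sensory noise variances; prior variance (mean 0)
p_common = 0.5                # prior probability of a common cause

# Likelihoods with the source location(s) integrated out analytically.
z = va * vv + va * vp + vv * vp
like_c1 = np.exp(-0.5 * ((xa - xv) ** 2 * vp + xa ** 2 * vv + xv ** 2 * va) / z) \
          / (2 * np.pi * np.sqrt(z))
like_c2 = (np.exp(-0.5 * xa ** 2 / (va + vp)) / np.sqrt(2 * np.pi * (va + vp))
           * np.exp(-0.5 * xv ** 2 / (vv + vp)) / np.sqrt(2 * np.pi * (vv + vp)))

post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

s_fused = (xa / va + xv / vv) / (1 / va + 1 / vv + 1 / vp)  # common-cause estimate
s_aud = (xa / va) / (1 / va + 1 / vp)                       # segregated estimate
s_hat = post_c1 * s_fused + (1 - post_c1) * s_aud           # model averaging
print(f"P(common cause) = {post_c1:.2f}, auditory estimate = {s_hat:.2f}")
```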
14. Contributions of local speech encoding and functional connectivity to audio-visual speech perception.
- Author
Giordano BL, Ince RAA, Gross J, Schyns PG, Panzeri S, and Kayser C
- Subjects
- Adolescent, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult, Auditory Perception, Frontal Lobe physiology, Speech Perception, Temporal Lobe physiology, Visual Perception
- Abstract
Seeing a speaker's face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker's face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments.
- Published
- 2017
- Full Text
- View/download PDF
15. A statistical framework for neuroimaging data analysis based on mutual information estimated via a Gaussian copula.
- Author
Ince RA, Giordano BL, Kayser C, Rousselet GA, Gross J, and Schyns PG
- Subjects
- Computer Simulation, Electroencephalography, Entropy, Humans, Sensitivity and Specificity, Brain diagnostic imaging, Brain physiology, Brain Mapping, Information Theory, Neuroimaging methods, Normal Distribution
- Abstract
We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. (A minimal copula-transform sketch follows this record.) Hum Brain Mapp 38:1541-1573, 2017. (© 2016 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.)
- Published
- 2017
- Full Text
- View/download PDF
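The estimator's core idea can be sketched compactly: transform each variable to a standard normal via its empirical CDF (the copula transform), then apply the closed-form Gaussian mutual information. This toy version handles only two 1-D continuous variables; the paper's open-source code covers the general multivariate and discrete cases.

```python
# Gaussian-copula mutual information (GCMI) sketch for two continuous variables.
import numpy as np
from scipy.stats import norm, rankdata

def copnorm(x):
    # empirical CDF -> standard normal scores (the copula transform)
    return norm.ppf(rankdata(x) / (len(x) + 1))

def gcmi_cc(x, y):
    # MI (bits) via the closed form for bivariate Gaussians: -0.5*log2(1 - r^2)
    cx, cy = copnorm(x), copnorm(y)
    r = np.corrcoef(cx, cy)[0, 1]
    return -0.5 * np.log2(1 - r ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x + rng.normal(size=1000)              # dependent variable
print(f"GCMI estimate: {gcmi_cc(x, y):.3f} bits")
```

Because the copula transform depends only on ranks, the estimate is robust to monotonic transformations and outliers in the marginals.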
16. Vibrotactile Sensitivity in Active Touch: Effect of Pressing Force.
- Author
Papetti S, Jarvelainen H, Giordano BL, Schiesser S, and Frohlich M
- Subjects
- Humans, Mechanical Phenomena, Vibration, Fingers physiology, Sensory Thresholds physiology, Touch physiology
- Abstract
An experiment was conducted to study the effects of force produced by active touch on vibrotactile perceptual thresholds. The task consisted of pressing the fingertip against a flat rigid surface that provided either sinusoidal or broadband vibration. Three force levels were considered, ranging from a light touch to a hard press. Finger contact areas were measured during the experiment, showing a positive correlation with the respective applied forces. Significant effects on thresholds were found for vibration type and force level. Moreover, possibly due to the concurrent effect of large (unconstrained) finger contact areas, active pressing forces, and long-duration stimuli, the measured perceptual thresholds are considerably lower than those previously reported in the literature.
- Published
- 2017
- Full Text
- View/download PDF
17. The dominance of haptics over audition in controlling wrist velocity during striking movements.
- Author
Cao Y, Giordano BL, Avanzini F, and McAdams S
- Subjects
- Adolescent, Adult, Female, Humans, Male, Physical Stimulation methods, Young Adult, Acoustic Stimulation methods, Auditory Perception physiology, Movement physiology, Wrist physiology
- Abstract
Skilled interactions with sounding objects, such as drumming, rely on resolving the uncertainty in the acoustical and tactual feedback signals generated by vibrating objects. Uncertainty may arise from mis-estimation of the objects' geometry-independent mechanical properties, such as surface stiffness. How multisensory information feeds back into the fine-tuning of sound-generating actions remains unexplored. Participants (percussionists, non-percussion musicians, or non-musicians) held a stylus and learned to control their wrist velocity while repeatedly striking a virtual sounding object whose surface stiffness was under computer control. Sensory feedback was manipulated by perturbing the surface stiffness specified by audition and haptics in a congruent or incongruent manner. The compensatory changes in striking velocity were measured as the motor effects of the sensory perturbations, and sensory dominance was quantified by the asymmetry of congruency effects across audition and haptics. A pronounced dominance of haptics over audition suggested a superior utility of somatosensation developed through long-term experience with object exploration. Large interindividual differences in the motor effects of haptic perturbation potentially arose from a differential reliance on the type of tactual prediction error for which participants tend to compensate: vibrotactile force versus object deformation. Musical experience did not have much of an effect beyond a slightly greater reliance on object deformation in mallet percussionists. The bias toward haptics in the presence of crossmodal perturbations was greater when participants appeared to rely on object deformation feedback, suggesting a weaker association between haptically sensed object deformation and the acoustical structure of concomitant sound during everyday experience of actions upon objects.
- Published
- 2016
- Full Text
- View/download PDF
18. Predicting the timing of dynamic events through sound: Bouncing balls.
- Author
Gygi B, Giordano BL, Shafiro V, Kharkhurin A, and Zhang PX
- Subjects
- Acoustic Stimulation, Acoustics, Adolescent, Cues, Female, Humans, Male, Models, Psychological, Motion, Sound Localization, Sports Equipment, Time Factors, Young Adult, Anticipation, Psychological physiology, Pattern Recognition, Physiological physiology, Sound, Time Perception physiology
- Abstract
Dynamic information in acoustical signals produced by bouncing objects is often used by listeners to predict the objects' future behavior (e.g., hitting a ball). This study examined factors that affect the accuracy of motor responses to sounds of real-world dynamic events. In experiment 1, listeners heard 2-5 bounces from a tennis ball, ping-pong ball, basketball, or wiffle ball, and tapped to indicate the time of the next bounce in the series. Across ball types and numbers of bounces, listeners were extremely accurate in predicting the correct bounce time (CT), with a mean prediction error of only 2.58% of the CT. Predictions based on a physical model of bouncing events indicated that listeners relied primarily on temporal cues when estimating the timing of the next bounce, and to a lesser extent on loudness and spectral cues. In experiment 2, the timing of each bounce pattern was altered to correspond to the bounce timing pattern of another ball, producing stimuli with contradictory acoustic cues. Nevertheless, listeners remained highly accurate in their estimates of bounce timing. This suggests that listeners can adapt their estimates of bouncing-object timing based on the acoustic cues that provide the most veridical information about dynamic aspects of object behavior. (The bounce-timing physics is sketched after this record.)
- Published
- 2015
- Full Text
- View/download PDF
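The physical regularity the listeners exploited is easy to state: with a coefficient of restitution e, each flight time is e times the previous one, so inter-bounce intervals decay geometrically and the next bounce time is predictable from the preceding intervals. A sketch with illustrative numbers (not the study's measured values):

```python
# Bounce-timing sketch: geometric decay of inter-bounce intervals.
import numpy as np

e = 0.8                                   # coefficient of restitution (assumed)
t0 = 0.50                                 # first inter-bounce interval (s)
intervals = t0 * e ** np.arange(5)        # each flight time is e * the previous
bounce_times = np.cumsum(intervals)
predicted_next = bounce_times[-1] + intervals[-1] * e
print("bounce times (s):", np.round(bounce_times, 3))
print(f"predicted next bounce at {predicted_next:.3f} s")
```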
19. The production and perception of emotionally expressive walking sounds: similarities between musical performance and everyday motor activity.
- Author
Giordano BL, Egermann H, and Bresin R
- Subjects
- Adult, Female, Humans, Male, Middle Aged, Auditory Perception, Emotions, Motor Activity, Music, Walking psychology
- Abstract
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between the musical expression of emotions and the expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor-origin hypothesis for the musical expression of emotions. (The tempo features are sketched after this record.)
- Published
- 2014
- Full Text
- View/download PDF
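The tempo features named above can be computed directly from sound-onset times: tempo from the mean inter-onset interval (IOI) and tempo regularity from the variability of the IOIs. The onset values below are made up for illustration; the study's actual feature definitions may differ.

```python
# Tempo and tempo-regularity sketch from footstep onset times.
import numpy as np

onsets = np.array([0.00, 0.52, 1.01, 1.55, 2.04, 2.58])  # stand-in onsets (s)
ioi = np.diff(onsets)
tempo_bpm = 60.0 / ioi.mean()
regularity_cv = ioi.std(ddof=1) / ioi.mean()   # lower CV = more regular walking
print(f"tempo: {tempo_bpm:.1f} steps/min, IOI CV: {regularity_cv:.3f}")
```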