6 results for '"G. Vinodh Kumar"'
Search Results
2. Author response for 'Biophysical mechanisms governing large‐scale brain network dynamics underlying individual‐specific variability of perception'
- Author
- Dipanjan Roy, Shrey Dutta, Arpan Banerjee, Siddharth Talwar, and G. Vinodh Kumar
- Subjects
- Brain network, Computer science, Perception, Cognitive psychology
- Published
- 2020
3. Empirical Mode Decomposition Algorithms for Classification of Single-Channel EEG Manifesting McGurk Effect
- Author
- Arpan Banerjee, L. N. Sharma, Dipanjan Roy, Bipra Chatterjee, Arup Kumar Pal, G. Vinodh Kumar, and Cota Navin Gupta
- Subjects
- Audio signal, Computer science, Electroencephalography, Frequency band, Hilbert–Huang transform, Pattern recognition, Random forest, Perception, McGurk effect, Artificial intelligence
- Abstract
Brain state classification using electroencephalography (EEG) finds applications in both clinical and non-clinical contexts, such as detecting sleep states or identifying illusory percepts in the multisensory McGurk paradigm. The existing literature mostly considers recordings from EEG electrodes that cover the entire head. However, for real-world applications, wearable devices that comprise just one (or a few) channels are desirable, which makes the classification of EEG states even more challenging. With this as background, we applied variants of data-driven Empirical Mode Decomposition (EMD) to EEG recorded during the McGurk effect, an illusory perception of speech that arises when the movement of the lips does not match the audio signal, to classify whether the percept was affected by the visual cue or not. After applying a common pre-processing pipeline, we explored four EMD-based frameworks to extract EEG features, which were classified using Random Forest. Among the four alternatives, the most effective framework decomposes the ensemble average of each of the two classes of EEG trials into its intrinsic mode functions; these form the basis onto which the trials are projected to obtain features, whose classification resulted in accuracies of 63.66% using a single electrode and 75.85% using three electrodes. The frequency bands that play a vital role during audio-visual integration were also studied using traditional band-pass filters. Of all, the gamma band was found to be the most prominent, followed by the alpha and beta bands, corroborating findings from previous studies.
- Published
- 2020
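The best-performing framework in the abstract above (IMFs of each class's ensemble average used as a projection basis, followed by a Random Forest) can be sketched as follows. This is a toy illustration on synthetic two-class "trials": the deliberately minimal sifting loop with linear-interpolated envelopes stands in for a full EMD implementation (e.g. the PyEMD package), and the trial counts, frequencies, and parameters are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def emd(x, n_imfs=3, n_sifts=10):
    """Deliberately minimal EMD: sifting with linear-interpolated envelopes."""
    t = np.arange(x.size)
    residue = x.astype(float).copy()
    imfs = []
    for _ in range(n_imfs):
        h = residue.copy()
        for _ in range(n_sifts):
            # local maxima / minima of the current candidate IMF
            mx = np.flatnonzero((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])) + 1
            mn = np.flatnonzero((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:])) + 1
            if mx.size < 2 or mn.size < 2:
                break
            # subtract the mean of the upper and lower envelopes
            env_mean = (np.interp(t, mx, h[mx]) + np.interp(t, mn, h[mn])) / 2
            h = h - env_mean
        imfs.append(h)
        residue = residue - h
    return np.array(imfs)

# synthetic stand-ins for the two percept classes (e.g. illusory vs unisensory)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
A = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal((60, t.size))
B = np.sin(2 * np.pi * 18 * t) + 0.5 * rng.standard_normal((60, t.size))

# IMFs of each class's ensemble average form the projection basis
basis = np.vstack([emd(A.mean(axis=0)), emd(B.mean(axis=0))])  # (6, 256)
X = np.vstack([A, B]) @ basis.T        # project every trial onto the basis
y = np.r_[np.zeros(len(A)), np.ones(len(B))]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

With well-separated class frequencies the projections are highly discriminative; the paper's single-channel accuracies are lower because real McGurk EEG classes overlap far more than this toy example.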
4. Segregation and Integration of Cortical Information Processing Underlying Cross-Modal Perception
- Author
- Dipanjan Roy, Neeraj Kumar, G. Vinodh Kumar, and Arpan Banerjee
- Subjects
- Speech perception, Cognitive Neuroscience, Experimental and Cognitive Psychology, Electroencephalography, Stimulus (physiology), Perception, Sensory cue, Categorical perception, Crossmodal, Information processing, Sensory Systems, Ophthalmology, Computer Vision and Pattern Recognition, Psychology, Neuroscience
- Abstract
Visual cues from the speaker’s face influence the perception of speech. An example of this influence is demonstrated by the McGurk effect, where illusory (cross-modal) sounds are perceived following presentation of incongruent audio-visual (AV) stimuli. Previous studies report the engagement of specific, spatially distributed cortical modules during cross-modal perception. However, the limits of the underlying representational space and the cortical network mechanisms remain unclear. In this combined psychophysical and electroencephalography (EEG) study, the participants reported their perception while listening to a set of synchronous and asynchronous incongruent AV stimuli. We identified the neural representation of subjective cross-modal perception at different organizational levels: at specific locations in sensor space and at the level of the large-scale brain network estimated from between-sensor interactions. We identified an enhanced positivity in the event-related potential peaking around 300 ms after stimulus onset associated with cross-modal perception. At the spectral level, cross-modal perception involved an overall decrease in power at the frontal and temporal regions across multiple frequency bands and at all AV lags, along with increased power at the occipital scalp region for synchronous AV stimuli. At the level of large-scale neuronal networks, enhanced functional connectivity in the gamma band involving frontal regions served as a marker of AV integration. Thus, we report in a single study that segregation of information processing at individual brain locations and integration of information over candidate brain networks underlie multisensory speech perception.
- Published
- 2018
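The two sensor-level quantities this abstract reports, an event-related potential peaking around 300 ms and spectral band power, can be computed roughly as below. The data are synthetic; the deflection latency, sampling rate, and band edges are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 250                              # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)         # 1 s epoch, stimulus onset at t = 0

# synthetic trials: a positive deflection peaking near 300 ms plus noise
deflection = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
trials = deflection + rng.standard_normal((80, t.size))

# ERP: trial averaging cancels activity not phase-locked to the stimulus
erp = trials.mean(axis=0)
peak_s = t[np.argmax(erp)]            # latency of the positivity (s)

# band power: integrate the Welch PSD over a frequency band of interest
def band_power(x, fs, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=fs)
    mask = (f >= lo) & (f <= hi)
    return float(pxx[mask].sum() * (f[1] - f[0]))

gamma_power = band_power(trials[0], fs, 30, 45)
```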
5. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study
- Author
- Dipanjan Roy, Amit Kumar Jaiswal, Arpan Banerjee, Tamesh Halder, G. Vinodh Kumar, and Abhishek Mukherjee
- Subjects
- Speech perception, Electroencephalography, EEG, integration, perception, Categorical perception, Crossmodal, functional connectivity, temporal synchrony, Coherence (statistics), Superior temporal sulcus, multisensory, Neurocomputational speech processing, AV, General Psychology
- Abstract
Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (the McGurk effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception highlight the role of frontal and parietal regions, and of integrative brain sites in the vicinity of the superior temporal sulcus (STS), in multisensory speech perception. However, whether and how networks spanning the whole brain participate in multisensory perception remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusion of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha- and beta-band coherence underlying cross-modal (illusory) perception, compared to unisensory perception, in a temporal window of 300–600 ms following stimulus onset. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times, along with pre-stimulus decreases of lower-frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags.
Thus, our study indicates that the temporal integration underlying multisensory speech perception needs to be understood within the framework of large-scale functional brain network mechanisms, in addition to the established cortical loci of multisensory speech perception.
- Published
- 2016
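The connectivity measure this abstract describes, a time-frequency "global" coherence built from coherence across all sensor pairs, can be sketched as follows. This is a minimal illustration on synthetic 4-channel data with a shared 40 Hz (gamma) source switched on in the second half; the channel count, frequencies, and window parameters are assumptions, and a band-averaged mean over sensor pairs stands in for the paper's exact vector-sum formulation.

```python
import numpy as np
from itertools import combinations
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs, n_ch = 200, 4
t = np.arange(0, 4, 1 / fs)            # 4 s of 4-channel "EEG"

# shared 40 Hz (gamma) source present only in the second half
shared = np.sin(2 * np.pi * 40 * t) * (t >= 2)
x = 0.8 * shared + 0.5 * rng.standard_normal((n_ch, t.size))

def global_coherence(seg, fs, lo, hi):
    """Band coherence averaged over all sensor pairs in a segment."""
    vals = []
    for i, j in combinations(range(seg.shape[0]), 2):
        f, c = coherence(seg[i], seg[j], fs=fs, nperseg=128)
        vals.append(c[(f >= lo) & (f <= hi)].mean())
    return float(np.mean(vals))

early = global_coherence(x[:, : 2 * fs], fs, 38, 42)   # no shared source
late = global_coherence(x[:, 2 * fs :], fs, 38, 42)    # shared gamma source
```

Sliding this computation over successive windows yields a time course of global coherence, which is the kind of quantity the study compares across percepts and AV lags.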
6. Graph theoretic network analysis reveals protein pathways underlying cell death following neurotropic viral infection
- Author
- G. Vinodh Kumar, Anirban Basu, Sourish Ghosh, and Arpan Banerjee
- Subjects
- Programmed cell death, Cell Death, Computational biology, Bioinformatics, Interactome, Protein Interaction Mapping, Protein Interaction Maps, Rhabdoviridae Infections, Vesiculovirus, Mice, Animals, Neurons, FADD, Signal Transduction, Multidisciplinary
- Abstract
Complex protein networks underlie any cellular function. Certain proteins play a pivotal role in many network configurations, and disruption of their expression proves fatal to the cell. An efficient method to tease out such key proteins in a network is still unavailable. Here, we used graph-theoretic measures on protein-protein interaction data (the interactome) to extract biophysically relevant information about individual protein regulation and network properties, such as the formation of function-specific modules (sub-networks) of proteins. We took five major proteins involved in neuronal apoptosis following Chandipura virus (CHPV) infection as seed proteins and queried a database to create a meta-network of immediately interacting proteins (first-order network). Graph-theoretic measures were employed to rank the proteins in terms of their connectivity and the degree up to which they can be organized into smaller modules (hubs). We repeated the analysis on the second-order interactome, which additionally includes proteins connected directly with those of the first order. FADD and Casp-3 were maximally connected to other proteins in both analyses, indicating their importance in neuronal apoptosis. Thus, our analysis provides a blueprint for the detection and validation of protein networks disrupted by viral infections.
- Published
- 2015
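The ranking step this abstract describes, ordering proteins by connectivity and by how readily their neighbourhoods form modules, can be sketched with plain degree and clustering-coefficient measures. The edge list below is a small hypothetical apoptosis neighbourhood invented for illustration, not the curated interactome the study analyzed.

```python
from collections import defaultdict

# hypothetical toy interactome around apoptosis-related seed proteins
edges = [
    ("FADD", "Casp-3"), ("FADD", "Casp-8"), ("FADD", "TNFR1"),
    ("FADD", "TRADD"), ("Casp-3", "Casp-8"), ("Casp-3", "Casp-9"),
    ("Casp-3", "PARP1"), ("Casp-9", "Apaf-1"), ("TNFR1", "TRADD"),
]

# build an undirected adjacency map
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def clustering(v):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for p in nbrs for q in nbrs if p < q and q in adj[p])
    return 2 * links / (k * (k - 1))

# rank by degree (connectivity), breaking ties by local module formation
ranked = sorted(adj, key=lambda v: (len(adj[v]), clustering(v)), reverse=True)
```

In this toy graph FADD and Casp-3 come out on top, mirroring the paper's finding that they are maximally connected; the real analysis additionally used module detection on first- and second-order networks.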
Discovery Service for Jio Institute Digital Library