127,827 results for "Multimodal"
Search Results
2. Myopic macular pits: a case series with multimodal imaging
- Author
-
Giovanni Greaves, Srinivas R. Sadda, Meira Fogel Levin, David Sarraf, K. Bailey Freund, and Frederic Gunnemann
- Subjects
Multimodal imaging, medicine.medical_specialty, genetic structures, medicine.diagnostic_test, business.industry, Chorioretinal atrophy, Retinal, General Medicine, Optical coherence tomography angiography, Fluorescein angiography, eye diseases, Sclera, Ophthalmology, chemistry.chemical_compound, medicine.anatomical_structure, Optical coherence tomography, chemistry, medicine, Near infrared reflectance, sense organs, business
- Abstract
Objective To characterize the multimodal retinal findings of myopic macular pits, a feature of myopic degeneration. Methods A case series of patients with myopic macular pits was studied with multimodal imaging including color fundus photography, fundus autofluorescence (FAF), near infrared reflectance (NIR), spectral domain optical coherence tomography (OCT), optical coherence tomography angiography (OCTA), fluorescein angiography (FA) and indocyanine green angiography (ICG). Results Nine eyes of 6 patients with myopic macular pits were examined. Four patients presented with multiple pits and 3 with bilateral involvement. All pits were localized in a region of severe macular chorioretinal atrophy associated with myopic posterior staphyloma. In 3 eyes, the entrance of the posterior ciliary artery through the sclera was noted at the base of the pit. Schisis overlying or adjacent to the pit was identified in 3 patients. Conclusion Myopic macular pits are an additional rare sign of myopic degeneration, developing in regions of posterior staphyloma complicated by severe chorioretinal atrophy and thin sclera.
- Published
- 2023
- Full Text
- View/download PDF
3. Graph Fusion Network-Based Multimodal Learning for Freezing of Gait Detection
- Author
-
Mohammed Bennamoun, Zhiyong Wang, Kun Hu, Kaylena A. Ehgoetz Martens, Ah Chung Tsoi, Markus Hagenbuchner, and Simon J.G. Lewis
- Subjects
Modality (human–computer interaction), Modalities, genetic structures, Artificial neural network, Computer Networks and Communications, business.industry, Computer science, Machine learning, computer.software_genre, Computer Science Applications, Multimodal learning, Gait (human), Artificial Intelligence, Redundancy (engineering), Adjacency list, Artificial intelligence, business, Representation (mathematics), computer, Software
- Abstract
Freezing of gait (FoG) is identified as a sudden and brief episode of movement cessation despite the intention to continue walking. It is one of the most disabling symptoms of Parkinson's disease (PD) and often leads to falls and injuries. Many computer-aided FoG detection methods have been proposed that use data collected from unimodal sources, such as motion sensors, pressure sensors, and video cameras. However, few multimodal methods have been proposed to maximize the value of all the information collected from different modalities in clinical assessments and improve FoG detection performance. Therefore, in this study, a novel end-to-end deep architecture, namely the graph fusion neural network (GFN), is proposed for multimodal learning-based FoG detection by combining footstep pressure maps and video recordings. GFN constructs multimodal graphs by treating the encoded features of each modality as vertex-level inputs and measures their adjacency patterns to construct complementary FoG representations, thus reducing representation redundancy among different modalities. In addition, since GFN is devised to process multimodal graphs of arbitrary structures, it is expected to achieve superior performance on inputs with missing modalities, compared to the alternative unimodal methods. A multimodal FoG dataset was collected, which included clinical assessment videos and footstep pressure sequences of 340 trials from 20 PD patients. Our proposed GFN demonstrates great promise for multimodal FoG detection, with an area under the curve (AUC) of 0.882. To the best of our knowledge, this is one of the first studies to utilize multimodal learning for automated FoG detection, which offers significant opportunities for better patient assessments and clinical trials in the future.
- Published
- 2023
- Full Text
- View/download PDF
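The fusion idea in the GFN abstract above (encode each modality, treat the encodings as graph vertices, and measure their adjacency to build a complementary representation) can be sketched in a few lines. This is a minimal numpy illustration under assumed feature dimensions, not the authors' GFN, which learns the encoders and adjacency end to end:

```python
import numpy as np

def graph_fusion(modality_feats):
    """Fuse per-modality feature vectors: vertices are modality
    embeddings, edges are cosine similarities, and the fused
    representation is the adjacency-weighted aggregate."""
    X = np.stack(modality_feats)                          # (M, d) vertex matrix
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    A = (X @ X.T) / (norms @ norms.T)                     # (M, M) cosine adjacency
    A = np.exp(A) / np.exp(A).sum(axis=1, keepdims=True)  # row-normalise weights
    return (A @ X).mean(axis=0)                           # aggregated representation

rng = np.random.default_rng(0)
video_feat = rng.normal(size=16)     # hypothetical video encoding
pressure_feat = rng.normal(size=16)  # hypothetical footstep-pressure encoding
fused = graph_fusion([video_feat, pressure_feat])
print(fused.shape)                   # (16,)
```

Because the fusion operates on however many vertices it is given, `graph_fusion([video_feat])` also works, which mirrors the abstract's point about tolerating missing modalities.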
4. Cuticle architecture and mechanical properties: a functional relationship delineated through correlated multimodal imaging
- Author
-
Nicolas Reynoud, Nathalie Geneix, Angelina D’Orlando, Johann Petit, Jeremie Mathurin, Ariane Deniset-Besseau, Didier Marion, Christophe Rothan, Marc Lahaye, Bénédicte Bakan, Unité de recherche sur les Biopolymères, Interactions Assemblages (BIA), Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE), BioInformatique et BioStatistiques (BIBS), Centre International de Recherche en Infectiologie (CIRI), École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Université Jean Monnet - Saint-Étienne (UJM)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Université Jean Monnet - Saint-Étienne (UJM)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Centre National de la Recherche Scientifique (CNRS), Biologie du fruit et pathologie (BFP), Université de Bordeaux (UB)-Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE), Institut de Chimie Physique (ICP), Institut de Chimie du CNRS (INC)-Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS), INRAE and Region Pays de la Loire (Fr)., and ANR-21-CE11-0035,COPLAnAR,COPLAnAR : cartography corrélative pour élucider les relations structure-propriétés de la cuticule des plantes(2021)
- Subjects
hyperspectral, plant cuticle, Solanum lycopersicum, Physiology, nanomechanical, AFM PF-QNM, [SDV.BV]Life Sciences [q-bio]/Vegetal Biology, correlated multimodal imaging, Plant Science, Raman
- Abstract
Cuticles are multifunctional hydrophobic biocomposites that protect the aerial organs of plants. Throughout plant development, the cuticle must accommodate different mechanical constraints, combining extensibility and stiffness, yet the corresponding structure-function relationships are unknown. Recent data showed fine tuning of the cuticle architecture and of the corresponding chemical clusters along fruit development, which raises the question of their impact on the mechanical properties of the cuticle. We investigated the in-depth nanomechanical properties of tomato fruit cuticle from early development to ripening, in relation to chemical and structural heterogeneities, by developing a correlative multimodal imaging approach. Unprecedented sharp heterogeneities were evidenced, with an in-depth mechanical gradient and a 'soft' central furrow that were maintained throughout development despite the overall increase in elastic modulus. In addition, we demonstrated that these local mechanical areas correlate with chemical and structural gradients. This study sheds light on the fine tuning of the mechanical properties of the cuticle through the modulation of its architecture, providing new insight into the structure-function relationships of the plant cuticle and into the design of bioinspired materials.
- Published
- 2023
- Full Text
- View/download PDF
5. Multimodal Barometric and Inertial Measurement Unit-Based Tactile Sensor for Robot Control
- Author
-
Uriel Martinez-Hernandez and Gorkem Anil AL
- Subjects
multimodal tactile sensor, Convolutional neural network (CNN)-based contact recognition, Sensors, Gyroscopes, CNN based contact recognition, Tactile sensors, Robot sensing systems, Accelerometers, Electrical and Electronic Engineering, Robots, Instrumentation, Force, robot control
- Abstract
In this article, we present a low-cost multimodal tactile sensor capable of providing accelerometer, gyroscope, and pressure data using a seven-axis chip as the sensing element. This approach reduces the complexity of the tactile sensor design and of multimodal data collection. The tactile device is composed of a top layer (a printed circuit board (PCB) and a sensing element), a middle layer (soft rubber material), and a bottom layer (plastic base), forming a sandwich structure. This structure allows the measurement of multimodal data when force is applied to different parts of the top layer of the sensor. The multimodal tactile sensor is validated with analyses and experiments both offline and in real time. First, the spatial impulse response and sensitivity of the sensor are analyzed using accelerometer, gyroscope, and pressure data systematically collected from the sensor. Second, the estimation of contact location from a range of sensor positions and force values is evaluated using accelerometer and gyroscope data together with a convolutional neural network (CNN) method. Third, the estimated contact location is used to control the position of a robot arm. The results show that the proposed multimodal tactile sensor has potential for robotic applications such as tactile perception for robot control, human-robot interaction, and object exploration.
- Published
- 2023
- Full Text
- View/download PDF
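The sensing pipeline in the abstract above (accelerometer, gyroscope, and pressure channels fed to a CNN that maps them to a contact location) can be sketched schematically. The channel layout, kernel size, and random weights below are assumptions standing in for the paper's trained network; the point is only the data flow from a 7-channel signal to an (x, y) estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 7-channel reading: 3-axis accelerometer, 3-axis gyroscope, pressure
T, C = 64, 7
signal = rng.normal(size=(C, T))

# One 1-D convolution layer (random weights stand in for trained ones)
K, F = 5, 8                         # kernel width, number of filters
W = rng.normal(size=(F, C, K)) * 0.1
feat = np.empty((F, T - K + 1))
for f in range(F):
    for t in range(T - K + 1):
        feat[f, t] = np.sum(W[f] * signal[:, t:t + K])
feat = np.maximum(feat, 0)          # ReLU
pooled = feat.mean(axis=1)          # global average pooling -> (F,)

# Linear readout to an (x, y) contact location on the sensor's top layer
W_out = rng.normal(size=(2, F)) * 0.1
xy = W_out @ pooled
print(xy.shape)                     # (2,)
```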
6. In vivo targeting and multimodal imaging of cerebral amyloid-β aggregates using hybrid GdF3 nanoparticles
- Author
-
Frédéric Lerouge, Elodie Ong, Hugo Rositi, Francis Mpambani, Lise-Prune Berner, Radu Bolbos, Cécile Olivier, Françoise Peyrin, Vinu K Apputukan, Cyrille Monnereau, Chantal Andraud, Frederic Chaput, Yves Berthezène, Bettina Braun, Mathias Jucker, Andreas KO Åslund, Sofie Nyström, Per Hammarström, K Peter R Nilsson, Mikael Lindgren, Marlène Wiart, Fabien Chauveau, Stephane Parola, Laboratoire de Chimie - UMR5182 (LC), École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Institut de Chimie du CNRS (INC)-Centre National de la Recherche Scientifique (CNRS), Cardiovasculaire, métabolisme, diabétologie et nutrition (CarMeN), Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Hospices Civils de Lyon (HCL)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE), Hospices Civils de Lyon (HCL), Institut Pascal (IP), Centre National de la Recherche Scientifique (CNRS)-Université Clermont Auvergne (UCA)-Institut national polytechnique Clermont Auvergne (INP Clermont Auvergne), Université Clermont Auvergne (UCA)-Université Clermont Auvergne (UCA), Centre de Recherche en Acquisition et Traitement de l'Image pour la Santé (CREATIS), Université de Lyon-Université de Lyon-Institut National des Sciences Appliquées de Lyon (INSA Lyon), Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Université Jean Monnet - Saint-Étienne (UJM)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Centre National de la Recherche Scientifique (CNRS), Centre d'Etude et de Recherche Multimodal Et Pluridisciplinaire en imagerie du vivant (CERMEP - imagerie du vivant), Centre Hospitalier Universitaire de Saint-Etienne [CHU Saint-Etienne] (CHU ST-E)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de 
Lyon-CHU Grenoble-Hospices Civils de Lyon (HCL)-Université Jean Monnet - Saint-Étienne (UJM)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA), Eberhard Karls Universität Tübingen = Eberhard Karls University of Tuebingen, Linköping University (LIU), Norwegian University of Science and Technology (NTNU), Centre de recherche en neurosciences de Lyon - Lyon Neuroscience Research Center (CRNL), Université de Lyon-Université de Lyon-Université Jean Monnet - Saint-Étienne (UJM)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Centre National de la Recherche Scientifique (CNRS), ANR-15-CE18-0026,NanoBrain,Imagerie de l'inflammation cérébrale dans l'AVC ischémique : développement d'une sonde nanoparticulaire multimodale & méthodes d'imagerie cérébrale(2015), and European Project: 242098,EC:FP7:HEALTH,FP7-HEALTH-2009-single-stage,LUPAS(2009)
- Subjects
[SDV.IB.IMA]Life Sciences [q-bio]/Bioengineering/Imaging, [SDV.NEU.NB]Life Sciences [q-bio]/Neurons and Cognition [q-bio.NC]/Neurobiology, Biomedical Engineering, Medicine (miscellaneous), multimodal imaging, General Materials Science, Bioengineering, luminescent-conjugated polythiophenes, Development, gadolinium fluoride nanoparticles, amyloid-beta
- Abstract
Aim: To propose a new multimodal imaging agent targeting amyloid-β (Aβ) plaques in Alzheimer’s disease. Materials & methods: A new generation of hybrid contrast agents, based on gadolinium fluoride nanoparticles grafted with a pentameric luminescent-conjugated polythiophene, was designed, extensively characterized and evaluated in animal models of Alzheimer’s disease through MRI, two-photon microscopy and synchrotron x-ray phase-contrast imaging. Results & conclusion: Two different grafting densities of luminescent-conjugated polythiophene were achieved while preserving colloidal stability and fluorescent properties, and without affecting biodistribution. In vivo brain uptake was dependent on the blood–brain barrier status. Nevertheless, multimodal imaging showed successful Aβ targeting in both transgenic mice and Aβ fibril-injected rats.
- Published
- 2022
- Full Text
- View/download PDF
7. MULTIMODAL IMAGING OF MULTIFOCAL CHOROIDITIS WITH ADAPTIVE OPTICS OPHTHALMOSCOPY
- Author
-
Sohani Amarasekera, Kunal K. Dansingani, K. Bailey Freund, Andrew M. Williams, and Ethan A. Rossi
- Subjects
Multimodal imaging, Male, Adult, Indocyanine Green, medicine.medical_specialty, Choroiditis, medicine.diagnostic_test, business.industry, Multifocal Choroiditis, General Medicine, Multimodal Imaging, Multifocal choroiditis, Ophthalmoscopy, Bevacizumab, Ophthalmology, Young Adult, medicine, Humans, Fluorescein Angiography, business, Adaptive optics, Tomography, Optical Coherence
- Abstract
To describe longitudinal, anatomical, and functional alterations caused by inflammatory and neovascular lesions of idiopathic multifocal choroiditis/punctate inner choroidopathy using adaptive optics imaging and microperimetry. Longitudinal case study using multiple imaging modalities, including spectral-domain optical coherence tomography, fluorescein angiography, indocyanine green angiography, optical coherence tomography angiography, flood-illumination adaptive optics, and microperimetry. A 21-year-old myopic Asian man presented with blurred vision in the right eye. Clinical examination was notable for an isolated hypopigmented perifoveal lesion in each eye. Multimodal imaging showed inflammatory lesions in the outer retina, retinal pigment epithelium, and inner choroid of both eyes. The right eye additionally exhibited active Type 2 macular neovascularization with loss of cone mosaic regularity that was associated with reduced sensitivity on microperimetry. The clinical picture was consistent with multifocal choroiditis/punctate inner choroidopathy. The patient was treated with oral steroids and three injections of intravitreal bevacizumab in the right eye. After therapy, imaging showed reestablishment of the cone mosaic on flood-illumination adaptive optics and improvement in sensitivity on microperimetry. Adaptive optics imaging and microperimetry may detect biomarkers that help characterize the nature and activity of multifocal choroiditis lesions and help monitor response to therapy. With timely intervention, structural abnormalities in the outer retina and choroid can be treated, and anatomical improvements precede improvements in visual function.
- Published
- 2023
8. Evaluation of a general model for multimodal unsaturated soil hydraulic properties
- Author
-
Seki, Katsutoshi, Toride, Nobuo, van Genuchten, Martinus Th., Environmental hydrogeology, Hydrogeology, Environmental hydrogeology, and Hydrogeology
- Subjects
Physics - Geophysics, Fluid Flow and Transfer Processes, Mechanical Engineering, Fluid Dynamics (physics.flu-dyn), Unsaturated hydraulic conductivity, FOS: Physical sciences, General hydraulic conductivity model, Physics - Fluid Dynamics, Multimodal hydraulic models, Water retention, Geophysics (physics.geo-ph), Water Science and Technology
- Abstract
Many soils and other porous media exhibit dual- or multi-porosity type features. In a previous study (Seki et al., 2022) we presented multimodal water retention and closed-form hydraulic conductivity equations for such media. The objective of this study is to show that the proposed equations are practically useful. Specifically, dual-BC (Brooks and Corey)-CH (common head) (DBC), dual-VG (van Genuchten)-CH (DVC), and KO (Kosugi)1BC2-CH (KBC) models were evaluated for a broad range of soil types. The three models showed good agreement with measured water retention and hydraulic conductivity data over a wide range of pressure heads. Results were obtained by first optimizing water retention parameters and then optimizing the saturated hydraulic conductivity (Ks) and two parameters (p, q) or (p, r) in the general hydraulic conductivity equation. Although conventionally the tortuosity factor p is optimized and (q, r) fixed, sensitivity analyses showed that optimization of two parameters (p + r, qr) is required for the multimodal models. For 20 soils from the UNSODA database, the average R² for log(hydraulic conductivity) was highest (0.985) for the KBC model with r = 1 and optimization of (Ks, p, q). This result was almost equivalent (0.973) to the DVC model with q = 1 and optimization of (Ks, p, r); both were higher than R² for the widely used Peters model (0.956) when optimizing (Ks, p, a, ω). The proposed equations are useful for practical applications while mathematically being simple and consistent.
- Published
- 2023
- Full Text
- View/download PDF
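The multimodal retention idea behind models like the DVC above is a weighted sum of van Genuchten sub-curves, each with effective saturation S_i = [1 + (α_i|h|)^n_i]^(-m_i) and m_i = 1 - 1/n_i. The sketch below is a generic dual-VG mixture; the parameter values are illustrative, not fitted, and the CH (common head) constraint of the paper's variants is not imposed:

```python
import numpy as np

def dual_vg_retention(h, w1, a1, n1, a2, n2, theta_r, theta_s):
    """Dual-van Genuchten water retention: theta(h) from a weighted
    sum of two VG sub-curves, Se = w1*S1 + (1 - w1)*S2, m_i = 1 - 1/n_i.
    h is the pressure head (its absolute value is used)."""
    h = np.abs(h)
    S1 = (1.0 + (a1 * h) ** n1) ** -(1.0 - 1.0 / n1)
    S2 = (1.0 + (a2 * h) ** n2) ** -(1.0 - 1.0 / n2)
    Se = w1 * S1 + (1.0 - w1) * S2
    return theta_r + (theta_s - theta_r) * Se

# Illustrative parameters: a coarse (structural) and a fine (matrix) pore system
heads = np.logspace(-1, 4, 6)        # |h| in cm, from near saturation to very dry
theta = dual_vg_retention(heads, w1=0.6, a1=0.05, n1=2.0,
                          a2=0.001, n2=1.4, theta_r=0.05, theta_s=0.45)
print(theta)                         # water content decreases monotonically with |h|
```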
9. The construction of interactive and multimodal reading in school—a performative, collaborative and dynamic reading
- Author
-
Ulrika Bodén, Linnéa Stenliden, and Jörgen Nissen
- Subjects
Visual Arts and Performing Arts, Interactive texts, multimodal texts, visual analytics, visual literacy, reading, secondary schools, socio-material relations, Communication, Pedagogical Work, Pedagogiskt arbete, Education
- Abstract
This study aims to demonstrate how interactions between a Visual Analytics (VA) application and students shape an interactive and multimodal reading practice. VA is a technology offering support with analysing vast amounts of data through visualisations. Such information-rich interactive interfaces provide possibilities for students to gain insights, find correlations, and draw conclusions, but they also generate complexities concerning how to ‘read’ multimodal information on a screen. Inspired by Design-Based Research, interventions were designed and conducted in five social science secondary classrooms. The interactions between the VA application Statistics eXplorer and the students were video captured. A socio-material semiotic approach guides the analyses of how interactions between all actors (the interactive visualisations, the written text, the teachers, students, etc.) produce a reading network. The results show a reading characterised by being performative, collaborative, and dynamic. A combination of visuals and text supports the reading. However, visuals such as colour, highlighting and movement dominantly attract students’ attention, while written text often becomes subordinate and sometimes even ‘invisible’. Hence, this paper argues that it is vital for teachers to didactically support students’ visual reading skills.
- Published
- 2023
- Full Text
- View/download PDF
10. Multimodal Identification by Transcriptomics and Multiscale Bioassays of Active Components in Xuanfeibaidu Formula to Suppress Macrophage-Mediated Immune Response
- Author
-
Hao Liu, Boli Zhang, Yingchao Wang, Shufang Wang, Lu Zhao, Yi Wang, Yiyu Cheng, and Dejin Xun
- Subjects
Inflammation, Environmental Engineering, General Computer Science, Materials Science (miscellaneous), General Chemical Engineering, Multimodal identification, General Engineering, Energy Engineering and Power Technology, Endogeny, Xuanfeibaidu Formula, Pharmacology, Article, Macrophage migration, Proinflammatory cytokine, Transcriptome, chemistry.chemical_compound, Immune system, chemistry, medicine, Macrophage, Macrophage activation, medicine.symptom, Kaempferol, Function (biology)
- Abstract
Xuanfeibaidu Formula (XFBD) is a Chinese medicine used in the clinical treatment of coronavirus disease 2019 (COVID-19) patients. Although XFBD has exhibited significant therapeutic efficacy in clinical practice, its underlying pharmacological mechanism remains unclear. Here, we combine a comprehensive research approach that includes network pharmacology, transcriptomics, and bioassays in multiple model systems to investigate the pharmacological mechanism of XFBD and its bioactive substances. High-resolution mass spectrometry was combined with molecular networking to profile the major active substances in XFBD. A total of 154 compounds were identified or tentatively characterized, including flavonoids, terpenes, carboxylic acids, and other types of constituents. Based on the chemical composition of XFBD, a network pharmacology-based analysis identified inflammation-related pathways as primary targets. We therefore examined the anti-inflammatory activity of XFBD in a lipopolysaccharide-induced acute inflammation mouse model. XFBD significantly alleviated pulmonary inflammation and decreased the levels of serum proinflammatory cytokines. Transcriptomic profiling suggested that genes related to macrophage function were differentially expressed after XFBD treatment. Consequently, the effects of XFBD on macrophage activation and mobilization were investigated in a macrophage cell line and a zebrafish wounding model. XFBD exerts strong inhibitory effects on both macrophage activation and migration. Moreover, through multimodal screening, we further identified the major components and compounds from the different herbs of XFBD that mediate its anti-inflammatory function. Active components from XFBD, including Polygoni cuspidati Rhizoma, Phragmitis Rhizoma, and Citri grandis Exocarpium rubrum, were found to strongly downregulate macrophage activation, and polydatin, isoliquiritin, and acteoside were identified as active compounds. Components of Artemisiae annuae Herba and Ephedrae Herba were found to substantially inhibit endogenous macrophage migration, effects attributed to ephedrine, atractylenolide, and kaempferol. In summary, our study explores the pharmacological mechanism and effective components of XFBD in inflammation regulation via multimodal approaches, thereby providing a biological illustration of the clinical efficacy of XFBD.
- Published
- 2023
- Full Text
- View/download PDF
11. Multimodal imaging distribution assessment of a liposomal antibiotic in an infectious disease model
- Author
-
Shih-Hsun Cheng, M. Reid Groseclose, Cindy Mininger, Mats Bergstrom, Lily Zhang, Stephen C. Lenhard, Tinamarie Skedzielewski, Zachary D. Kelley, Debra Comroe, Hyundae Hong, Haifeng Cui, Jennifer L. Hoover, Steve Rittenhouse, Stephen Castellino, Beat M. Jucker, and Hasan Alsaid
- Subjects
Mice, Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization, Positron-Emission Tomography, Liposomes, Animals, Humans, Pharmaceutical Science, Tissue Distribution, Multimodal Imaging, Lipids, Communicable Diseases, Anti-Bacterial Agents
- Abstract
Liposomes are promising targeted drug delivery systems with the potential to improve the efficacy and safety profile of certain classes of drugs. Though attractive, there are unique analytical challenges associated with the development of liposomal drugs including human dose prediction given these are multi-component drug delivery systems. In this study, we developed a multimodal imaging approach to provide a comprehensive distribution assessment for an antibacterial drug, GSK2485680, delivered as a liposomal formulation (Lipo680) in a mouse thigh model of bacterial infection to support human dose prediction. Positron emission tomography (PET) imaging was used to track the in vivo biodistribution of Lipo680 over 48 h post-injection providing a clear assessment of the uptake in various tissues and, importantly, the selective accumulation at the site of infection. In addition, a pharmacokinetic model was created to evaluate the kinetics of Lipo680 in different tissues. Matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS) was then used to quantify the distribution of GSK2485680 and to qualitatively assess the distribution of a liposomal lipid throughout sections of infected and non-infected hindlimb tissues at high spatial resolution. Through the combination of both PET and MALDI IMS, we observed excellent correlation between the Lipo680-radionuclide signal detected by PET with the GSK2485680 and lipid component signals detected by MALDI IMS. This multimodal translational method can reduce drug attrition by generating comprehensive biodistribution profiles of drug delivery systems to provide mechanistic insight and elucidate safety concerns. Liposomal formulations have potential to deliver therapeutics across a broad array of different indications, and this work serves as a template to aid in delivering future liposomal drugs to the clinic.
- Published
- 2022
- Full Text
- View/download PDF
12. Disease Progression Score Estimation From Multimodal Imaging and MicroRNA Data Using Supervised Variational Autoencoders
- Author
-
Virgilio Kmetzsch, Emmanuelle Becker, Dario Saracino, Daisy Rinaldi, Agnes Camuzat, Isabelle Le Ber, Olivier Colliot, Algorithms, models and methods for images and signals of the human brain (ARAMIS), Sorbonne Université (SU)-Inria de Paris, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Institut du Cerveau = Paris Brain Institute (ICM), Assistance publique - Hôpitaux de Paris (AP-HP) (AP-HP)-Institut National de la Santé et de la Recherche Médicale (INSERM)-CHU Pitié-Salpêtrière [AP-HP], Assistance publique - Hôpitaux de Paris (AP-HP) (AP-HP)-Sorbonne Université (SU)-Sorbonne Université (SU)-Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS)-Assistance publique - Hôpitaux de Paris (AP-HP) (AP-HP)-Institut National de la Santé et de la Recherche Médicale (INSERM)-CHU Pitié-Salpêtrière [AP-HP], Assistance publique - Hôpitaux de Paris (AP-HP) (AP-HP)-Sorbonne Université (SU)-Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS), Institut du Cerveau = Paris Brain Institute (ICM), Assistance publique - Hôpitaux de Paris (AP-HP) (AP-HP)-Sorbonne Université (SU)-Sorbonne Université (SU)-Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS), Dynamics, Logics and Inference for biological Systems and Sequences (Dyliss), Inria Rennes – Bretagne Atlantique, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-GESTION DES DONNÉES ET DE LA CONNAISSANCE (IRISA-D7), Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Université de Rennes (UR)-Institut National des Sciences Appliquées - Rennes (INSA Rennes), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Université de Bretagne Sud (UBS)-École normale supérieure - Rennes (ENS Rennes)-Institut National de 
Recherche en Informatique et en Automatique (Inria)-CentraleSupélec-Centre National de la Recherche Scientifique (CNRS)-IMT Atlantique (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)-Université de Rennes (UR)-Institut National des Sciences Appliquées - Rennes (INSA Rennes), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)-Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Université de Bretagne Sud (UBS)-École normale supérieure - Rennes (ENS Rennes)-CentraleSupélec-Centre National de la Recherche Scientifique (CNRS)-IMT Atlantique (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT), The research leading to these results has received funding from the French government under management of Agence Nationale de la Recherche, references ANR-19-P3IA-0001 (PRAIRIE 3IA Institute), ANR-10-IAIHU-06, project PREV-DEMALS (grant number ANR-14-CE15-0016-07), and from the Inria Project Lab Program (project Neuromarkers)., ANR-19-P3IA-0001,PRAIRIE,PaRis Artificial Intelligence Research InstitutE(2019), ANR-14-CE15-0016,PREV-DEMALS,Prédire pour prévenir les démences frontotemporales (DFT) et la sclérose latérale amyotrophique (SLA)(2014), Colliot, Olivier, PaRis Artificial Intelligence Research InstitutE - - PRAIRIE2019 - ANR-19-P3IA-0001 - P3IA - VALID, and Appel à projets générique - Prédire pour prévenir les démences frontotemporales (DFT) et la sclérose latérale amyotrophique (SLA) - - PREV-DEMALS2014 - ANR-14-CE15-0016 - Appel à projets générique - VALID
- Subjects
[SDV.IB.IMA]Life Sciences [q-bio]/Bioengineering/Imaging, [SDV.NEU.NB]Life Sciences [q-bio]/Neurons and Cognition [q-bio.NC]/Neurobiology, [INFO.INFO-IM]Computer Science [cs]/Medical Imaging, [INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV], Deep learning, MicroRNA, Neuroimaging, Health Informatics, Variational autoencoder, Neurodegenerative disease, Disease progression score, [INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI], Computer Science Applications, Multimodal data, [INFO.INFO-TI]Computer Science [cs]/Image Processing [eess.IV], Health Information Management, Electrical and Electronic Engineering, [SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing
- Abstract
Frontotemporal dementia and amyotrophic lateral sclerosis are rare neurodegenerative diseases with no effective treatment. The development of biomarkers allowing an accurate assessment of disease progression is crucial for evaluating new therapies. Concretely, neuroimaging and transcriptomic (microRNA) data have been shown to be useful in tracking their progression. However, no single biomarker can accurately measure progression in these complex diseases. Additionally, large samples are not available for such rare disorders. It is thus essential to develop methods that can model disease progression by combining multiple biomarkers from small samples. In this paper, we propose a new framework for computing a disease progression score (DPS) from cross-sectional multimodal data. Specifically, we introduce a supervised multimodal variational autoencoder that can infer a meaningful latent space, where latent representations are placed along a disease trajectory. A score is computed by orthogonal projection onto this path. We evaluate our framework with multiple synthetic datasets and with a real dataset containing 14 patients, 40 presymptomatic genetic mutation carriers and 37 controls from the PREV-DEMALS study. There is no ground truth for the DPS in real-world scenarios, so we use the area under the ROC curve (AUC) as a proxy metric. Results with the synthetic datasets support this choice, since the higher the AUC, the more accurate the predicted simulated DPS. Experiments with the real dataset demonstrate better performance in comparison with state-of-the-art approaches. The proposed framework thus leverages cross-sectional multimodal datasets with small sample sizes to objectively measure disease progression, with potential application in clinical trials.
- Published
- 2022
- Full Text
- View/download PDF
13. Role of vitamin C in multimodal analgesia for sleeve gastrectomy: a prospective randomized controlled trial
- Author
-
Mohamed Mamoun MD, Mostafa Abedelkhalek MD, and Mohamed Aboelela MD
- Subjects
Anesthesiology and Pain Medicine ,Critical Care and Intensive Care Medicine - Abstract
Background & Objective: Laparoscopic sleeve gastrectomy is a popular intervention in morbidly obese patients. Postoperative pain relief is a major challenge in this procedure, with known adverse effects on respiratory excursion and the patient's ability to cough. Various analgesic protocols have been tried to control this pain. Vitamin C is a water-soluble micronutrient with antioxidant and antinociceptive actions. We assessed the effect of vitamin C used as a component of a multimodal analgesic technique in this cohort of patients. Methodology: After obtaining ethical committee approval and trial registration, 50 patients scheduled for laparoscopic sleeve gastrectomy were enrolled in this study. The patients were randomly divided into two equal groups according to the study protocol. Group C received vitamin C 500 mg every 8 h for 5 days perioperatively, while Group N received placebo in the same fashion. The surgery was performed under routine general anesthesia. We monitored hemodynamics, VAS, rescue analgesia profile, and gastrointestinal side effects immediately after the operation and then at 1, 2, 4, 8, 12, and 24 h postoperatively. Postoperative morphine consumption was the primary outcome of the study. The independent-samples t-test, Mann-Whitney test, chi-square test, or Kruskal-Wallis test was used to detect statistical differences between the studied groups. P < 0.05 was considered significant. Results: Postoperative morphine consumption was lower in Group C than in Group N (16.36 ± 2.37 vs. 20 ± 4.13 mg, P = 0.001). The frequency of morphine administration was significantly lower in Group C than in Group N (P = 0.009). VAS was lower in Group C than in Group N at 4, 8, and 12 h postoperatively (P = 0.02, 0.001, 0.001), while other parameters were comparable.
Abbreviation: CBC: complete blood count; IRB: Institutional Research Board; LFTs: liver function tests; LSG: laparoscopic sleeve gastrectomy; MABP: mean arterial blood pressure; NMDA: N-methyl D-aspartate; OSAS: obstructive sleep apnea syndrome; PACTR: Pan African Clinical Trial Registry; PEEP: positive end expiratory pressure; PFT: pulmonary function tests; RFTs: renal function tests; TFTs: thyroid function tests; TOF: train of four; VAS: visual analogue score. Preregistration: Institutional Research Board, Mansoura Faculty of Medicine (IRB # R.20.02.738, Feb 15-2020). Pan African Clinical Trial Registry (PACTR-202003565796463, date of registration: March 02-2020). Conclusion: Vitamin C, via its antioxidant and antinociceptive properties and NMDA-receptor antagonist action, can be used as part of multimodal analgesic techniques. Vitamin C acts as a co-analgesic, improves the postoperative pain profile, and reduces the need for other analgesic modalities. Key words: Vitamin C; Analgesia; VAS; Pain; Sleeve Gastrectomy Citation: Aboelelea M, Abedelkhalek M, Mamoun M. Role of vitamin C in multimodal analgesia for sleeve gastrectomy: a prospective randomized controlled trial. Anaesth. pain intensive care 2023;27(1):82−88. DOI: 10.35975/apic.v27i1.2110 Received: October 02, 2022; Reviewed: October 25, 2022; Accepted: December 09, 2022
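The primary-outcome comparison above can be sanity-checked from the summary statistics alone. A minimal sketch, assuming the reported values are mean ± SD, group sizes of n = 25 (from the 50 patients split into two equal groups), and a pooled-variance (rather than Welch) form of the independent-samples t-test:

```python
import math

# Recompute the t statistic for morphine consumption:
# Group C 16.36 ± 2.37 mg vs. Group N 20 ± 4.13 mg, n = 25 per group.
def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic with pooled variance; returns (t, df)."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df
    return (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2)), df

t, df = pooled_t(16.36, 2.37, 25, 20.0, 4.13, 25)
# |t| comes out around 3.8 with df = 48, well past the ~2.01 two-sided 5%
# critical value, consistent with the reported P = 0.001.
```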
- Published
- 2023
- Full Text
- View/download PDF
14. Multimodal AutoML via Representation Evolution
- Author
-
Blaž Škrlj, Matej Bevec, and Nada Lavrač
- Subjects
AutoML ,representation learning ,evolution ,multimodal learning - Abstract
With the increasing amounts of available data, learning simultaneously from different types of inputs is becoming necessary to obtain robust and well-performing models. With the advent of representation learning in recent years, lower-dimensional vector-based representations have become available for both images and texts, while automating simultaneous learning from multiple modalities remains a challenging problem. This paper presents an AutoML (automated machine learning) approach to identifying machine learning model configurations for data composed of two modalities: texts and images. The approach is based on the idea of representation evolution, the process of automatically amplifying heterogeneous representations across several modalities, optimized jointly with a collection of fast, well-regularized linear models. The proposed approach is benchmarked against 11 unimodal and multimodal (texts and images) approaches on four real-life benchmark datasets from different domains. It achieves competitive performance with minimal human effort and low computing requirements, enabling learning from multiple modalities in an automated manner for a wider community of researchers.
- Published
- 2022
- Full Text
- View/download PDF
15. Multimodal Imaging of Pigmented Paravenous Retinochoroidal Atrophy in a Pediatric Patient with Cystoid Macular Edema
- Author
-
Cumali Değirmenci and JALE MENTEŞ
- Subjects
Inflammation ,child ,macular edema ,pigmented paravenous retinochoroidal atrophy ,multimodal imaging ,fluorescence angiography ,optical coherence tomography angiography ,Ophthalmology ,female ,atrophy ,case report ,Humans ,human ,Fluorescein Angiography ,Cystoid macular edema - Abstract
The aim of this case report is to present the multimodal imaging characteristics of pigmented paravenous retinochoroidal atrophy (PPRCA) in a pediatric patient with cystoid macular edema (CME). A 7-year-old girl was admitted to our clinic with complaints of mild blurred vision and poor night vision. Best corrected visual acuity was 10/10 in both eyes. Fundus examination showed atrophic areas around the optic nerve and along the retinal vessels in both eyes. A few small dot-shaped paravenous pigmentations were observed in the mid-peripheral retina. Fundus autofluorescence was consistent with PPRCA. Spectral-domain optical coherence tomography (OCT) revealed the presence of CME and loss of the outer retinal layers outside the macula, with intact retinal layers in the macula. OCT angiography revealed normal choriocapillaris vasculature and flow. The patient was followed up for 6 months but showed no change in CME or clinical appearance. CME without ocular inflammation is an unusual finding of PPRCA and may suggest the involvement of chronic or latent inflammation in the etiology of PPRCA. © 2022 by Turkish Ophthalmological Association.
- Published
- 2022
- Full Text
- View/download PDF
16. Reliability in the identification of metaphors in (filmic) multimodal communication
- Author
-
Marianna Bolognesi, Lorena Bort-Mir, Bort-Mir L., and Bolognesi M.
- Subjects
Linguistics and Language ,Communication ,FILMIP ,multimodal communication ,multimodal metaphors ,metaphor identification - Abstract
Research on multimodal communication is complex because multimodal analyses require methods and procedures that offer the possibility of disentangling the layers of meaning conveyed through different channels. We hereby propose an empirical evaluation of the Filmic Metaphor Identification Procedure (FILMIP, Bort-Mir, L. (2019). Developing, applying and testing FILMIP: the filmic metaphor identification procedure, Ph.D. dissertation. Universitat Jaume I, Castellón.), a structural method for the identification of metaphorical elements in (filmic) multimodal materials. The paper comprises two studies: (i) A content analysis conducted by independent coders, in which the reliability of FILMIP is assessed. Here, two TV commercials were shown to 21 Spanish participants for later analysis with the use of FILMIP under two questionnaires. (ii) A qualitative analysis based on a percentage agreement index to check agreement among the 21 participants about the metaphorically marked filmic components identified on the basis of FILMIP’s seven steps. The results of the two studies show that FILMIP is a valid and reliable tool for the identification of metaphorical elements in (filmic) multimodal materials. The empirical findings are discussed in relation to multimodal communication open challenges.
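The percentage agreement index used in study (ii) can be sketched as the share of coder pairs that assign the same code to each item. The codings below are invented toy data, not the study's 21-participant results:

```python
from itertools import combinations

def percent_agreement(codings):
    """codings: one list per item, containing each coder's code for that item.
    Returns the percentage of agreeing coder pairs across all items."""
    agree = total = 0
    for item in codings:
        for x, y in combinations(item, 2):
            agree += (x == y)
            total += 1
    return 100 * agree / total

# 3 filmic components coded by 4 coders ("M" = metaphorical, "N" = not)
toy = [["M", "M", "M", "N"],
       ["N", "N", "N", "N"],
       ["M", "M", "N", "N"]]
pa = percent_agreement(toy)   # 11 agreeing pairs out of 18 -> about 61.1%
```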
- Published
- 2022
- Full Text
- View/download PDF
17. Toward a General Framework for Multimodal Big Data Analysis
- Author
-
Valerio Bellandi, Paolo Ceravolo, Samira Maghool, and Stefano Siccardi
- Subjects
Big Data ,Data Analysis ,Machine Learning ,data fusion ,Electronic Data Processing ,multimodal analysis ,Information Systems and Management ,Settore INF/01 - Informatica ,Information Storage and Retrieval ,big graph ,Computer Science Applications ,Information Systems - Abstract
Multimodal analytics in Big Data architectures implies compounded configurations of the data processing tasks. Each modality in the data requires specific analytics that trigger specific data processing tasks. Scalability can be reached only at the cost of carefully calibrating the resources shared by the different tasks, searching for a trade-off among the multiple requirements they impose. We propose a methodology that addresses multimodal analytics within a single data processing approach, yielding a simplified architecture that can fully exploit the parallel processing potential of Big Data infrastructures. Multiple data sources are first integrated into a unified knowledge graph (KG). Different modalities of data are addressed by specifying
- Published
- 2022
- Full Text
- View/download PDF
18. Deep multimodal predictome for studying mental disorders
- Author
-
Md Abdur Rahaman, Jiayu Chen, Zening Fu, Noah Lewis, Armin Iraji, Theo G. M. van Erp, and Vince D. Calhoun
- Subjects
Neural Networks ,Neuroimaging ,Basic Behavioral and Social Science ,Computer ,single nucleotide polymorphism ,Behavioral and Social Science ,Genetics ,Humans ,Radiology, Nuclear Medicine and imaging ,multimodal deep learning ,Radiological and Ultrasound Technology ,saliency ,Mental Disorders ,Neurosciences ,functional network connectivity ,Experimental Psychology ,Serious Mental Illness ,Magnetic Resonance Imaging ,Brain Disorders ,resting-state functional and structural MRI ,Mental Health ,Good Health and Well Being ,Neurology ,schizophrenia classification ,Schizophrenia ,Cognitive Sciences ,Neurology (clinical) ,Anatomy - Abstract
Characterizing neuropsychiatric disorders is challenging due to heterogeneity in the population. We propose combining structural and functional neuroimaging and genomic data in a multimodal classification framework to leverage their complementary information. Our objectives are two-fold: (i) to improve the classification of disorders and (ii) to introspect the concepts learned to explore underlying neural and biological mechanisms linked to mental disorders. Previous multimodal studies have focused on naïve neural networks, mostly perceptrons, to learn modality-wise features, and often assume an equal contribution from each modality. Our focus is on the development of neural networks for feature learning and on implementing an adaptive control unit for the fusion phase. Our mid-fusion-with-attention model includes a multilayer feed-forward network, an autoencoder, a bi-directional long short-term memory unit with attention as the feature extractor, and a linear attention module for controlling modality-specific influence. The proposed model acquired 92% (p
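A "linear attention module for controlling modality-specific influence", as named in the abstract, can be illustrated with a toy softmax-weighted fusion. The feature dimensions, modality names, and random values below are assumptions for illustration only; the paper's actual extractors (autoencoder, BLSTM, etc.) are not modeled:

```python
import numpy as np

# Each modality's feature vector gets a scalar score from a learned linear
# map, and softmax turns those scores into fusion weights.
rng = np.random.default_rng(0)

def attention_fuse(features, w):
    """features: dict of modality -> 1-D feature vector (equal length).
    w: linear scoring vector. Returns (fused vector, per-modality weights)."""
    names = list(features)
    X = np.stack([features[m] for m in names])   # (n_modalities, d)
    scores = X @ w                               # one scalar per modality
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                         # softmax attention weights
    return alpha @ X, dict(zip(names, alpha))

feats = {"sMRI": rng.standard_normal(8),
         "fMRI": rng.standard_normal(8),
         "SNP":  rng.standard_normal(8)}
fused, weights = attention_fuse(feats, rng.standard_normal(8))
```

Unlike a fixed average, the weights adapt to the input, so one modality can dominate the fused representation for one subject and not for another.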
- Published
- 2022
- Full Text
- View/download PDF
19. Paracetamol for multimodal analgesia
- Author
-
Ulderico Freo
- Subjects
Analgesics, Opioid ,multimodal analgesia ,paracetamol ,Pain, Postoperative ,Anti-Inflammatory Agents, Non-Steroidal ,Humans ,Neuralgia ,Pain Management ,General Medicine ,Analgesia ,Analgesics, Non-Narcotic ,Acetaminophen - Abstract
Pain and related disability remain a major social and therapeutic problem. Comorbidities and therapies increase drug interactions and side effects, making pain management more complicated, especially in the elderly, who are the fastest-growing pain population. Multimodal analgesia consists of using two or more drugs and/or techniques that target different sites of pain, increasing the level of analgesia and decreasing adverse events from treatment. Paracetamol enhances multimodal analgesia in experimental and clinical pain states. Strong preclinical evidence supports that paracetamol has additive and synergistic interactions with anti-inflammatory, opioid and anti-neuropathic drugs in rodent models of nociceptive and neuropathic pain. Clinical studies in young adult and elderly patients confirm the utility of paracetamol in multimodal, non-opioid or opioid-sparing, therapies for the treatment of acute and chronic pain. Opioid and anti-inflammatory drugs are essential medications to relieve pain; however, they may pose a serious health risk, especially in elderly patients and in patients with medical conditions. Doctors are studying ways to reduce or eliminate their use. We wanted to see how well paracetamol works together with other painkillers to manage pain. Paracetamol (or acetaminophen) is one of the most prescribed medications for fever and pain. We found strong evidence that paracetamol given in association with other analgesic drugs enhances pain relief in adult and in elderly patients, even though more studies are warranted in the latter. The use of paracetamol in combination with other analgesics is recommended by physicians and surgeons of different specialties.
- Published
- 2022
- Full Text
- View/download PDF
20. Putamen Structure and Function in Familial Risk for Depression: A Multimodal Imaging Study
- Author
-
Ardesheer, Talati, Milenna T, van Dijk, Lifang, Pan, Xuejun, Hao, Zhishun, Wang, Marc, Gameroff, Zhengchao, Dong, Jürgen, Kayser, Stewart, Shankman, Priya J, Wickramaratne, Jonathan, Posner, and Myrna M, Weissman
- Subjects
Depressive Disorder, Major ,Depression ,Putamen ,Humans ,Genetic Predisposition to Disease ,Prospective Studies ,Creatine ,Magnetic Resonance Imaging ,Multimodal Imaging ,Biological Psychiatry - Abstract
The putamen has been implicated in depressive disorders, but how its structure and function increase depression risk is not clearly understood. Here, we examined how putamen volume, neuronal density, and mood-modulated functional activity relate to family history and prospective course of depression. The study includes 115 second- and third-generation offspring at high or low risk for depression based on the presence or absence of major depressive disorder in the first generation. Offspring were followed longitudinally using semistructured clinical interviews blinded to their familial risk; putamen structure, neuronal integrity, and functional activation were indexed by structural magnetic resonance imaging (MRI), proton magnetic resonance spectroscopy (N-acetylaspartate/creatine ratio), and functional MRI activity modulated by valence and arousal components of a mood induction task, respectively. After adjusting for covariates, the high-risk individuals had lower putamen volume. Findings demonstrate abnormalities in putamen structure and function in individuals at high risk for major depressive disorder. Future studies should focus on this region as a potential biomarker for depressive illness, noting meanwhile that differences attributable to family history may peak at different ages based on which MRI modality is being used to assay them.
- Published
- 2022
- Full Text
- View/download PDF
21. Multimodal imaging in a classic case of unilateral retinocytoma
- Author
-
Sameeksha Agrawal, Ramesh Venkatesh, Nikitha Gurram Reddy, and Arpitha Pereira
- Subjects
Multimodal imaging ,medicine.medical_specialty ,business.industry ,Retinoblastoma ,Retinocytoma ,Eye Neoplasms ,Retinal Neoplasms ,General Medicine ,medicine.disease ,Multimodal Imaging ,Retinal Diseases ,Medicine ,Humans ,Radiology ,business - Abstract
Retinoma or retinocytoma is a spontaneously arrested or spontaneously regressed variant of retinoblastoma. With the advent of the latest non-invasive imaging techniques, it is possible to evaluate the microstructural and microvascular changes associated with this tumour. Although a few reports describe the imaging findings in retinocytoma, information regarding retinocytoma on multicolour imaging is lacking. Here, we describe the multimodal imaging features in a patient with classic features of retinocytoma, with special emphasis on its multicolour imaging features.
- Published
- 2023
22. Integrating Chemical Language and Physicochemical Features for Enhanced Molecular Property Prediction with Multimodal Language Models
- Author
-
Soares, Eduardo Almeida, Brazil, Emilio Vital, KAREN F. A. GUTIERREZ, RENATO CERQUEIRA, DANIEL P. SANDERS, KRISTIN SCHMIDT, and DMITRY ZUBAREV
- Subjects
Biodegradability ,PFAS Toxicity ,MultiModal ,Language Models - Abstract
Here we present a novel multimodal language model (MultiModal-MoLFormer) approach for predicting molecular properties, which combines chemical language representation embeddings derived from the recently introduced MoLFormer chemical language model with physicochemical features. Our approach employs a causal multi-stage feature selection method that selects physicochemical features based on their direct causal effect on the target property to be predicted. Specifically, we use Mordred descriptors as physicochemical features and Markov blanket causal graphs as the inference algorithm to identify the most relevant features. Our results demonstrate that our proposed approach outperforms existing state-of-the-art algorithms, including the chemical language-based MoLFormer and graph neural networks, in predicting complex tasks such as the biodegradability of general compounds and PFAS toxicity estimation. The MultiModal-MoLFormer model resulted in a significant improvement in classification accuracy for EPA categories of PFAS toxicity, from 0.75 to 0.84, when compared to the base MoLFormer approach. Additionally, our proposed approach achieves an accuracy of 0.94 on the biodegradability estimation task.
- Published
- 2023
- Full Text
- View/download PDF
23. ANÁLISE MULTIMODAL: NOÇÕES E PROCEDIMENTOS FUNDAMENTAIS
- Author
-
Paulo Roberto Gonçalves-Segundo and Theodoro Casalotti Farhat
- Subjects
LINGUAGEM GESTUAL ,multimodal analysis ,realization ,Facebook ,instantiation ,análise multimodal ,multimodalidade ,realização ,multimodality ,instanciação - Abstract
Drawing on Systemic Functional Theory and the decompositional model proposed by Bateman, Wildfeuer and Hiippala (2017), this paper outlines notions and procedures we consider fundamental for the analysis of multimodal texts, particularly digital ones, by providing a theoretical and methodological model that enables a greater commitment to a clear and replicable organization of the analytical procedure. We introduce the fundamental systemic functional concepts of realization and instantiation and their implications for the study of multimodality; next, we present the notion of ‘canvas’ and the multidimensional classification of semiotic materialities; then, we make explicit the basic steps for the substantiation of the analysis of multimodal texts; and, finally, we exemplify the model with a cursory analysis of the canvas composition of Facebook posts.
- Published
- 2022
- Full Text
- View/download PDF
24. New genres and new approaches: teaching and assessing product pitches from a multimodal perspective in the ESP classroom
- Author
-
Fortanet Gómez, Inmaculada and Edo Marzá, Nuria
- Subjects
Linguistics and Language ,Multimodal awareness ,Teaching-learning cycle ,Ciclo de enseñanza-aprendizaje ,Product Pitch (PP) ,Evaluación multimodal ,Apoyo ,Scaffolding ,Multimodal assessment ,Percepción de la multimodalidad ,Language and Linguistics ,Education - Abstract
One of the most innovative genres in today’s business communication is the product pitch (PP), mainly characterised by its multisemiotic nature (Daly & Davy, 2016), which makes it essential to take a multimodal approach to the analysis and teaching of this genre (Tamarit & Skorczynska, 2014; Valeiras-Jurado, 2017). However, despite the increasing importance of this oral genre in the business field, little research has been conducted on the teaching of PPs to business ESP students. The purpose of this paper is therefore to present an innovative pedagogical model consisting of a learner-led genre-based pedagogy built around a teaching-learning cycle that fosters critical thinking and multimodal awareness (Querol-Julián & Fortanet-Gómez, 2019). Following the four stages of the cycle proposed, and with the constant scaffolding of the lecturer in the initial stages, a group of tertiary business students were asked to decode the multimodal ensembles of a YouTube PP and subsequently create their own PPs. These samples were assessed multimodally by the rest of the class (peer review) and by the teacher using an “all-mode-inclusive” rubric. This innovative pedagogical approach to a new genre increased students’ motivation and multimodal awareness, surpassing the traditionally exclusively language-bound teaching and assessment of ESP. 
Funding: Ministry of Science, Innovation and Universities of the Spanish Government (grant number PGC2018-094823-B-I00); Research Promotion Plan at Universitat Jaume I (Spain) (grant ID: UJI-B2020-09).
- Published
- 2022
- Full Text
- View/download PDF
25. Estimation of Transfer Time from Multimodal Transit Services in the Paris Region
- Author
-
Fabien Leurent and BIAO YIN
- Subjects
multimodal transit ,average wait time ,transit speed ,transfer time ,linear regression model - Abstract
A reliable public transport system is beneficial for people traveling in the metropolitan area. Transfer time in multimodal transit networks has been highlighted as one of the measures of public transport service quality. In this paper, we propose a novel method to estimate the passengers’ transfer time between the transit modes (i.e., train, metro, and bus) based on the 2018 Household Travel Survey in the Paris region, France. The transit trips with a single transit leg are primarily studied, wherein average wait time and mode speeds are estimated through an integrated linear regression model. Based on these inferences, transfer time is deduced within the trips of multiple transit legs. The decomposition procedure of journey time facilitates the estimation of the time components, and reveals the transfer variability in mode, time, and space. From the results, we find that the transfer to the railway modes, especially to the metro, costs less time on average than the transfer to the bus in the study area. The transfer patterns in the morning and evening peak hours are different regarding the transfer duration and locations. Lastly, the results’ reliability, method scalability, and potential applications are discussed in detail.
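The decomposition this abstract describes (journey time ≈ average wait time plus distance divided by mode speed, per leg) can be sketched as an ordinary least-squares fit: regressing journey time on per-mode distances yields an intercept (wait, in minutes) and per-km slopes whose inverses are mode speeds. All numbers below are synthetic; the paper's survey data and model details are not reproduced:

```python
import numpy as np

# Simulate single-transfer-free trips: time = wait + 60*(d/speed) + noise.
rng = np.random.default_rng(1)
n = 200
d_metro = rng.uniform(1, 10, n)              # km travelled by metro
d_bus = rng.uniform(0, 5, n)                 # km travelled by bus
wait, v_metro, v_bus = 6.0, 30.0, 15.0       # assumed "true" values
time_min = (wait + 60 * (d_metro / v_metro + d_bus / v_bus)
            + rng.normal(0, 0.5, n))         # observed journey times (min)

# Linear regression: intercept = wait, slopes = minutes per km per mode.
X = np.column_stack([np.ones(n), d_metro, d_bus])
coef, *_ = np.linalg.lstsq(X, time_min, rcond=None)
est_wait = coef[0]                           # close to 6 min average wait
est_speeds = 60 / coef[1:]                   # close to [30, 15] km/h
```

With wait time and mode speeds estimated this way from single-leg trips, the residual time in multi-leg trips can be attributed to transfers, which is the abstract's deduction step.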
- Published
- 2022
- Full Text
- View/download PDF
26. The situated deployment of the Italian presentative (e) hai. . ., ‘(and) you have. . .’ within routinized multimodal Gestalts in route mapping with visually impaired climbers
- Author
-
Monica Simone, Renata Galatolo, Simone, Monica, and Galatolo, Renata
- Subjects
Linguistics and Language ,Social Psychology ,Anthropology ,Communication ,Coach-athlete interaction, conversation analysis, grammar-body interface, interactional linguistics, intercorporeality, multimodal Gestalts, paraclimbing, presentative constructions ,Language and Linguistics - Abstract
Drawing on video-recorded data from pre-climbing route mapping with visually impaired climbers and a sight guide, this study uses conversation analysis to investigate the situated deployment of the Italian presentative (e) hai ‘(and) you have’ within locally routinized multimodal Gestalts. The study shows that the guide uses (e) hai to progress route mapping and engage the athlete in tactile actions that target specific features of the route. In this context, (e) hai is packaged with noun phrases, silent pauses, bodily movements, and touch. The arrangement of such syntactic and embodied components is shown to follow a recurrent trajectory in which, between (e) hai and its grammatical completion, syntactic suspension creates a dedicated slot for guide and athlete to physically attain the target object. Routine embeddedness of (e) hai within such arrangement is shown to provide specific affordances to the athletes to anticipate subsequent action and engage in its embodied implementation.
- Published
- 2022
- Full Text
- View/download PDF
27. Examining socially shared regulation and shared physiological arousal events with multimodal learning analytics
- Author
-
Andy Nguyen, Sanna Järvelä, Carolyn Rosé, Hanna Järvenoja, and Jonna Malmberg
- Subjects
multimodal data ,socially shared regulation ,physiological data ,collaborative learning ,process mining ,Education - Abstract
Socially shared regulation contributes to the success of collaborative learning. However, the assessment of socially shared regulation of learning (SSRL) faces several challenges that limit understanding and support of collaborative learning outcomes, because the related cognitive and emotional processes are unobservable. The recent development of trace-based assessment offers innovative opportunities to overcome this problem. Despite the potential of a trace-based approach to study SSRL, there remains a paucity of evidence on how trace-based evidence can be captured and utilised to assess and promote SSRL. This study investigates the assessment of electrodermal activity (EDA) data to understand and support SSRL in collaborative learning, and hence enhance learning outcomes. The data collection involved secondary school students (N = 94) working collaboratively in groups through five science lessons. A multimodal data set of EDA and video data was examined to assess the relationship between shared arousal and interactions for SSRL. The results reveal patterns linking students’ physiological activities to their SSRL interactions, providing trace-based evidence for adaptive and maladaptive patterns of collaborative learning. Furthermore, our findings provide evidence of how trace-based data can be utilised to predict learning outcomes in collaborative learning.
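One simple way to operationalize "shared physiological arousal events" is to flag time points where every group member's EDA is simultaneously above a per-person z-score threshold. The signals, the z > 1 cut-off, and the group size below are invented assumptions; the study's actual event definition may differ:

```python
import numpy as np

def shared_arousal(eda, z_thresh=1.0):
    """eda: (n_members, n_samples) array of EDA signals.
    Returns a boolean mask of samples where all members exceed their
    own z-score threshold at the same time."""
    z = (eda - eda.mean(axis=1, keepdims=True)) / eda.std(axis=1, keepdims=True)
    return (z > z_thresh).all(axis=0)

rng = np.random.default_rng(2)
eda = rng.standard_normal((3, 500))   # 3 members, 500 samples of fake EDA
eda[:, 200:210] += 3.0                # inject one synchronized arousal burst
mask = shared_arousal(eda)            # True almost exclusively inside the burst
```

Standardizing per person before thresholding matters because baseline skin conductance varies widely across individuals.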
- Published
- 2022
- Full Text
- View/download PDF
28. Multimodal Dyadic Impression Recognition via Listener Adaptive Cross-Domain Fusion
- Author
-
Li, Yuanchao, Bell, Peter, and Lai, Catherine
- Subjects
FOS: Computer and information sciences ,Sound (cs.SD) ,cs.MM ,impression recognition ,Affective computing ,Multimodal Fusion ,Computer Science - Sound ,Multimedia (cs.MM) ,cs.SD ,Audio and Speech Processing (eess.AS) ,FOS: Electrical engineering, electronic engineering, information engineering ,eess.AS ,Computer Science - Multimedia ,Electrical Engineering and Systems Science - Audio and Speech Processing - Abstract
As a sub-branch of affective computing, impression recognition, e.g., perception of speaker characteristics such as warmth or competence, is potentially a critical part of both human-human conversations and spoken dialogue systems. Most research has studied impressions only from the behaviors expressed by the speaker or the response from the listener, ignoring their latent connection. In this paper, we perform impression recognition using a proposed listener-adaptive cross-domain architecture, which consists of a listener adaptation function to model the causality between speaker and listener behaviors and a cross-domain fusion function to strengthen their connection. The experimental evaluation on the dyadic IMPRESSION dataset verified the efficacy of our method, producing concordance correlation coefficients of 78.8% and 77.5% in the competence and warmth dimensions, outperforming previous studies. The proposed method is expected to generalize to similar dyadic interaction scenarios. Accepted to ICASSP2023. arXiv admin note: substantial text overlap with arXiv:2203.13932
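The evaluation metric reported here, the concordance correlation coefficient (Lin's CCC), is standard and can be computed directly. The arrays below are toy data, not the IMPRESSION dataset:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()      # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

labels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
perfect = ccc(labels, labels)        # 1.0: identical sequences
shifted = ccc(labels, labels + 1.0)  # 0.8: penalized for the mean shift
```

Unlike plain Pearson correlation, the CCC penalizes mean and scale differences between predictions and labels, which is why a constant shift lowers it below 1.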
- Published
- 2023
- Full Text
- View/download PDF
29. Multimodal registration across 3D point clouds and CT-volumes
- Author
-
Saiti, E. and Theoharis, T.
- Subjects
Human-Computer Interaction ,Multimodal ,3D volume ,General Engineering ,3D registration ,Data fusion ,Computer Graphics and Computer-Aided Design ,3D point cloud ,Alignment - Abstract
Multimodal registration is a challenging problem in visual computing, commonly faced during medical image-guided interventions, data fusion and 3D object retrieval. The main challenge of multimodal registration is finding accurate correspondences between modalities, since different modalities do not exhibit the same characteristics. This paper explores how the coherence of different modalities can be utilized for the challenging task of 3D multimodal registration. A novel deep learning multimodal registration framework is proposed, introducing a siamese architecture especially designed for aligning and fusing modalities with different structural and physical principles. Cross-modal attention blocks lead the network to establish correspondences between features of different modalities. The proposed framework focuses on the alignment of 3D point clouds with micro-CT 3D volumes of the same object. A multimodal dataset consisting of real micro-CT scans and their synthetically generated 3D models (point clouds) is presented and utilized for evaluating our methodology.
- Published
- 2022
- Full Text
- View/download PDF
30. An Application of Multimodal Text-Based Literacy Activities in Enhancing Early Children’s Literacy
- Author
-
Fatmawati, Endang, Saputra, Nanda, Ngongo, Magdalena, Purba, Ridwin, Herman, Herman, and Informasi dan Humas, Universitas Diponegoro
- Subjects
General Medicine ,literacy ,multimodal ,early childhood - Abstract
One of the most important development programs in early childhood education is language development. The purpose of this study was to develop literacy activities using Multimodal Text-Based Literacy Activities for young children at a kindergarten in Pematangsiantar. A qualitative descriptive method was applied. The data source was 26 early-childhood students. Observation, interviews, and documentation were used to obtain the data. Technique triangulation and source triangulation were used in the data analysis, and the collected data were interpreted using qualitative descriptive analysis. The study found an improvement in the comfort of the students' learning environment, with an average increase of 15.8%, indicating that these multimodal-based literacy activities are feasible to develop and implement with young children. The researchers also suggest that schools recognize the importance of building a good literacy environment in the classroom to support the application of multimodal-based literacy for the development of children's literacy skills.
- Published
- 2022
- Full Text
- View/download PDF
31. Multimodal Approach of Isolated Pulmonary Vasculitis: A Single-Institution Experience
- Author
-
Sehnaz Olgun Yildizeli, Halil Atas, G. Nural Bekiroğlu, Ahmet Zengin, Nevsun Inanc, Atakan Erkılınç, Emine Bozkurtlar, Haner Direskeneli, Fatma Alibaz-Oner, Bu¨lent Mutlu, Bedrettin Yildizeli, Mehmed Yanartaş, Cagatay Cimsit, Serpil Taş, and Ayşe Zehra Karakoç
- Subjects
Vasculitis ,Pulmonary and Respiratory Medicine ,medicine.medical_specialty ,Hypertension, Pulmonary ,Endarterectomy ,Disease ,Pulmonary Artery ,Pulmonary endarterectomy ,medicine.artery ,medicine ,Humans ,Pulmonary artery stenosis ,business.industry ,Multimodal therapy ,Middle Aged ,medicine.disease ,Surgery ,Chronic Disease ,Pulmonary artery ,Etiology ,Female ,Chronic thromboembolic pulmonary hypertension ,Pulmonary Embolism ,Cardiology and Cardiovascular Medicine ,business - Abstract
Background Isolated pulmonary vasculitis (IPV) is a single-organ vasculitis of unknown etiology that may mimic chronic thromboembolic pulmonary hypertension (CTEPH). The aim of this study was to review our clinical experience with pulmonary endarterectomy in patients with CTEPH secondary to IPV. Methods Data were collected prospectively for consecutive patients who underwent pulmonary endarterectomy and had a diagnosis of IPV at or after surgery. Results We identified nine patients (six female, median age 48 (23–55) years) with IPV. The diagnosis was confirmed after histopathological examination of all surgical materials. The mean duration of disease before surgery was 88.0 ± 70.2 months. Exercise-induced dyspnea was the presenting symptom in all patients. Pulmonary endarterectomy was bilateral in six patients and unilateral in three. No mortality was observed; however, one patient developed pulmonary artery stenosis and underwent stent implantation. All patients received immunosuppressive therapies after surgery. Mean pulmonary artery pressure decreased significantly from 30 (19–67) mm Hg to 21 (15–49) mm Hg after surgery (p Conclusions Isolated pulmonary vasculitis can mimic CTEPH, and these patients can be diagnosed by pulmonary endarterectomy. Furthermore, surgery has not only diagnostic but also therapeutic value for IPV when stenotic and/or thrombotic lesions are surgically accessible. A multidisciplinary, experienced CTEPH team is critical for the management of these unique patients.
- Published
- 2022
- Full Text
- View/download PDF
32. Multimodal MRI Reconstruction Assisted With Spatial Alignment Network
- Author
-
Kai Xuan, Lei Xiang, Xiaoqian Huang, Lichi Zhang, Shu Liao, Dinggang Shen, and Qian Wang
- Subjects
FOS: Computer and information sciences ,Radiological and Ultrasound Technology ,Computer Vision and Pattern Recognition (cs.CV) ,Image and Video Processing (eess.IV) ,Computer Science - Computer Vision and Pattern Recognition ,Electrical Engineering and Systems Science - Image and Video Processing ,Magnetic Resonance Imaging ,Multimodal Imaging ,Computer Science Applications ,Image Processing, Computer-Assisted ,FOS: Electrical engineering, electronic engineering, information engineering ,Humans ,Electrical and Electronic Engineering ,Software - Abstract
In clinical practice, multi-modal magnetic resonance imaging (MRI) with different contrasts is usually acquired in a single study to assess different properties of the same region of interest in the human body. The whole acquisition process can be accelerated by having one or more modalities under-sampled in the k-space. Recent research has shown that, considering the redundancy between different modalities, a target MRI modality under-sampled in the k-space can be more efficiently reconstructed with a fully-sampled reference MRI modality. However, we find that the performance of the aforementioned multi-modal reconstruction can be negatively affected by subtle spatial misalignment between different modalities, which is actually common in clinical practice. In this paper, we improve the quality of multi-modal reconstruction by compensating for such spatial misalignment with a spatial alignment network. First, our spatial alignment network estimates the displacement between the fully-sampled reference and the under-sampled target images, and warps the reference image accordingly. Then, the aligned fully-sampled reference image joins the multi-modal reconstruction of the under-sampled target image. Also, considering the contrast difference between the target and reference images, we have designed a cross-modality-synthesis-based registration loss in combination with the reconstruction loss, to jointly train the spatial alignment network and the reconstruction network. The experiments on both clinical MRI and multi-coil k-space raw data demonstrate the superiority and robustness of the multi-modal MRI reconstruction empowered with our spatial alignment network. Our code is publicly available at https://github.com/woxuankai/SpatialAlignmentNetwork. Final version published in IEEE Transactions on Medical Imaging.
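The warp-then-reconstruct idea can be sketched in one dimension: estimate a displacement field, warp the reference by interpolation, then combine a reconstruction loss with a registration loss on the warped reference. This is a toy illustration under simplifying assumptions (1-D signals and plain L2 losses, rather than the paper's cross-modality-synthesis-based loss):

```python
import numpy as np

def warp_1d(ref, disp):
    # Warp a 1-D reference signal by a per-sample displacement field
    # using linear interpolation (a minimal stand-in for image warping).
    x = np.clip(np.arange(ref.size) + disp, 0, ref.size - 1)
    lo = np.floor(x).astype(int)
    hi = np.minimum(lo + 1, ref.size - 1)
    w = x - lo
    return (1 - w) * ref[lo] + w * ref[hi]

def joint_loss(target_recon, target_gt, ref, disp, lam=0.1):
    # Reconstruction loss on the target plus a registration loss that
    # encourages the warped reference to align with the target.
    rec = np.mean((target_recon - target_gt) ** 2)
    reg = np.mean((warp_1d(ref, disp) - target_gt) ** 2)
    return rec + lam * reg
```

In the paper both terms are used to train the alignment and reconstruction networks jointly; here the weighting `lam` is an arbitrary illustrative choice.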
- Published
- 2022
- Full Text
- View/download PDF
33. Marked improvement in hyperammonaemic encephalopathy from multimodal treatment of metastatic neuroendocrine tumour
- Author
-
Sally Louise Ayesa, Stephen Clarke, Alexander E. Davis, and David Chan
- Subjects
Oncology ,medicine.medical_specialty ,Poor prognosis ,Peptide receptor ,Encephalopathy ,Case Report ,New diagnosis ,03 medical and health sciences ,0302 clinical medicine ,Internal medicine ,medicine ,Multimodal treatment ,Humans ,Hyperammonemia ,Patient group ,Brain Diseases ,business.industry ,Incidence (epidemiology) ,General Medicine ,Middle Aged ,medicine.disease ,Combined Modality Therapy ,Neuroendocrine tumour ,Neuroendocrine Tumors ,030220 oncology & carcinogenesis ,030211 gastroenterology & hepatology ,Female ,Neurotoxicity Syndromes ,business - Abstract
Gastroenteropancreatic neuroendocrine tumours (GEPNETs) are a heterogeneous group of tumours which are rising in incidence. Morbidity and mortality related to these tumours depend on the location of metastatic spread. Hyperammonaemia and subsequent encephalopathy have previously been described in GEPNET and are typically associated with a poor prognosis. We describe a case of a 55-year-old woman with hyperammonaemic encephalopathy and a new diagnosis of GEPNET. Given the poor prognosis and typical outcomes in this patient group, this case highlights the benefit of a multimodality treatment approach including peptide receptor radionuclide therapy and transarterial chemoembolisation.
- Published
- 2023
34. AircraftVerse: A Large-Scale Multimodal Dataset of Aerial Vehicle Designs
- Author
-
Cobb, Adam D., Roy, Anirban, Elenius, Daniel, F. Michael Heim, Swenson, Brian, Whittington, Sydney, Walker, James D., Bapty, Theodore, Hite, Joseph, Ramani, Karthik, McComb, Christopher, and Jha, Susmit
- Subjects
FOS: Computer and information sciences ,Multi-Physics ,Computer Science - Artificial Intelligence ,Aircrafts ,Computational Engineering, Finance, and Science (cs.CE) ,Machine Learning ,Computer Science - Robotics ,Artificial Intelligence (cs.AI) ,Deep Learning ,AI ,Generative AI ,Multimodal ,CAD ,Computer Science - Computational Engineering, Finance, and Science ,Robotics (cs.RO) - Abstract
We present AircraftVerse, a publicly available aerial vehicle design dataset. Aircraft design encompasses different physics domains and, hence, multiple modalities of representation. The evaluation of these cyber-physical system (CPS) designs requires the use of scientific analytical and simulation models ranging from computer-aided design tools for structural and manufacturing analysis, computational fluid dynamics tools for drag and lift computation, battery models for energy estimation, and simulation models for flight control and dynamics. AircraftVerse contains 27,714 diverse air vehicle designs - the largest corpus of engineering designs with this level of complexity. Each design comprises the following artifacts: a symbolic design tree describing topology, propulsion subsystem, battery subsystem, and other design details; a STandard for the Exchange of Product (STEP) model data; a 3D CAD design using a stereolithography (STL) file format; a 3D point cloud for the shape of the design; and evaluation results from high fidelity state-of-the-art physics models that characterize performance metrics such as maximum flight distance and hover-time. We also present baseline surrogate models that use different modalities of design representation to predict design performance metrics, which we provide as part of our dataset release. Finally, we discuss the potential impact of this dataset on the use of learning in aircraft design and, more generally, in CPS. AircraftVerse is accompanied by a data card, and it is released under Creative Commons Attribution-ShareAlike (CC BY-SA) license. 
The dataset is hosted at https://zenodo.org/record/6525446, baseline models and code at https://github.com/SRI-CSL/AircraftVerse, and the dataset description at https://aircraftverse.onrender.com/.
- Published
- 2023
- Full Text
- View/download PDF
35. Unlocking a multimodal archive of Southern Chinese martial arts through embodied cues
- Author
-
Yumeng Hou, Fadel Mamar Seydou, and Sarah Kenderdine
- Subjects
Multimodal retrieval ,Knowledge organisation ,Computational archival science ,Movement computing ,Martial arts ,Intangible cultural heritage ,Library and Information Sciences ,Information Systems - Abstract
Purpose Despite being an authentic carrier of various cultural practices, the human body is often underutilised as a means of accessing cultural knowledge. Digital inventions today have created new avenues to open up cultural data resources, yet mainly as apparatuses for well-annotated and object-based collections. Hence, there is a pressing need to empower the representation of intangible expressions, particularly embodied knowledge within its cultural context. To address this issue, the authors propose to inspect the potential of machine learning methods to enhance archival knowledge interaction with intangible cultural heritage (ICH) materials. Design/methodology/approach This research adopts a novel approach by combining movement computing with knowledge-specific modelling to support retrieval through embodied cues, applied to a multimodal archive documenting the cultural heritage (CH) of Southern Chinese martial arts. Findings Through experimenting with a retrieval engine implemented using the Hong Kong Martial Arts Living Archive (HKMALA) datasets, this work validated the effectiveness of the developed approach in multimodal content retrieval and highlighted its potential for facilitating archival exploration and knowledge discoverability. Originality/value This work takes a knowledge-specific approach to invent an intelligent encoding approach through a deep-learning workflow. This article underlines that the convergence of algorithmic reckoning and content-centred design holds promise for transforming the paradigm of archival interaction, thereby augmenting knowledge transmission via more accessible CH materials.
- Published
- 2023
- Full Text
- View/download PDF
36. Interactive robot teaching based on finger trajectory using multimodal RGB-D-T-data
- Author
-
Zhang, Yan, Fütterer, Richard, Notni, Gunther, and Publica
- Subjects
Artificial Intelligence ,point cloud processing ,multimodal image processing ,meshless finite difference solution ,finger trajectory recognition ,Computer Science Applications - Abstract
The concept of Industry 4.0 is changing industrial manufacturing patterns, which are becoming more efficient and more flexible. In response to this tendency, efficient robot teaching approaches that avoid complex programming have become a popular research direction. We therefore propose an interactive finger-touch based robot teaching schema using multimodal 3D image (color (RGB), thermal (T) and point cloud (3D)) processing. The heat trace left where the finger touches the object surface is analyzed across the multimodal data in order to precisely identify the true hand/object contact points, and these identified contact points are used to calculate the robot path directly. To optimize the identification of the contact points, we propose a calculation scheme using a number of anchor points, which are first predicted by hand/object point cloud segmentation. Subsequently, a probability density function is defined to calculate the prior probability distribution of the true finger trace. The temperature in the neighborhood of each anchor point is then dynamically analyzed to calculate the likelihood. Experiments show that the trajectories estimated by our multimodal method have significantly better accuracy and smoothness than those obtained by analyzing only the point cloud and the static temperature distribution.
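The prior-times-likelihood step can be sketched as a discrete Bayesian update over candidate anchor points. The expected touch temperature and spread below are illustrative assumptions, not values from the paper:

```python
import math

def posterior_contact(anchors, prior_pdf, temps, t_contact=30.0, sigma=2.0):
    # Combine a prior over anchor points (from hand/object point-cloud
    # segmentation) with a Gaussian likelihood on residual surface temperature:
    # anchors whose local temperature is close to the expected touch
    # temperature are favored as true finger-trace contact points.
    post = []
    for a, t in zip(anchors, temps):
        lik = math.exp(-((t - t_contact) ** 2) / (2 * sigma ** 2))
        post.append(prior_pdf(a) * lik)
    z = sum(post)
    return [p / z for p in post]
```

With a uniform prior, the anchor whose neighborhood temperature best matches the assumed contact temperature receives the highest posterior mass.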
- Published
- 2023
- Full Text
- View/download PDF
37. EMM-LC Fusion: Enhanced Multimodal Fusion for Lung Cancer Classification
- Author
-
James Barrett and Thiago Viana
- Subjects
QA75 ,deep learning ,lung cancer ,machine learning ,multimodal fusion ,General Earth and Planetary Sciences ,General Environmental Science - Abstract
Lung cancer (LC) is the most common cause of cancer-related deaths in the UK due to delayed diagnosis. The existing literature establishes a variety of contributing factors, including the misjudgement of anatomical structures by doctors and radiologists. This study set out to develop a solution which utilises multiple modalities to detect the presence of LC. A review of the existing literature identified the failure of existing methods to exploit rich intermediate feature representations that can capture complex multimodal associations between heterogeneous data sources. The methodological approach involved the development of a novel machine learning (ML) model to facilitate quantitative analysis. The proposed solution, named EMM-LC Fusion, extracts intermediate features from a pre-trained modified AlignedXception model and concatenates these with linearly inflated features of Clinical Data Elements (CDE). The implementation was evaluated and compared against the existing literature using F1 score, average precision (AP), and area under the curve (AUC) as metrics. The findings presented in this study show a statistically significant improvement (p < 0.05) upon the previous fusion method, with an increase in F1 score from 0.402 to 0.508. This establishes that the extraction of intermediate features produces a fertile environment for the detection of intermodal relationships in LC classification. This research also provides an architecture to facilitate the future implementation of alternative biomarkers for lung cancer, one of the acknowledged limitations of this study.
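The concatenation-based fusion described above can be sketched generically: low-dimensional clinical features are linearly inflated to match the scale of the image features, then concatenated. The projection below is random and fixed-seed purely for illustration; in the paper it would be learned, and all dimensions here are assumptions:

```python
import numpy as np

def inflate(cde, dim, seed=0):
    # Linearly inflate low-dimensional clinical data elements (CDE) to `dim`
    # via a fixed random projection (a stand-in for a learned linear layer).
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((cde.size, dim))
    return cde @ W

def emm_lc_style_fuse(img_feat, cde):
    # Concatenate intermediate image features with inflated clinical features,
    # giving downstream layers access to both modalities at comparable scale.
    return np.concatenate([img_feat, inflate(cde, img_feat.size)])
```

A downstream classifier would then operate on the fused vector rather than on either modality alone.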
- Published
- 2022
- Full Text
- View/download PDF
38. Are Modic changes ‘Primary infective endplatitis’?—insights from multimodal imaging of non-specific low back pain patients and development of a radiological ‘Endplate infection probability score’
- Author
-
S. Rajasekaran, B. T. Pushpa, Dilip Chand Raja Soundararajan, K. S. Sri Vijay Anand, Chandhan Murugan, Meena Nedunchelian, Rishi Mugesh Kanna, Ajoy Prasad Shetty, Chitraa Tangavel, and Raveendran Muthurajan
- Subjects
Radiography ,Lumbar Vertebrae ,Humans ,Orthopedics and Sports Medicine ,Surgery ,Intervertebral Disc Degeneration ,Low Back Pain ,Magnetic Resonance Imaging ,Multimodal Imaging ,Probability - Abstract
To probe the pathophysiological basis of Modic change (MC) by multimodal imaging rather than by MRI alone. Nineteen radiological signs found in mild infections and traumatic endplate fractures were identified by MRI and CT, and, by elimination, three signs unique to infection and trauma were distilled. By ranking the Z score, a radiological ‘Endplate Infection Probability Score’ (EIPS) was developed. The score's ability to differentiate infection and traumatic endplate changes (EPC) was validated in a fresh set of 15 patients each with documented infection and trauma. The EIPS, ESR, CRP, and Numeric Pain Rating Scale (NRS) were then compared between 115 patients with and 80 patients without MC. The EIPS had a confidence of 66.4%, 83%, and 100% for scores of 4, 5, and 6, respectively, for endplate changes suggesting infection. The mean EIPS was 4.85 ± 1.94 in patients with Modic changes compared to −0.66 ± 0.49 in patients without Modic changes (p < 0.001). Seventy-eight (67.64%) patients with MC had a score of 6, indicating high infection possibility. There was a difference in the NRS (p < 0.01), ESR (p = 0.05), CRP (p < 0.01), and type of pain (p < 0.01) between patients with and without MC. Multimodal imaging showed many radiological signs not easily seen in MRI alone and thus missed in Modic classification. There were distinct radiological differences between EPCs of trauma and infection which allowed the development of an EIPS. The scores showed that 67.64% of our study patients with Modic changes had EPCs resembling infection rather than trauma, suggesting the possibility of an infective aetiology and allowing us to propose an alternate theory of ‘Primary Endplatitis’.
- Published
- 2022
- Full Text
- View/download PDF
39. Predicting Human Intentions in Human–Robot Hand-Over Tasks Through Multimodal Learning
- Author
-
Rui Li, Yunyi Jia, Weitian Wang, Yi Sun, and Yi Chen
- Subjects
Multimodal learning ,Control and Systems Engineering ,Computer science ,Human–computer interaction ,Task analysis ,Robot ,Wearable computer ,Electrical and Electronic Engineering ,Natural language ,Human–robot interaction ,Task (project management) ,Extreme learning machine - Abstract
In human-robot shared manufacturing contexts, the hand-over of product parts or tools between the robot and the human is an important collaborative task. Enabling the robot to correctly figure out and predict human hand-over intentions, and thereby improve task efficiency in human-robot collaboration, is therefore a necessary issue to be addressed. In this study, a teaching-learning-prediction (TLP) framework is proposed for the robot to learn from its human partner's multimodal demonstrations and predict human hand-over intentions. In this approach, the robot can be programmed by the human through demonstrations utilizing natural language and wearable sensors, according to task requirements and the human's working preferences. The robot then learns from human hand-over demonstrations online via extreme learning machine (ELM) algorithms to update its cognition capacity, allowing it to use the learned policy to predict human intentions actively and assist its human companion in hand-over tasks. Experimental results and evaluations suggest that the human can easily reprogram the robot with the proposed approach when the task changes, as the robot effectively predicts hand-over intentions with competitive accuracy to complete the hand-over tasks.
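The online-learning component relies on extreme learning machines, in which the hidden-layer weights are random and fixed and only the output weights are solved in closed form by least squares. A minimal generic sketch (the toy regression data and layer sizes are assumptions, not the paper's setup):

```python
import numpy as np

def elm_fit(X, y, hidden=20, seed=0):
    # Extreme learning machine: random fixed hidden layer, closed-form
    # least-squares solution for the output weights only.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: recover y = x on [-1, 1]
X = np.linspace(-1, 1, 40).reshape(-1, 1)
y = X.ravel()
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
```

Because only a linear least-squares problem is solved per update, ELMs are cheap enough to retrain online as new demonstrations arrive, which is what makes them attractive here.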
- Published
- 2022
- Full Text
- View/download PDF
40. Multi-Label and Multimodal Classifier for Affective States Recognition in Virtual Rehabilitation
- Author
-
Lorena Palafox, María del Carmen Lara, Nadia Berthouze, Amanda C de C Williams, Enrique Sucar, Luis R. Castrejón, Jorge Hernández-Franco, Jesus Joel Rivas, and Felipe Orihuela-Espina
- Subjects
Technology ,classifier chains ,Computer science ,1702 Cognitive Sciences ,Speech recognition ,Context (language use) ,facial expressions ,Computer Science, Artificial Intelligence ,Affective states ,virtual rehabilitation ,Naive Bayes classifier ,FACE ,EMOTION ,0801 Artificial Intelligence and Image Processing ,medicine ,Computer Science, Cybernetics ,affective states' dependency relationships ,posture ,multi-label classification ,Facial expression ,Science & Technology ,hand movements ,multimodal classification ,Process (computing) ,stroke ,Human-Computer Interaction ,Semi-Naive Bayesian classifier ,0806 Information Systems ,Computer Science ,finger pressure ,Anxiety ,Virtual rehabilitation ,Mutual exclusion ,medicine.symptom ,Classifier (UML) ,Software - Abstract
Computational systems that process multiple affective states may benefit from explicitly considering the interaction between the states to enhance their recognition performance. This work proposes the combination of a multi-label classifier, Circular Classifier Chain (CCC), with a multimodal classifier, Fusion using a Semi-Naive Bayesian classifier (FSNBC), to explicitly include the dependencies between multiple affective states during the automatic recognition process. This combination of classifiers is applied to a virtual rehabilitation context for post-stroke patients. We collected data from post-stroke patients, including finger pressure, hand movements, and facial expressions during ten longitudinal sessions. Videos of the sessions were labelled by clinicians to recognize four states: tiredness, anxiety, pain, and engagement. Each state was modelled by an FSNBC receiving the information of finger pressure, hand movements, and facial expressions. The four FSNBCs were linked in the CCC to exploit the dependency relationships between the states. The CCC converged within at most 5 iterations for all patients. Results (ROC AUC) of the CCC with the FSNBC are over 0.940 ± 0.045 (mean ± std. deviation) for the four states. Relationships of mutual exclusion between engagement and all the other states, and co-occurrences between pain and anxiety, were detected and discussed.
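The circular chain can be sketched generically: each label's classifier sees the current estimates of the other labels, and the chain is iterated until the estimates stabilize. The toy predictors below (one state suppressing another, mimicking the mutual exclusion found between engagement and pain) are illustrative assumptions, not the FSNBC models:

```python
def circular_chain(predictors, x, max_iter=10, tol=1e-6):
    # predictors[i](x, others) -> probability of label i given the features x
    # and the current estimates of all the other labels; iterate the chain
    # until the per-label estimates stop changing.
    labels = [0.5] * len(predictors)
    for _ in range(max_iter):
        prev = list(labels)
        for i, f in enumerate(predictors):
            labels[i] = f(x, labels[:i] + labels[i + 1:])
        if all(abs(a - b) < tol for a, b in zip(labels, prev)):
            break
    return labels

# Two toy states: the first is driven by the features alone, the second
# is mutually exclusive with the first.
predictors = [lambda x, o: 0.9, lambda x, o: 1.0 - o[0]]
states = circular_chain(predictors, x=None)
```

The circularity means every label eventually conditions on every other, which is how dependency relationships such as mutual exclusion propagate through the chain.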
- Published
- 2022
- Full Text
- View/download PDF
41. Evaluating the Effects of Disruptions on the Behavior of Travelers in a Multimodal Network Utilizing Agent-Based Simulation
- Author
-
Mahsa Rahimi Siegrist, Beda Büchel, and Francesco Corman
- Subjects
Multimodal network ,Mechanical Engineering ,Information system ,Traveler's behaviour ,Intelligent transportation systems ,Public transportation ,Civil and Structural Engineering - Abstract
Disruptions in transport networks have major adverse implications for passengers and service providers, as they can yield delays, decreased productivity, and inconvenience for travelers. Previous studies have considered the vulnerability of connections and infrastructures. Although such studies provide insights on general disruption management approaches, there is a lack of knowledge concerning integrated multi-level traffic management and its effects on travelers in reducing the impacts of disruptions. Integrated multi-level traffic management refers to coordinating individual network operations to create an interconnected mobility management system. This study assessed the management of road disruptions using a multi-level disruption management approach that integrates an information dissemination strategy and allows parking spaces to be converted into traffic lanes to facilitate the movement of travelers. The capacity/frequency of public transport vehicles is also increased to help travelers reach their destinations by switching to public transport. To achieve these goals, an extension to an agent-based simulation was developed, and numerical experiments were applied to a part of the city of Zürich. The results indicate that the proposed approach, multi-level disruption management in a multimodal network, can shorten travelers' delays, particularly when the effects of disruption management are compared. Results also show heterogeneity of behavior among agents: adding lanes as a disruption management measure increases car use among all agents overall, whereas it reduces car use among the directly affected agents, those who cannot pass the disrupted roads. In the presence of full information and increased capacity of transit vehicles, delay is reduced by 47%. Transportation Research Record, 2676(11), ISSN 0361-1981, ISSN 2169-4052.
- Published
- 2022
- Full Text
- View/download PDF
42. Following the Light: Use of Multimodal Imaging and Fiber Optic Spectroscopy to Evaluate Aging in Daylight Fluorescent Artists’ Pigments
- Author
-
Fiona Beckett and Aaron Shugar
- Subjects
fluorescent ,pigments ,fiber optic spectroscopy ,multimodal imaging ,Day-Glo ,fading ,genetic structures ,sense organs - Abstract
Daylight fluorescent artists’ colors are well established as fugitive: upon exposure to light, these vibrant colors can fade and exhibit color shifts. Artwork containing these fluorescent colorants presents complex challenges for art conservators faced with conserving inherently problematic materials. This paper examined nine fluorescent colorants obtained from Kremer Pigmente, with reference to previous literature and research, and attempted to quantify the visual and photographic observations of fading and color changes, providing additional information that could be useful for conservation documentation and treatment. Fiber optic spectroscopy using ultraviolet and visible light sources was used to measure the spectral shifts of the colorants before and after exposure to light. The fluorescent colors exhibited alterations in intensity coupled with primary peak shifts in the spectrum, corresponding to the optical fading and color shifts. Multimodal imaging was performed to analyze the pigments in different regions of the spectrum before and after aging, which has not been documented before with these fluorescent colorants. Imaging in various regions of the spectrum indicated differences in absorption and reflectance between the pigments as captured by a modified camera. The results were compared to recently published research, including the identification of the dyes present in the Kremer line of pigments. Multimodal imaging and fiber optic spectroscopy provided valuable information for future documentation and conservation of artworks containing these colorants. Specifically, these non-invasive techniques provide a method to document and identify the spectral changes between aged and unaged pigment, to graph and predict the direction of overall color change, and to provide useful data for establishing future conservation treatment protocols.
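The primary-peak shift measured by fiber optic spectroscopy can be quantified simply: locate the wavelength of maximum intensity in the unaged and aged spectra and take the difference. A minimal sketch (the example wavelengths and intensities are invented for illustration):

```python
def peak_shift(wavelengths, before, after):
    # Shift of the primary emission peak (same units as `wavelengths`, e.g. nm)
    # between the unaged (`before`) and aged (`after`) spectra.
    p_before = wavelengths[max(range(len(before)), key=before.__getitem__)]
    p_after = wavelengths[max(range(len(after)), key=after.__getitem__)]
    return p_after - p_before
```

Tracking this value across aging intervals, alongside the intensity change at the peak, gives a numeric record of the color drift described above.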
- Published
- 2022
- Full Text
- View/download PDF
43. Virtual Reality and Embodiment in Multimodal Meaning Making
- Author
-
Kathy A. Mills, Laura Scholes, and Alinta Brown
- Subjects
Literature and Literary Theory ,composition ,Communication ,multimodal design ,virtual reality ,digital media ,embodiment - Abstract
Immersive virtual reality (VR) technology is becoming widespread in education, yet research on VR technologies for students’ multimodal communication is an emerging area in writing and literacies scholarship. Likewise, the significance of new ways of embodied meaning making in VR environments is undertheorized—a gap that requires attention given the potential for broadened use of the sensorium in multimodal language and literacy learning. This classroom research investigated multimodal composition using the virtual paint program Google Tilt Brush™ with 47 elementary school students (ages 10–11 years) using a head-mounted display and motion sensors. Multimodal analysis of video, screen capture, and think-aloud data attended to sensory-motor affordances and constraints for embodiment. Modal constraints were the immateriality of the virtual text, virtual disembodiment, and somatosensory mismatch between the virtual and physical worlds. Potentials for new forms of embodied multimodal representation in VR involved extensive bodily, haptic, and locomotive movement. The findings are significant given that research on embodied cognition points to sensorimotor action as the basis for language and communication.
- Published
- 2022
- Full Text
- View/download PDF
44. Prevalence, multimodal imaging and genotype-phenotype assessment of trauma-related subretinal fibrosis in Stargardt disease
- Author
-
B Jimenez-Rolando, B Garcia-Sandoval, M Del Pozo-Valero, C Ayuso, M Garcia-Ferreira, M Abellanas, S Campos-Seco, and E Carreño
- Subjects
genetic structures ,General Medicine ,Fibrosis ,Multimodal Imaging ,eye diseases ,Lipofuscin ,Ophthalmology ,Phenotype ,Prevalence ,Humans ,Stargardt Disease ,Fluorescein Angiography ,Tomography, Optical Coherence ,Retrospective Studies - Abstract
Background and Objectives Stargardt disease produces lipofuscin accumulation, predisposing to subretinal fibrosis (SRFib) after ocular trauma; noninvasive imaging techniques allow its in vivo assessment. The purpose of this study is to determine the prevalence of SRFib in a cohort of Stargardt patients, the presence of a history of ocular trauma, and the clinical features and possible genotype-phenotype associations in Stargardt patients with SRFib. Methods We retrospectively evaluated 106 Stargardt patients and analysed the multimodal imaging and genotype of patients with SRFib. Results Six patients exhibited SRFib, three of them with a history of ocular trauma. Multimodal imaging showed extensive SRFib, principally in the temporal midperipheral retina, with no associated fluid. SRFib was better defined by short wavelength autofluorescence and spectral domain optical coherence tomography and appeared clinically stable over time. No particular genotype was associated with SRFib. Conclusion SRFib occurs in a significant percentage of patients with Stargardt disease and can be diagnosed through multimodal imaging regardless of a history of trauma, further underscoring the importance of appropriate imaging in such patients. No genotype-phenotype association was established, supporting a traumatic etiology in half of the cases. The remaining cases may be classified as idiopathic, or may reflect a minimal trauma occurring early in life that is not recalled by the patients.
- Published
- 2022
- Full Text
- View/download PDF
45. A novel multimodal fusion network based on a joint-coding model for lane line segmentation
- Author
-
Zhiwei Li, Zhenhong Zou, Xinyu Zhang, Huaping Liu, Amir Hussain, and Jun Li
- Subjects
Computer science ,Pipeline (computing) ,Perspective (graphical) ,Ranging ,Information theory ,computer.software_genre ,Multimodal learning ,Hardware and Architecture ,Signal Processing ,Segmentation ,Data mining ,computer ,Software ,Information Systems ,Communication channel ,Coding (social sciences) - Abstract
There has recently been growing interest in using multimodal sensors to achieve robust lane line segmentation. In this paper, we introduce a novel multimodal fusion architecture from an information-theoretic perspective and demonstrate its practical utility using Light Detection and Ranging (LiDAR)-camera fusion networks. In particular, we develop, for the first time, a multimodal fusion network as a joint coding model, in which each node, layer, and pipeline is represented as a channel; forward propagation is thus equivalent to information transmission through the channels. This lets us qualitatively and quantitatively analyze the effect of different fusion approaches. We argue that the optimal fusion architecture is related to the essential capacity and its allocation based on the source and channel. To test this multimodal fusion hypothesis, we progressively construct a series of multimodal models based on the proposed fusion methods and evaluate them on the KITTI and A2D2 datasets. Our optimal fusion network achieves over 85% lane line accuracy and over 98.7% overall accuracy. The performance gap among the models will inform future research on optimal fusion algorithms for the deep multimodal learning community.
- Published
- 2022
- Full Text
- View/download PDF
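As an illustration of the design space the abstract above analyzes, the sketch below contrasts early (feature-level) and late (score-level) fusion of camera and LiDAR feature maps in NumPy. The shapes, random features, and fusion rules are illustrative assumptions, not the paper's joint-coding model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature maps: an 8-channel camera feature and a 4-channel LiDAR
# feature projected onto the same 16x16 grid (shapes are illustrative).
cam_feat = rng.standard_normal((8, 16, 16))
lidar_feat = rng.standard_normal((4, 16, 16))

def early_fusion(a, b):
    """Feature-level fusion: concatenate along the channel axis before
    any shared processing."""
    return np.concatenate([a, b], axis=0)

def late_fusion(a, b, w=0.5):
    """Decision-level fusion: reduce each modality to a per-pixel score,
    then blend the two score maps."""
    return w * a.mean(axis=0) + (1 - w) * b.mean(axis=0)

fused_early = early_fusion(cam_feat, lidar_feat)
fused_late = late_fusion(cam_feat, lidar_feat)
print(fused_early.shape)  # (12, 16, 16)
print(fused_late.shape)   # (16, 16)
```

In the paper's channel view, these two extremes differ in where capacity is allocated: early fusion lets shared layers exploit cross-modal correlations, while late fusion keeps each modality's pipeline independent until the final decision.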
46. A Multilayer and Multimodal-Fusion Architecture for Simultaneous Recognition of Endovascular Manipulations and Assessment of Technical Skills
- Author
-
Zeng-Guang Hou, Zhen-Liang Ni, Yan-Jie Zhou, Xiao-Hu Zhou, Shiqi Liu, Zhenqiu Feng, Xiao-Liang Xie, Gui-Bin Bian, and Rui-Qi Li
- Subjects
Multimodal fusion ,Computer science ,medicine.medical_treatment ,Machine learning ,computer.software_genre ,Clinical success ,Motion (physics) ,Percutaneous Coronary Intervention ,medicine ,Humans ,Learning ,Electrical and Electronic Engineering ,Technical skills ,Architecture ,business.industry ,Percutaneous coronary intervention ,Computer Science Applications ,Human-Computer Interaction ,Control and Systems Engineering ,Clinical Competence ,Artificial intelligence ,business ,computer ,Algorithms ,Software ,Clinical skills ,Information Systems - Abstract
The clinical success of percutaneous coronary intervention (PCI) is highly dependent on the endovascular manipulation skills and dexterous manipulation strategies of interventionalists. However, analysis of endovascular manipulations and related work on technical skill assessment remain limited. In this study, a multilayer and multimodal-fusion architecture is proposed to recognize six typical endovascular manipulations. Synchronously acquired multimodal motion signals from ten subjects are used independently as inputs to the architecture. Six classification-based and two rule-based fusion algorithms are evaluated for performance comparison. The recognition metrics under the determined architecture are further used to assess technical skills. The experimental results indicate that the proposed architecture achieves an overall accuracy of 96.41%, much higher than that of a single-layer recognition architecture (92.85%). In addition, multimodal fusion brings significant performance improvement over single-modal schemes. Furthermore, K-means-based skill assessment achieves an accuracy of 95% in clustering the attempts made by groups of different skill levels. These promising results indicate the architecture's strong potential to facilitate clinical skill assessment and skill learning.
- Published
- 2022
- Full Text
- View/download PDF
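The abstract above mentions two rule-based fusion algorithms without naming them; a common pair in classifier fusion is the sum rule and the product rule over per-modality class posteriors. The sketch below is a generic illustration with made-up probabilities for six manipulation classes, not the paper's implementation.

```python
import numpy as np

# Hypothetical softmax outputs from two single-modality recognizers over
# six endovascular manipulation classes (numbers are illustrative).
p_motion = np.array([0.50, 0.20, 0.10, 0.10, 0.05, 0.05])
p_force  = np.array([0.30, 0.40, 0.10, 0.10, 0.05, 0.05])

def sum_rule(ps):
    """Average the per-modality posteriors, then renormalize."""
    fused = np.mean(ps, axis=0)
    return fused / fused.sum()

def product_rule(ps):
    """Multiply the posteriors elementwise, then renormalize; this
    sharpens the decision when the modalities agree."""
    fused = np.prod(ps, axis=0)
    return fused / fused.sum()

fused_sum = sum_rule([p_motion, p_force])
fused_prod = product_rule([p_motion, p_force])
print(int(fused_sum.argmax()))   # 0: class 0 wins under the sum rule
print(int(fused_prod.argmax()))  # 0: and under the product rule
```

The sum rule is robust to one modality being noisy, while the product rule rewards consistent evidence across modalities; which performs better depends on how correlated the modalities' errors are.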
47. Learning With Privileged Multimodal Knowledge for Unimodal Segmentation
- Author
-
Cheng Chen, Yueming Jin, Quande Liu, Qi Dou, and Pheng-Ann Heng
- Subjects
Scheme (programming language) ,Modality (human–computer interaction) ,Modalities ,Radiological and Ultrasound Technology ,business.industry ,Computer science ,Inference ,Heart ,Machine learning ,computer.software_genre ,Computer Science Applications ,Multimodal learning ,Encoding (memory) ,Humans ,Segmentation ,Neural Networks, Computer ,Artificial intelligence ,Electrical and Electronic Engineering ,Set (psychology) ,business ,computer ,Software ,computer.programming_language - Abstract
Multimodal learning usually requires a complete set of modalities during inference to maintain performance. Although training data can be well prepared with multiple high-quality modalities, in many cases of clinical practice only one modality can be acquired, and important clinical evaluations have to be made from this limited single-modality information. In this work, we propose a privileged knowledge learning framework with a 'Teacher-Student' architecture, in which the complete multimodal knowledge that is only available in the training data (called privileged information) is transferred from a multimodal teacher network to a unimodal student network via both a pixel-level and an image-level distillation scheme. Specifically, for the pixel-level distillation, we introduce a regularized knowledge distillation loss which encourages the student to mimic the teacher's softened outputs in a pixel-wise manner and incorporates a regularization factor to reduce the effect of incorrect predictions from the teacher. For the image-level distillation, we propose a contrastive knowledge distillation loss which encodes image-level structured information to enrich the knowledge transfer, complementing the pixel-level distillation. We extensively evaluate our method on two different multi-class segmentation tasks, i.e., cardiac substructure segmentation and brain tumor segmentation. Experimental results on both tasks demonstrate that our privileged knowledge learning is effective in improving unimodal segmentation and outperforms previous methods.
- Published
- 2022
- Full Text
- View/download PDF
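The pixel-level distillation described above compares temperature-softened teacher and student outputs pixel by pixel. Below is a minimal NumPy sketch of such a loss (the standard KD cross-entropy with T² rescaling; the paper's regularization factor for incorrect teacher predictions is omitted).

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the class axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pixel_kd_loss(student_logits, teacher_logits, T=2.0):
    """Per-pixel cross-entropy between temperature-softened teacher and
    student predictions, averaged over the image, with the usual T*T
    gradient rescaling used in distillation."""
    p_t = softmax(teacher_logits / T, axis=0)
    log_p_s = np.log(softmax(student_logits / T, axis=0) + 1e-12)
    return float(-(p_t * log_p_s).sum(axis=0).mean() * T * T)

rng = np.random.default_rng(1)
teacher = rng.standard_normal((4, 8, 8))   # classes x H x W
matched = pixel_kd_loss(teacher, teacher)       # student agrees with teacher
mismatched = pixel_kd_loss(-teacher, teacher)   # student disagrees
print(matched < mismatched)  # True: agreement yields lower loss
```

A higher temperature T flattens the teacher's distribution, exposing the relative probabilities of non-target classes, which is exactly the "dark knowledge" the student is meant to absorb.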
48. Spatiotemporal Multimodal Learning With 3D CNNs for Video Action Recognition
- Author
-
Yibin Li, Xin Ma, and Hanbo Wu
- Subjects
Modality (human–computer interaction) ,Computer science ,business.industry ,Data stream mining ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,ENCODE ,Convolutional neural network ,Multimodal learning ,Discriminative model ,Media Technology ,RGB color model ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Pose - Abstract
Extracting effective spatial-temporal information is essential for video-based action recognition. Recently, 3D convolutional neural networks (3D CNNs), which can simultaneously encode spatial and temporal dynamics in videos, have made considerable progress in action recognition. However, almost all existing 3D CNN-based methods recognize human actions using RGB videos alone; this single modality may limit the performance capacity of 3D networks. In this paper, we extend 3D CNNs to depth and pose data in addition to RGB data to evaluate their capacity for spatiotemporal multimodal learning in video action recognition. We propose a novel multimodal two-stream 3D network framework that exploits complementary multimodal information to improve recognition performance. Specifically, we first construct two discriminative video representations from the depth and pose modalities, referred to as the depth residual dynamic image sequence (DRDIS) and the pose estimation map sequence (PEMS). DRDIS captures the spatial-temporal evolution of actions in depth videos by progressively aggregating local motion information. PEMS eliminates the interference of cluttered backgrounds and intuitively describes the spatial configuration of body parts. The multimodal two-stream 3D CNN processes the two data streams separately to learn spatiotemporal features from the DRDIS and PEMS representations. Finally, the classification scores from the two streams are fused for action recognition. We conduct extensive experiments on four challenging action recognition datasets. The experimental results verify the effectiveness and superiority of our proposed method.
- Published
- 2022
- Full Text
- View/download PDF
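The DRDIS representation above collapses local motion in a depth clip into compact images. A rough sketch of one such temporal aggregation, weighting successive frame residuals more heavily over time, is shown below; the linear weighting is an assumption for illustration, not the paper's exact construction.

```python
import numpy as np

def residual_dynamic_image(depth_frames):
    """Collapse a short depth clip (T x H x W) into one H x W image by
    taking absolute frame-to-frame residuals and averaging them with
    weights that increase linearly over time, so later motion
    contributes more (a rank-pooling-style sketch)."""
    frames = np.asarray(depth_frames, dtype=float)
    residuals = np.abs(np.diff(frames, axis=0))        # (T-1, H, W)
    weights = np.arange(1, residuals.shape[0] + 1, dtype=float)
    weights /= weights.sum()
    return np.tensordot(weights, residuals, axes=1)    # (H, W)

rng = np.random.default_rng(2)
clip = rng.random((10, 32, 32))     # 10 depth frames of 32x32 pixels
drdi = residual_dynamic_image(clip)
print(drdi.shape)  # (32, 32)
```

Sliding this aggregation over overlapping windows of a longer video would yield a sequence of such images, which is the kind of input a 3D CNN stream can then consume.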
49. Multimodal imaging in pachychoroid spectrum
- Author
-
Hamid Ahmadieh, Sare Safi, Kiana Hassanpour, and Hamid Safi
- Subjects
medicine.medical_specialty ,genetic structures ,Indocyanine green angiography ,Multimodal Imaging ,Oct angiography ,Optical coherence tomography ,Ophthalmology ,medicine ,Humans ,Fluorescein Angiography ,Retrospective Studies ,Multimodal imaging ,Retina ,medicine.diagnostic_test ,Choroid ,business.industry ,Fluorescein angiography ,Choroidal Neovascularization ,eye diseases ,Choroidal neovascularization ,medicine.anatomical_structure ,Central Serous Chorioretinopathy ,Imaging technology ,sense organs ,medicine.symptom ,business ,Tomography, Optical Coherence ,psychological phenomena and processes - Abstract
Diagnostic investigation of pachychoroid spectrum disease (PSD) has been growing along with the rapid advancement of imaging technology. In optical coherence tomography (OCT)-based studies, the choroidal thickness profile, the luminal-to-stromal choroidal ratio, and abnormalities in the neurosensory retina have demonstrated various patterns across the clinical entities related to PSD. The emerging role of OCT angiography (OCTA) has expanded to include quantitative analysis of OCTA parameters in different clinical entities of PSD and evaluation of choriocapillaris signal voids and vessel density as indicators of choriocapillaris ischemia. OCTA has broadened our knowledge of the characterization and assessment of both active and quiescent choroidal neovascularization and its association with treatment response. Recent studies using indocyanine green angiography have focused on the evaluation of choroidal vascular hyperpermeability and its relationship with other pachychoroid-related features. Ultra-widefield indocyanine green angiography enables observation and characterization of peripheral choroidal pathologies and their associations with macular abnormalities. Multicolor imaging is an emerging modality capable of demonstrating early abnormalities in PSD. We summarize the investigations offering new insights into the application of multimodal imaging for PSD and focus on novel findings observed in the different clinical entities with each imaging modality.
- Published
- 2022
- Full Text
- View/download PDF
50. Region-attentive multimodal neural machine translation
- Author
-
Yuting Zhao, Mamoru Komachi, Tomoyuki Kajiwara, and Chenhui Chu
- Subjects
Multimodal neural machine translation ,Semantic image regions ,Object detection ,Artificial Intelligence ,Cognitive Neuroscience ,Recurrent neural network ,Self-attention network ,Computer Science Applications - Abstract
We propose a multimodal neural machine translation (MNMT) method with semantic image regions, called region-attentive multimodal neural machine translation (RA-NMT). Existing studies on MNMT have mainly focused on employing global visual features or equally sized grid-based local visual features extracted by convolutional neural networks (CNNs) to improve translation performance. However, they neglect the semantic information captured inside the visual features. This study utilizes semantic image regions extracted by object detection for MNMT and integrates visual and textual features using two modality-dependent attention mechanisms. The proposed method was implemented and verified on two neural architectures for neural machine translation (NMT): the recurrent neural network (RNN) and the self-attention network (SAN). Experimental results on different language pairs of the Multi30k dataset show that our proposed method improves over the baselines and outperforms most state-of-the-art MNMT methods. Further analysis demonstrates that the proposed method achieves better translation performance because it makes better use of visual features.
- Published
- 2022
- Full Text
- View/download PDF
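The modality-dependent attention described above scores detected image regions against the decoder state at each translation step. Below is a generic dot-product attention sketch in NumPy; the projection matrices, dimensions, and scoring function are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def region_attention(query, regions, Wq, Wk):
    """Project the decoder state and the region features into a shared
    space, score each region by dot product, and return the attention
    weights plus the weighted sum (context vector) of region features."""
    scores = (regions @ Wk) @ (query @ Wq)   # one score per region
    weights = softmax(scores)
    context = weights @ regions              # convex combination of regions
    return context, weights

rng = np.random.default_rng(3)
d_txt, d_img, n_regions = 16, 12, 5
query = rng.standard_normal(d_txt)                  # decoder hidden state
regions = rng.standard_normal((n_regions, d_img))   # detected-object features
Wq = rng.standard_normal((d_txt, 8))
Wk = rng.standard_normal((d_img, 8))

context, weights = region_attention(query, regions, Wq, Wk)
print(context.shape)  # (12,)
```

Because the weights are a distribution over detected objects rather than over a uniform grid, the decoder can attend to semantically coherent regions, which is the intuition behind using object detection in RA-NMT.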