695 results for "Louis-Philippe Morency"
Search Results
152. Constrained Ensemble Initialization for Facial Landmark Tracking in Video.
153. Curriculum Learning for Facial Expression Recognition.
154. Local-Global Landmark Confidences for Face Recognition.
155. Investigating Facial Behavior Indicators of Suicidal Ideation.
156. Computational Analysis of Acoustic Descriptors in Psychotic Patients.
157. Temporal Attention-Gated Model for Robust Sequence Classification.
158. Convolutional Experts Constrained Local Model for Facial Landmark Detection.
159. Temporally Selective Attention Model for Social and Affective State Recognition in Multimedia Content.
160. Context-Dependent Sentiment Analysis in User-Generated Videos.
161. Affect-LM: A Neural Language Model for Customizable Affective Text Generation.
162. Combating Human Trafficking with Multimodal Deep Models.
163. Multi-level Multiple Attentions for Contextual Multimodal Sentiment Analysis.
164. Local-global ranking for facial expression intensity estimation.
165. Hand2Face: Automatic synthesis and recognition of hand over face occlusions.
166. Visual attention in schizophrenia: Eye contact and gaze aversion during clinical interactions.
167. Automatically predicting human knowledgeability through non-verbal cues.
168. Multimodal sentiment analysis with word-level fusion and reinforcement learning.
169. Exceptionally Social: Design of an Avatar-Mediated Interactive System for Promoting Social Skills in Children with Autism.
170. Learning Factorized Multimodal Representations.
171. Induced Attention Invariance: Defending VQA Models against Adversarial Attacks.
172. Think Locally, Act Globally: Federated Learning with Local and Global Representations.
173. Learning Not to Learn in the Presence of Noisy Labels.
174. Improving Aspect-Level Sentiment Analysis with Aspect Extraction.
175. Interpretable Multimodal Routing for Human Multimodal Language.
176. Demystifying Self-Supervised Learning: An Information-Theoretical Framework.
177. Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment.
178. Multimodal Privacy-preserving Mood Prediction from Mobile Data: A Preliminary Study.
179. MTGAT: Multimodal Temporal Graph Attention Networks for Unaligned Human Multimodal Language Sequences.
180. Unsupervised Domain Adaptation for Visual Navigation.
181. What Gives the Answer Away? Question Answering Bias Analysis on Video QA Datasets.
182. GazeDirector: Fully Articulated Eye Gaze Redirection in Video.
183. The Future Belongs to the Curious: Towards Automatic Understanding and Recognition of Curiosity in Children.
184. Unsupervised Text Recap Extraction for TV Series.
185. Riding an emotional roller-coaster: A multimodal study of young child's math problem solving activities.
186. Representation Learning for Speech Emotion Recognition.
187. Manipulating the Perception of Virtual Audiences Using Crowdsourced Behaviors.
188. Learning an appearance-based gaze estimator from one million synthesised images.
189. An unsupervised approach to glottal inverse filtering.
190. Automatic Behavior Analysis During a Clinical Interview with a Virtual Human.
191. Recognizing Human Actions in the Motion Trajectories of Shapes.
192. A 3D Morphable Eye Region Model for Gaze Estimation.
193. Extending Long Short-Term Memory for Multi-View Structured Learning.
194. Deep multimodal fusion for persuasiveness prediction.
195. EmoReact: a multimodal approach and dataset for recognizing emotional responses in children.
196. OpenFace: An open source facial behavior analysis toolkit.
197. Multimodal Machine Learning: Integrating Language, Vision and Speech.
198. PANEL: Challenges for Multimedia/Multimodal Research in the Next Decade.
199. Adolescent Suicidal Risk Assessment in Clinician-Patient Interaction.
200. MultiSense - Context-Aware Nonverbal Behavior Analysis Framework: A Psychological Distress Use Case.