1. Multimodal Methods for Analyzing Learning and Training Environments: A Systematic Literature Review
- Author
Cohn, Clayton; Davalos, Eduardo; Vatral, Caleb; Fonteles, Joyce Horn; Wang, Hanchen David; Ma, Meiyi; Biswas, Gautam
- Subjects
Computer Science - Machine Learning; Computer Science - Multimedia
- Abstract
Recent technological advancements have enhanced our ability to collect and analyze rich multimodal data (e.g., speech, video, and eye gaze) to better inform learning and training experiences. While previous reviews have focused on parts of the multimodal pipeline (e.g., conceptual models and data fusion), a comprehensive literature review on the methods informing multimodal learning and training environments has not been conducted. This literature review provides an in-depth analysis of research methods in these environments, proposing a taxonomy and framework that encapsulates recent methodological advances in this field and characterizes the multimodal domain in terms of five modality groups: Natural Language, Video, Sensors, Human-Centered, and Environment Logs. We introduce a novel data fusion category -- mid fusion -- and a graph-based technique for refining literature reviews, termed citation graph pruning. Our analysis reveals that leveraging multiple modalities offers a more holistic understanding of the behaviors and outcomes of learners and trainees. Even when multimodality does not enhance predictive accuracy, it often uncovers patterns that contextualize and elucidate unimodal data, revealing subtleties that a single modality may miss. However, there remains a need for further research to bridge the divide between multimodal learning and training studies and foundational AI research.
- Comment
Submitted to ACM Computing Surveys; currently under review.
- Published
2024
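The abstract names citation graph pruning as a graph-based step for refining the review corpus but does not describe its mechanics in this record. Below is a minimal, hypothetical Python sketch of one way such a pruning step could work, assuming a simple rule that keeps candidate papers linked (by citing or being cited) to at least a threshold number of already-accepted seed papers; the function name, threshold, and data layout are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a "citation graph pruning" step for refining a
# literature-review corpus. The paper's actual procedure is not specified in
# this record; the seed set and min_links threshold below are illustrative.
from collections import defaultdict

def prune_citation_graph(papers, citations, seeds, min_links=2):
    """Keep only candidate papers connected to at least `min_links` seed papers.

    papers    -- iterable of paper identifiers (e.g., DOIs)
    citations -- iterable of (citing, cited) identifier pairs
    seeds     -- set of identifiers already accepted into the review
    """
    # Build an undirected adjacency list from the citation pairs.
    neighbors = defaultdict(set)
    for citing, cited in citations:
        neighbors[citing].add(cited)
        neighbors[cited].add(citing)

    kept = set(seeds)
    for paper in papers:
        if paper in seeds:
            continue
        # Count how many seed papers this candidate cites or is cited by.
        if len(neighbors[paper] & seeds) >= min_links:
            kept.add(paper)
    return kept

if __name__ == "__main__":
    # Example: candidate "p3" is linked to two seeds and survives pruning;
    # "p4" is linked to only one seed and is pruned away.
    seeds = {"s1", "s2"}
    papers = ["s1", "s2", "p3", "p4"]
    citations = [("p3", "s1"), ("s2", "p3"), ("p4", "s1")]
    print(sorted(prune_citation_graph(papers, citations, seeds)))
    # -> ['p3', 's1', 's2']
```

In this toy version the threshold trades recall for precision: a higher `min_links` keeps only papers tightly connected to the existing review, while `min_links=1` would admit anything with a single citation link to a seed.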