
Can Multimodal Large Language Model Think Analogically?

Authors: Guo, Diandian; Cao, Cong; Yuan, Fangfang; Wang, Dakui; Ma, Wei; Liu, Yanbing; Fu, Jianhui
Publication Year: 2024

Abstract

Analogical reasoning, particularly in multimodal contexts, is a foundation of human perception and creativity. Multimodal Large Language Models (MLLMs) have recently sparked considerable discussion due to their emergent capabilities. In this paper, we investigate the multimodal analogical reasoning capability of MLLMs along two facets: "MLLM as an explainer" and "MLLM as a predictor". As an explainer, we focus on whether an MLLM can deeply comprehend multimodal analogical reasoning problems, and we propose a unified prompt template together with a method that harnesses the MLLM's comprehension to augment existing models. As a predictor, we ask whether an MLLM can directly solve multimodal analogical reasoning problems. Experiments show that our approach outperforms existing methods on popular datasets, providing preliminary evidence for the analogical reasoning capability of MLLMs.
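The record does not reproduce the paper's unified prompt template, so the following is only a minimal, hypothetical sketch of what a prompt for a multimodal analogy problem (A : B :: C : ?) could look like. The `<image_k>` placeholder tokens, the `build_prompt` function, and the `relation_hint` parameter are illustrative assumptions, not the authors' actual template.

```python
# Hypothetical sketch only: the paper's actual unified prompt template is not
# reproduced in this record. This illustrates the general A : B :: C : ? form
# of a multimodal analogical reasoning prompt, where <image_k> placeholders
# stand in for images supplied to an MLLM alongside the text.
from typing import Optional

ANALOGY_PROMPT = (
    "You are given an analogy problem of the form A : B :: C : ?.\n"
    "A is shown in <image_1>, B in <image_2>, and C in <image_3>.\n"
    "Step 1: Explain the relation linking A to B.\n"
    "Step 2: Apply the same relation to C and predict the missing entity.\n"
    "Answer with the entity name and a one-sentence justification."
)

def build_prompt(relation_hint: Optional[str] = None) -> str:
    """Assemble the analogy prompt, optionally appending a textual hint
    about the candidate relation (e.g., drawn from a knowledge graph)."""
    prompt = ANALOGY_PROMPT
    if relation_hint:
        prompt += f"\nHint: the relation may involve '{relation_hint}'."
    return prompt

if __name__ == "__main__":
    print(build_prompt(relation_hint="part-of"))
```

A two-step prompt of this kind separates the "explainer" role (articulating the A-to-B relation) from the "predictor" role (applying it to C), mirroring the two facets the abstract describes.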

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2411.01307
Document Type: Working Paper