
Fact: Teaching MLLMs with Faithful, Concise and Transferable Rationales

Authors :
Gao, Minghe
Chen, Shuang
Pang, Liang
Yao, Yuan
Dang, Jisheng
Zhang, Wenqiao
Li, Juncheng
Tang, Siliang
Zhuang, Yueting
Chua, Tat-Seng
Publication Year :
2024

Abstract

The remarkable performance of Multimodal Large Language Models (MLLMs) has demonstrated their strong understanding across a wide array of visual tasks. Nevertheless, their black-box reasoning processes remain opaque, leaving them uninterpretable and prone to hallucination, and their capacity for intricate compositional reasoning is limited, stalling further progress. In this work, we introduce Fact, a novel paradigm for generating multimodal rationales that are faithful, concise, and transferable for teaching MLLMs. The paradigm first uses verifiable visual programming to generate executable code, guaranteeing faithfulness and precision. It then makes the rationale more concise through a series of operations: pruning, merging, and bridging. Finally, it filters for rationales that transfer from the programming paradigm to end-to-end paradigms, guaranteeing transferability. Experiments demonstrate the superiority of our method across models of varying parameter sizes, significantly enhancing their compositional reasoning and generalization ability. Our approach also reduces hallucination owing to the tight correlation it maintains between images and text.
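To make the three-stage pipeline concrete, below is a minimal, hypothetical Python sketch. The record does not include the paper's implementation, so every name here (generate_program_rationale, prune, merge, bridge, transfers_to_end_to_end) and the toy regex-based pruning and merging heuristics are illustrative assumptions, not the authors' code.

import re
from dataclasses import dataclass, field


@dataclass
class Rationale:
    # One multimodal rationale: ordered reasoning steps for an (image, question) pair.
    question: str
    steps: list = field(default_factory=list)
    answer: str = ""


def generate_program_rationale(image, question):
    # Faithfulness: a visual-programming model would emit executable code whose
    # verified execution trace becomes the raw rationale. Stubbed with a fixed trace.
    steps = [
        "boxes = detect(image, 'dog')",
        "n = count(boxes)",
        "tmp = crop(image, boxes)",  # dead step: its output never reaches the answer
        "answer = str(n)",
    ]
    return Rationale(question=question, steps=steps, answer="2")


def prune(r):
    # Conciseness (1/3): walk the trace backwards, keeping only steps whose
    # outputs eventually feed the final answer.
    needed, kept = {"answer"}, []
    for step in reversed(r.steps):
        lhs, _, rhs = step.partition(" = ")
        if lhs.strip() in needed:
            kept.append(step)
            needed.update(re.findall(r"[A-Za-z_]\w*", rhs))
    r.steps = list(reversed(kept))
    return r


def merge(r):
    # Conciseness (2/3): inline an intermediate variable used exactly once,
    # and only by the immediately following step, fusing the two steps into one.
    steps, merged, i = r.steps, [], 0
    while i < len(steps):
        lhs, _, rhs = steps[i].partition(" = ")
        var = lhs.strip()
        uses = [j for j in range(i + 1, len(steps))
                if var in re.findall(r"[A-Za-z_]\w*", steps[j])]
        if uses == [i + 1]:
            merged.append(re.sub(rf"\b{re.escape(var)}\b", f"({rhs})", steps[i + 1]))
            i += 2
        else:
            merged.append(steps[i])
            i += 1
    r.steps = merged
    return r


def bridge(r):
    # Conciseness (3/3): close the gap between the compressed program trace and
    # the final answer with an explicit concluding statement.
    r.steps.append(f"So the answer is {r.answer}.")
    return r


def transfers_to_end_to_end(r, student):
    # Transferability: keep a rationale only if an end-to-end model (here a toy
    # callable) can follow it to the correct answer.
    return student(r.question, r.steps) == r.answer


def fact_pipeline(image, question, student):
    r = bridge(merge(prune(generate_program_rationale(image, question))))
    return r if transfers_to_end_to_end(r, student) else None


if __name__ == "__main__":
    toy_student = lambda q, steps: "2"  # stand-in for a real end-to-end MLLM
    r = fact_pipeline(image=None, question="How many dogs are there?", student=toy_student)
    if r:
        print("\n".join(r.steps))

In the actual system, the trace would come from executing generated code against the image with vision tools; the regex heuristics above merely stand in for program analysis over that verified trace.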

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2404.11129
Document Type :
Working Paper