
M$^{2}$Chat: Empowering VLM for Multimodal LLM Interleaved Text-Image Generation

Authors :
Chi, Xiaowei
Zhang, Rongyu
Jiang, Zhengkai
Liu, Yijiang
Wang, Yatian
Qi, Xingqun
Luo, Wenhan
Gao, Peng
Zhang, Shanghang
Liu, Qifeng
Guo, Yike
Publication Year :
2023

Abstract

While current LLM chatbots like GPT-4V bridge the gap between human instructions and visual representations to enable text-image generation, they still lack efficient alignment methods for high-fidelity performance on multiple downstream tasks. In this paper, we propose M$^{2}$Chat, a novel unified multimodal LLM framework for generating interleaved text-image conversations across various scenarios. Specifically, we propose an M$^{3}$Adapter that efficiently integrates granular, low-level visual information and high-level semantic features from multimodal prompts. On top of the well-aligned fused features, the M$^{3}$Adapter employs a learnable gating strategy to adaptively balance model creativity and consistency across various tasks. Moreover, to further enhance the effectiveness of the M$^{3}$Adapter while preserving coherent semantic context comprehension, we introduce a two-stage M$^{3}$FT fine-tuning strategy, which optimizes disjoint groups of parameters for image-text alignment and visual-instruction tuning, respectively. Extensive experiments demonstrate that M$^{2}$Chat surpasses state-of-the-art counterparts across diverse benchmarks, showcasing its prowess in interleaved generation, storytelling, and multimodal dialogue systems. The demo and code are available at https://mattie-e.github.io/M2Chat.github.io.
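The learnable gating strategy mentioned in the abstract can be pictured as a convex combination of the two projected feature streams, with the gate deciding how much weight to place on high-level semantics versus low-level visual detail. Below is a minimal sketch of such a gated fusion, assuming PyTorch; the module and parameter names (GatedFeatureFusion, gate_logit, low_proj, high_proj) are illustrative assumptions, not the authors' actual M$^{3}$Adapter implementation.

```python
import torch
import torch.nn as nn


class GatedFeatureFusion(nn.Module):
    """Illustrative sketch of a learnable gate that fuses low-level visual
    features with high-level semantic features (not the paper's code)."""

    def __init__(self, dim: int):
        super().__init__()
        # Project both feature streams into a shared space.
        self.low_proj = nn.Linear(dim, dim)
        self.high_proj = nn.Linear(dim, dim)
        # Learnable scalar gate; the sigmoid keeps it in (0, 1).
        self.gate_logit = nn.Parameter(torch.zeros(1))

    def forward(self, low_level: torch.Tensor, high_level: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate_logit)
        # Convex combination: g favours semantic consistency,
        # (1 - g) favours fine-grained visual detail.
        return g * self.high_proj(high_level) + (1 - g) * self.low_proj(low_level)


if __name__ == "__main__":
    fusion = GatedFeatureFusion(dim=768)
    low = torch.randn(2, 77, 768)   # e.g. fine-grained visual tokens
    high = torch.randn(2, 77, 768)  # e.g. LLM semantic embeddings
    fused = fusion(low, high)
    print(fused.shape)  # torch.Size([2, 77, 768])
```

In practice the gate could also be predicted per task or per token rather than being a single scalar; the sketch only shows the simplest case of adaptively trading off the two feature sources during training.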

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2311.17963
Document Type :
Working Paper