
MetaMorph: Multimodal Understanding and Generation via Instruction Tuning

Authors :
Tong, Shengbang
Fan, David
Zhu, Jiachen
Xiong, Yunyang
Chen, Xinlei
Sinha, Koustuv
Rabbat, Michael
LeCun, Yann
Xie, Saining
Liu, Zhuang
Publication Year :
2024

Abstract

In this work, we propose Visual-Predictive Instruction Tuning (VPiT) - a simple and effective extension to visual instruction tuning that enables a pretrained LLM to quickly morph into a unified autoregressive model capable of generating both text and visual tokens. VPiT teaches an LLM to predict discrete text tokens and continuous visual tokens from any input sequence of image and text data curated in an instruction-following format. Our empirical investigation reveals several intriguing properties of VPiT: (1) visual generation ability emerges as a natural byproduct of improved visual understanding, and can be unlocked efficiently with a small amount of generation data; (2) while we find understanding and generation to be mutually beneficial, understanding data contributes to both capabilities more effectively than generation data. Building upon these findings, we train our MetaMorph model and achieve competitive performance on both visual understanding and generation. In visual generation, MetaMorph can leverage the world knowledge and reasoning abilities gained from LLM pretraining, and overcome common failure modes exhibited by other generation models. Our results suggest that LLMs may have strong "prior" vision capabilities that can be efficiently adapted to both visual understanding and generation with a relatively simple instruction tuning process.

Comment: Project page at tsb0601.github.io/metamorph
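
To make the abstract's core idea concrete, below is a minimal sketch of what a joint objective over discrete text tokens and continuous visual tokens could look like in a PyTorch-style setup. It is an illustration based only on the abstract, not the authors' released code: the module and tensor names (VPiTHeads, vision_head, visual_targets, the cosine regression term) are hypothetical assumptions.

# Minimal sketch of a VPiT-style joint objective, assuming a PyTorch setup.
# Names such as `vision_head` and `visual_targets` are hypothetical and are
# not taken from the MetaMorph codebase.
import torch
import torch.nn as nn

class VPiTHeads(nn.Module):
    """Two output heads on top of a pretrained LLM's hidden states:
    a standard LM head for discrete text tokens and a regression head
    that predicts continuous visual tokens (e.g. vision-encoder features)."""

    def __init__(self, hidden_size: int, vocab_size: int, visual_dim: int):
        super().__init__()
        self.lm_head = nn.Linear(hidden_size, vocab_size)      # discrete text tokens
        self.vision_head = nn.Linear(hidden_size, visual_dim)  # continuous visual tokens

    def forward(self, hidden_states: torch.Tensor):
        return self.lm_head(hidden_states), self.vision_head(hidden_states)


def vpit_loss(text_logits, vision_pred, text_targets, visual_targets,
              text_mask, vision_mask, vision_weight: float = 1.0):
    """Cross-entropy on positions that should emit text, a regression loss on
    positions that should emit visual tokens; other positions are ignored."""
    ce = nn.functional.cross_entropy(
        text_logits[text_mask], text_targets[text_mask])
    # Regress continuous visual tokens; cosine distance is one plausible choice.
    cos = 1.0 - nn.functional.cosine_similarity(
        vision_pred[vision_mask], visual_targets[vision_mask], dim=-1).mean()
    return ce + vision_weight * cos

In this sketch the visual targets would be continuous embeddings from a pretrained vision encoder; mapping the predicted visual tokens back to pixels would require a separate decoder, a detail the abstract does not specify.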

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2412.14164
Document Type :
Working Paper