
Instruct-Imagen: Image Generation with Multi-modal Instruction

Authors:
Hu, Hexiang
Chan, Kelvin C. K.
Su, Yu-Chuan
Chen, Wenhu
Li, Yandong
Sohn, Kihyuk
Zhao, Yang
Ben, Xue
Gong, Boqing
Cohen, William
Chang, Ming-Wei
Jia, Xuhui
Publication Year:
2024

Abstract

This paper presents instruct-imagen, a model that tackles heterogeneous image generation tasks and generalizes across unseen tasks. We introduce *multi-modal instruction* for image generation, a task representation articulating a range of generation intents with precision. It uses natural language to amalgamate disparate modalities (e.g., text, edge, style, subject), such that abundant generation intents can be standardized in a uniform format. We then build instruct-imagen by fine-tuning a pre-trained text-to-image diffusion model with a two-stage framework. First, we adapt the model using retrieval-augmented training, to enhance the model's ability to ground its generation in external multi-modal context. Subsequently, we fine-tune the adapted model on diverse image generation tasks that require vision-language understanding (e.g., subject-driven generation), each paired with a multi-modal instruction encapsulating the task's essence. Human evaluation on various image generation datasets reveals that instruct-imagen matches or surpasses prior task-specific models in-domain and demonstrates promising generalization to unseen and more complex tasks.

Comment: 20 pages, 18 figures
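The abstract's notion of a multi-modal instruction — a natural-language directive that ties together heterogeneous conditions such as a style reference, subject photos, or an edge map — can be pictured as a small data structure. The sketch below is purely illustrative: the class name, fields, and tag-based referencing scheme are assumptions made for exposition, not the paper's actual format.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical sketch of a "multi-modal instruction" as data.
# Field names and the tag-based referencing are illustrative assumptions,
# not the format used by instruct-imagen.

@dataclass
class MultiModalInstruction:
    # Natural-language instruction that references attached conditions by tag,
    # e.g. "Render the subject in [subject] using the style of [style]."
    text: str
    # Conditioning inputs keyed by the tag used in the text (e.g. file paths
    # to a style reference image, subject photos, or an edge map).
    conditions: Dict[str, str] = field(default_factory=dict)

    def render_prompt(self) -> str:
        """Flatten the instruction for logging/inspection (illustrative only)."""
        parts = [self.text]
        for tag, source in self.conditions.items():
            parts.append(f"  [{tag}] <- {source}")
        return "\n".join(parts)


# Example: a subject-driven generation intent expressed in one uniform format.
instruction = MultiModalInstruction(
    text=(
        "Generate an image of the dog in [subject] hiking in the mountains, "
        "rendered in the watercolor style of [style]."
    ),
    conditions={"subject": "dog_photo_1.png", "style": "watercolor_ref.png"},
)
print(instruction.render_prompt())
```

The point of such a representation, as the abstract describes, is that very different generation tasks (style transfer, subject-driven generation, edge-conditioned generation) all reduce to one instruction format the model can be trained on uniformly.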

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2401.01952
Document Type:
Working Paper