
Multimodal Markup Document Models for Graphic Design Completion

Authors:
Kikuchi, Kotaro
Inoue, Naoto
Otani, Mayu
Simo-Serra, Edgar
Yamaguchi, Kota
Publication Year:
2024

Abstract

This paper presents multimodal markup document models (MarkupDM) that can generate both markup language and images within interleaved multimodal documents. Unlike existing vision-and-language multimodal models, our MarkupDM tackles unique challenges critical to graphic design tasks: generating partial images that contribute to the overall appearance, often involving transparency and varying sizes, and understanding the syntax and semantics of markup languages, which play a fundamental role as a representational format of graphic designs. To address these challenges, we design an image quantizer to tokenize images of diverse sizes with transparency and modify a code language model to process markup languages and incorporate image modalities. We provide in-depth evaluations of our approach on three graphic design completion tasks: generating missing attribute values, images, and texts in graphic design templates. Results corroborate the effectiveness of our MarkupDM for graphic design tasks. We also discuss the strengths and weaknesses in detail, providing insights for future research on multimodal document generation.

Comment: Project page: https://cyberagentailab.github.io/MarkupDM/
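The abstract describes interleaving markup tokens with quantized image tokens in a single sequence. A minimal toy sketch of that idea is below; the function name, the `<img>`/`</img>` sentinel tokens, and the whitespace-based markup tokenization are illustrative assumptions, not the paper's actual tokenizer or vocabulary.

```python
# Hypothetical sketch of an interleaved multimodal token stream.
# An image quantizer (not shown) is assumed to map each image to a
# short list of integer codebook ids.

def tokenize_document(elements):
    """Flatten (kind, payload) pairs into one token stream.

    Markup strings are split on whitespace; image codebook ids are
    wrapped in sentinel tokens so a language model can distinguish
    modalities within the same sequence.
    """
    tokens = []
    for kind, payload in elements:
        if kind == "markup":
            tokens.extend(payload.split())
        elif kind == "image":
            tokens.append("<img>")
            tokens.extend(f"#{code}" for code in payload)
            tokens.append("</img>")
    return tokens

doc = [
    ("markup", '<svg><rect fill="blue"/>'),
    ("image", [12, 7, 99]),  # codebook ids from an assumed quantizer
    ("markup", "</svg>"),
]
print(tokenize_document(doc))
```

A completion task then amounts to masking part of this stream (an attribute value, an image span, or a text run) and asking the model to predict the missing tokens.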

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.19051
Document Type:
Working Paper