1. VASCAR: Content-Aware Layout Generation via Visual-Aware Self-Correction
- Authors
Zhang, Jiahao; Yoshihashi, Ryota; Kitada, Shunsuke; Osanai, Atsuki; Nakashima, Yuta
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Large language models (LLMs) have proven effective for layout generation due to their ability to produce structure-description languages, such as HTML or JSON, even without access to visual information. Recently, LLM providers have evolved these models into large vision-language models (LVLMs), which show prominent multi-modal understanding capabilities. How, then, can we leverage this multi-modal power for layout generation? To answer this, we propose Visual-Aware Self-Correction LAyout GeneRation (VASCAR) for LVLM-based content-aware layout generation. In our method, LVLMs iteratively refine their outputs with reference to rendered layout images, which are visualized as colored bounding boxes on poster backgrounds. In experiments, we demonstrate the effectiveness of our method combined with Gemini. Without any additional training, VASCAR achieves state-of-the-art (SOTA) layout generation quality, outperforming both existing layout-specific generative models and other LLM-based methods.
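The abstract describes an iterative loop in which an LVLM proposes a layout, the layout is rendered as colored bounding boxes on the poster background, and the rendered image is fed back for the next refinement step. The sketch below illustrates that loop in Python; `call_lvlm` is a hypothetical wrapper around an LVLM API (e.g., Gemini), and the JSON layout schema and prompts are assumptions for illustration, not the paper's exact implementation.

```python
# Minimal sketch of a visual-aware self-correction loop, assuming a
# hypothetical `call_lvlm(prompt, image)` wrapper around an LVLM API
# (e.g., Gemini) and a simple JSON layout schema; not the paper's code.
import json
from PIL import Image, ImageDraw

ELEMENT_COLORS = {"title": "red", "text": "blue", "logo": "green"}

def render_layout(background: Image.Image, layout: list[dict]) -> Image.Image:
    """Draw each element as a colored bounding box on the poster background."""
    canvas = background.copy()
    draw = ImageDraw.Draw(canvas)
    for elem in layout:
        x, y, w, h = elem["x"], elem["y"], elem["w"], elem["h"]
        color = ELEMENT_COLORS.get(elem["type"], "black")
        draw.rectangle([x, y, x + w, y + h], outline=color, width=3)
    return canvas

def self_correction_loop(background: Image.Image, constraints: str,
                         call_lvlm, num_iters: int = 5) -> list[dict]:
    """Iteratively ask the LVLM to refine its own layout given the rendering."""
    prompt = f"Generate a poster layout as a JSON list of boxes for: {constraints}"
    layout = json.loads(call_lvlm(prompt, background))
    for _ in range(num_iters):
        rendered = render_layout(background, layout)
        feedback_prompt = (
            "Your previous layout is rendered as colored boxes on the poster "
            "background. Fix overlaps, misalignment, and occlusion of salient "
            "content, then return the revised layout as JSON."
        )
        layout = json.loads(call_lvlm(feedback_prompt, rendered))
    return layout
```

In this sketch the rendered image, rather than only the textual layout description, is what the model sees when revising, which is the "visual-aware" feedback signal the abstract refers to.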
- Published
2024