
Unlocking the Capabilities of Masked Generative Models for Image Synthesis via Self-Guidance

Authors :
Hur, Jiwan
Lee, Dong-Jae
Han, Gyojin
Choi, Jaehyun
Jeon, Yunho
Kim, Junmo
Publication Year :
2024

Abstract

Masked generative models (MGMs) have shown impressive generative ability while requiring an order of magnitude fewer sampling steps than continuous diffusion models. However, MGMs still underperform in image synthesis compared to recent well-developed continuous diffusion models of similar size, in terms of the quality and diversity of generated samples. A key factor in the performance of continuous diffusion models is their guidance methods, which enhance sample quality at the expense of diversity. In this paper, we extend these guidance methods to a generalized guidance formulation for MGMs and propose a self-guidance sampling method that yields better generation quality. The proposed approach leverages an auxiliary task for semantic smoothing in the vector-quantized token space, analogous to Gaussian blur in continuous pixel space. Equipped with a parameter-efficient fine-tuning method and high-temperature sampling, MGMs with the proposed self-guidance achieve a superior quality-diversity trade-off, outperforming existing MGM sampling methods at lower training and sampling costs. Extensive experiments across various sampling hyperparameters confirm the effectiveness of the proposed self-guidance.

Comment: NeurIPS 2024. Code is available at: https://github.com/JiwanHur/UnlockMGM
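The generalized guidance the abstract describes follows the familiar pattern of extrapolating a model's predictions away from a "weaker" auxiliary predictor (here, the semantically smoothed one), combined with a sampling temperature. The sketch below is a minimal illustration of that pattern over token logits, not the authors' implementation; the function names, the linear-extrapolation form, and the guidance scale `w` are assumptions for illustration only.

```python
import numpy as np

def guided_logits(cond_logits, weak_logits, w):
    """Guidance by extrapolation: push the main model's token logits
    away from those of a weaker auxiliary predictor.

    cond_logits: logits from the main masked generative model
    weak_logits: logits from the auxiliary (smoothed) predictor
    w: guidance scale; w = 0 recovers the unguided model
    """
    return cond_logits + w * (cond_logits - weak_logits)

def softmax(logits, tau=1.0):
    """Temperature-scaled softmax over the token vocabulary.

    tau > 1 flattens the distribution (high-temperature sampling,
    which trades quality back toward diversity).
    """
    z = (logits - logits.max(axis=-1, keepdims=True)) / tau
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

Intuitively, tokens the main model prefers more strongly than the auxiliary model get boosted (sharpening quality), while high-temperature sampling partially restores diversity, which is the trade-off the paper tunes.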

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.13136
Document Type :
Working Paper