Optimizing Prompts Using In-Context Few-Shot Learning for Text-to-Image Generative Models
- Authors
Seunghun Lee, Jihoon Lee, Chan Ho Bae, Myung-Seok Choi, Ryong Lee, and Sangtae Ahn
- Subjects
In-context few-shot learning, pretrained language model, prompt optimization, text-to-image generation
- Abstract
Recently, various text-to-image generative models have been released, demonstrating their ability to generate high-quality synthesized images from text prompts. Despite these advancements, determining the appropriate text prompts to obtain desired images remains challenging. The quality of the synthesized images heavily depends on the user input, making it difficult to achieve consistent and satisfactory results. This limitation has created a need for an effective method that automatically generates optimized text prompts for text-to-image generative models. Thus, this study proposes a prompt optimization method that uses in-context few-shot learning in a pretrained language model. The proposed approach generates optimized text prompts to guide the image synthesis process by leveraging the contextual information available in a few text examples. The results revealed that images synthesized using the proposed prompt optimization method achieved an 18% higher performance on average, based on an evaluation metric that measures the similarity between the generated images and the prompts used for generation. The significance of this research lies in its potential to provide a more efficient and automated approach to obtaining high-quality synthesized images. The findings indicate that prompt optimization may offer a promising pathway for text-to-image generative models.
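The abstract describes assembling a few example prompt pairs as context so that a pretrained language model rewrites a plain user prompt into an optimized one. A minimal sketch of that assembly step is shown below; the example pairs and the instruction template are hypothetical placeholders, not the authors' actual data, and the language-model call itself is left abstract.

```python
# Hypothetical few-shot examples pairing a plain prompt with an
# "optimized" text-to-image prompt (illustrative only).
FEW_SHOT_EXAMPLES = [
    ("a cat", "a photorealistic cat, sharp focus, studio lighting, fine detail"),
    ("a castle", "a medieval castle on a cliff at sunset, dramatic clouds, concept art"),
]

def build_optimization_context(user_prompt: str) -> str:
    """Assemble an in-context few-shot prompt that asks a pretrained
    language model to rewrite a plain prompt into an optimized one."""
    lines = ["Rewrite each plain prompt as a detailed text-to-image prompt.", ""]
    for plain, optimized in FEW_SHOT_EXAMPLES:
        lines.append(f"Plain: {plain}")
        lines.append(f"Optimized: {optimized}")
        lines.append("")
    # The model's completion after the final "Optimized:" marker is the
    # candidate prompt passed to the text-to-image generator.
    lines.append(f"Plain: {user_prompt}")
    lines.append("Optimized:")
    return "\n".join(lines)

print(build_optimization_context("a dog"))
```

The assembled context would then be sent to any text-completion model, and the returned completion used as the input prompt for the text-to-image generative model; evaluation would compare image-prompt similarity with and without optimization, as the abstract describes.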
- Published
- 2024