
Synth$^2$: Boosting Visual-Language Models with Synthetic Captions and Image Embeddings

Authors:
Sharifzadeh, Sahand
Kaplanis, Christos
Pathak, Shreya
Kumaran, Dharshan
Ilic, Anastasija
Mitrovic, Jovana
Blundell, Charles
Banino, Andrea
Publication Year:
2024

Abstract

The creation of high-quality human-labeled image-caption datasets presents a significant bottleneck in the development of Visual-Language Models (VLMs). In this work, we investigate an approach that leverages the strengths of Large Language Models (LLMs) and image generation models to create synthetic image-text pairs for efficient and effective VLM training. Our method employs a pretrained text-to-image model to synthesize image embeddings from captions generated by an LLM. Although the text-to-image model and the VLM are initially trained on the same data, our approach exploits the image generator's ability to create novel compositions, yielding synthetic image embeddings that expand beyond the limitations of the original dataset. Extensive experiments demonstrate that our VLM, finetuned on synthetic data, achieves performance comparable to models trained solely on human-annotated data while requiring significantly less data. Furthermore, we perform a set of analyses on the captions, which reveal that semantic diversity and balance are key to better downstream performance. Finally, we show that synthesizing images in the image-embedding space is 25% faster than in pixel space. We believe our work not only addresses a significant challenge in VLM training but also opens up promising avenues for the development of self-improving multi-modal models.

Comment: 9 pages, 6 figures
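The abstract describes a three-stage pipeline: an LLM writes captions, a pretrained text-to-image model maps each caption to a synthetic image embedding (stopping before any pixel-space decoding), and the VLM is then trained on the resulting (embedding, caption) pairs. The sketch below illustrates that data flow only; all components are toy stand-ins, and the class names, dimensions, and captioning loss are illustrative assumptions rather than the paper's actual architecture.

```python
# Minimal sketch of a Synth^2-style pipeline, under assumed toy components:
# (1) LLM -> captions, (2) text-to-image model -> image EMBEDDINGS (not pixels),
# (3) VLM trained on the synthetic (embedding, caption) pairs.
import torch
import torch.nn as nn

EMBED_DIM = 512      # assumed image-embedding width
VOCAB_SIZE = 1000    # assumed toy vocabulary
MAX_LEN = 16         # assumed caption length


class CaptionGenerator:
    """Stand-in for the LLM that produces synthetic captions (token ids here)."""
    def sample(self, batch_size: int) -> torch.Tensor:
        return torch.randint(0, VOCAB_SIZE, (batch_size, MAX_LEN))


class TextToImageEmbedder(nn.Module):
    """Stand-in for the pretrained text-to-image generator, stopped at the
    embedding stage so no pixel-space decoding is performed."""
    def __init__(self):
        super().__init__()
        self.token_embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.proj = nn.Linear(EMBED_DIM, EMBED_DIM)

    @torch.no_grad()
    def forward(self, captions: torch.Tensor) -> torch.Tensor:
        # Mean-pool token embeddings and project to a synthetic image embedding.
        return self.proj(self.token_embed(captions).mean(dim=1))


class ToyVLM(nn.Module):
    """Stand-in VLM: predicts caption tokens conditioned on an image embedding."""
    def __init__(self):
        super().__init__()
        self.img_proj = nn.Linear(EMBED_DIM, EMBED_DIM)
        self.lm_head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, image_embed: torch.Tensor) -> torch.Tensor:
        # One logit distribution per caption position, conditioned on the embedding.
        h = torch.tanh(self.img_proj(image_embed))
        return self.lm_head(h).unsqueeze(1).expand(-1, MAX_LEN, -1)


def train_step(vlm, optimizer, captions, image_embeds):
    """One captioning-loss step on synthetic (image embedding, caption) pairs."""
    logits = vlm(image_embeds)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB_SIZE), captions.reshape(-1)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    llm, t2i, vlm = CaptionGenerator(), TextToImageEmbedder(), ToyVLM()
    optimizer = torch.optim.Adam(vlm.parameters(), lr=1e-3)
    captions = llm.sample(batch_size=8)    # step 1: synthetic captions
    image_embeds = t2i(captions)           # step 2: synthetic image embeddings
    print("loss:", train_step(vlm, optimizer, captions, image_embeds))  # step 3
```

Working in the embedding space, as the abstract notes, skips the costly pixel-space decoding and re-encoding, which is where the reported 25% speedup comes from.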

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2403.07750
Document Type:
Working Paper