
Can Medical Vision-Language Pre-training Succeed with Purely Synthetic Data?

Authors:
Liu, Che
Wan, Zhongwei
Wang, Haozhe
Chen, Yinda
Qaiser, Talha
Jin, Chen
Yousefi, Fariba
Burlutskiy, Nikolay
Arcucci, Rossella
Publication Year: 2024

Abstract

Medical Vision-Language Pre-training (MedVLP) has made significant progress in enabling zero-shot tasks for medical image understanding. However, training MedVLP models typically requires large-scale datasets with paired, high-quality image-text data, which are scarce in the medical domain. Recent advances in Large Language Models (LLMs) and diffusion models have made it possible to generate large-scale synthetic image-text pairs. This raises the question: "Can MedVLP succeed using purely synthetic data?" To address this, we use off-the-shelf generative models to create synthetic radiology reports and paired Chest X-ray (CXR) images, and propose an automated pipeline to build a diverse, high-quality synthetic dataset, enabling a rigorous study that holds model and training settings fixed and focuses entirely on the data perspective. Our results show that MedVLP models trained exclusively on synthetic data outperform those trained on real data by 3.8% in average AUC on zero-shot classification. Moreover, combining synthetic and real data yields a further improvement of 9.07%. Additionally, MedVLP models trained on synthetic or mixed data consistently outperform those trained on real data in zero-shot grounding, as well as in fine-tuned classification and segmentation tasks. Our analysis suggests that MedVLP trained on well-designed synthetic data can outperform models trained on real datasets, which may be limited by low-quality samples and long-tailed distributions.

Comment: Under Review
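The abstract describes a two-stage generation pipeline: an LLM produces a synthetic radiology report, and a text-to-image diffusion model synthesizes a paired CXR image. The following is a minimal sketch of that idea using off-the-shelf Hugging Face components; the model identifiers, prompt, and filtering step are placeholders for illustration only, not the components used in the paper.

```python
# Hypothetical sketch of a report -> image synthetic-pair generator.
# Model names below are generic placeholders, not the paper's choices.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

# 1) Generate a synthetic radiology report with an off-the-shelf LLM.
report_generator = pipeline(
    "text-generation",
    model="gpt2",  # placeholder; a stronger medical LLM would be used in practice
)
prompt = "Write a chest X-ray radiology report describing cardiomegaly:"
report = report_generator(prompt, max_new_tokens=128)[0]["generated_text"]

# 2) Synthesize a paired CXR image with a text-to-image diffusion model.
#    Note: the CLIP text encoder truncates prompts beyond 77 tokens, so a
#    CXR-tuned model and a condensed report would be needed in practice.
image_generator = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder; not a CXR-specific model
    torch_dtype=torch.float16,
).to("cuda")
image = image_generator(report).images[0]

# 3) The (image, report) pair would then pass through automated quality
#    and diversity filtering before joining the synthetic pre-training corpus.
image.save("synthetic_cxr.png")
```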

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2410.13523
Document Type: Working Paper