
Benchmarking Vision-Language Contrastive Methods for Medical Representation Learning

Authors:
Roy, Shuvendu
Parhizkar, Yasaman
Ogidi, Franklin
Khazaie, Vahid Reza
Colacci, Michael
Etemad, Ali
Dolatabadi, Elham
Afkanpour, Arash
Publication Year:
2024

Abstract

We perform a comprehensive benchmarking of contrastive frameworks for learning multimodal representations in the medical domain. Through this study, we aim to answer the following research questions: (i) How transferable are general-domain representations to the medical domain? (ii) Is multimodal contrastive training sufficient, or does it also benefit from unimodal training? (iii) What is the impact of feature granularity on the effectiveness of multimodal medical representation learning? To answer these questions, we investigate eight contrastive learning approaches under identical training setups, training them on 2.8 million image-text pairs from four datasets and evaluating them on 25 downstream tasks, including classification (zero-shot and linear probing), image-to-text and text-to-image retrieval, and visual question answering. Our findings suggest a positive answer to the first question, a negative answer to the second, and a clear benefit to learning fine-grained features. Finally, we make our code publicly available.
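For context, the multimodal contrastive training discussed above typically optimizes a CLIP-style symmetric InfoNCE objective over paired image and text embeddings. The sketch below illustrates that general recipe; it is an assumption about the standard formulation, not the paper's released code, and all names (clip_contrastive_loss, image_features, text_features, temperature) are hypothetical.

    # Minimal sketch of a CLIP-style symmetric contrastive (InfoNCE) loss.
    # Illustrative only; names and defaults are assumptions, not from the paper.
    import torch
    import torch.nn.functional as F

    def clip_contrastive_loss(image_features: torch.Tensor,
                              text_features: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
        # L2-normalize so dot products are cosine similarities.
        image_features = F.normalize(image_features, dim=-1)
        text_features = F.normalize(text_features, dim=-1)

        # Pairwise similarity logits for the batch, scaled by temperature.
        logits = image_features @ text_features.t() / temperature

        # The i-th image matches the i-th text: targets are the diagonal.
        targets = torch.arange(logits.size(0), device=logits.device)

        # Cross-entropy in both directions (image-to-text and text-to-image).
        loss_i2t = F.cross_entropy(logits, targets)
        loss_t2i = F.cross_entropy(logits.t(), targets)
        return (loss_i2t + loss_t2i) / 2

Under this objective, each image embedding is pulled toward its paired text embedding and pushed away from every other text in the batch, and vice versa, which is what makes the learned features usable for zero-shot classification and cross-modal retrieval.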

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2406.07450
Document Type: Working Paper