
Unified Medical Image Pre-training in Language-Guided Common Semantic Space

Authors :
He, Xiaoxuan
Yang, Yifan
Jiang, Xinyang
Luo, Xufang
Hu, Haoji
Zhao, Siyun
Li, Dongsheng
Yang, Yuqing
Qiu, Lili
Publication Year :
2023

Abstract

Vision-Language Pre-training (VLP) has shown its merits for analysing medical images by leveraging the semantic congruence between medical images and their corresponding reports. It efficiently learns visual representations, which in turn facilitate the analysis and interpretation of intricate imaging data. However, this has been demonstrated predominantly on single-modality data (mostly 2D images such as X-rays), and adapting VLP to learn unified representations for medical images in real-world scenarios remains an open challenge. The difficulty arises because medical images span a variety of modalities, especially modalities with different numbers of dimensions (e.g., 3D images such as Computed Tomography). To overcome these challenges, we propose a Unified Medical Image Pre-training framework, UniMedI, which uses diagnostic reports as a common semantic space to create unified representations for diverse modalities of medical images (especially 2D and 3D images). Under the text's guidance, we effectively uncover modality-specific visual information, identifying the affected areas in 2D X-rays and the lesion-containing slices in complex 3D CT scans, ultimately enhancing consistency across medical imaging modalities. To demonstrate the effectiveness and versatility of UniMedI, we evaluate its performance on both 2D and 3D images across 10 datasets, covering a wide range of medical imaging tasks such as classification, segmentation, and retrieval. UniMedI achieves superior performance on these downstream tasks, showcasing its effectiveness in establishing a universal medical visual representation.

Comment: arXiv admin note: text overlap with arXiv:2210.06044 by other authors
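The abstract describes the general idea (a shared text-defined semantic space aligning 2D and 3D image encoders) but not the implementation. The following is a minimal sketch, not the authors' code, assuming a CLIP-style contrastive setup with a hypothetical `Unified2D3DEncoder` and stand-in report embeddings; all module names, dimensions, and the loss choice are illustrative assumptions.

```python
# Minimal sketch (not UniMedI's actual implementation) of language-guided
# pre-training that maps 2D and 3D medical images into a common semantic
# space defined by report embeddings. All names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Unified2D3DEncoder(nn.Module):
    """Toy vision encoder: a 2D branch for X-rays and a 3D branch for CT
    volumes, both projected into the same embedding space as report text."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.enc2d = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, embed_dim))
        self.enc3d = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, embed_dim))

    def forward(self, x):
        # Route by dimensionality: (B, 1, H, W) -> 2D branch,
        # (B, 1, D, H, W) -> 3D branch.
        return self.enc2d(x) if x.dim() == 4 else self.enc3d(x)

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning image and report embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(img_emb.size(0))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

if __name__ == "__main__":
    model = Unified2D3DEncoder()
    xray = torch.randn(4, 1, 64, 64)      # batch of 2D X-rays
    ct = torch.randn(4, 1, 16, 64, 64)    # batch of 3D CT volumes
    report_emb = torch.randn(4, 128)      # stand-in for frozen text-encoder report embeddings
    loss = clip_style_loss(model(xray), report_emb) + clip_style_loss(model(ct), report_emb)
    print(loss.item())
```

Because both branches are trained against the same report embeddings, the text acts as the common semantic space; the paper's text-guided selection of affected 2D regions and lesion-containing 3D slices is not shown here.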

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2311.14851
Document Type :
Working Paper