
Multi-modal Masked Siamese Network Improves Chest X-Ray Representation Learning

Authors :
Shurrab, Saeed
Guerra-Manzanares, Alejandro
Shamout, Farah E.
Publication Year :
2024

Abstract

Self-supervised learning methods for medical images primarily rely on the imaging modality during pretraining. While such approaches deliver promising results, they do not leverage associated patient or scan information collected within Electronic Health Records (EHR). Here, we propose to incorporate EHR data during self-supervised pretraining with a Masked Siamese Network (MSN) to enhance the quality of chest X-ray representations. We investigate three types of EHR data, including demographic, scan metadata, and inpatient stay information. We evaluate our approach on three publicly available chest X-ray datasets, MIMIC-CXR, CheXpert, and NIH-14, using two vision transformer (ViT) backbones, specifically ViT-Tiny and ViT-Small. In assessing the quality of the representations via linear evaluation, our proposed method demonstrates significant improvement compared to vanilla MSN and state-of-the-art self-supervised pretraining baselines. Our work highlights the potential of EHR-enhanced self-supervised pretraining for medical imaging. The code is publicly available at: https://github.com/nyuad-cai/CXR-EHR-MSN

Comment: Under review
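To make the core idea concrete, the sketch below shows one plausible way to fuse an EHR feature vector with image embeddings inside an MSN-style objective: embeddings of the masked (anchor) and unmasked (target) views are each concatenated with the patient's EHR features, matched against a shared prototype bank, and trained with a cross-entropy between the two cluster-assignment distributions. This is an illustrative assumption, not the authors' implementation; all function names, shapes, and the concatenation-based fusion are hypothetical, and the actual details live in the linked repository.

```python
import numpy as np

def softmax(z, tau):
    """Temperature-scaled softmax over the last axis."""
    z = z / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msn_multimodal_loss(img_anchor, img_target, ehr, prototypes,
                        tau_anchor=0.1, tau_target=0.025):
    """MSN-style loss on joint image+EHR embeddings (illustrative only).

    img_anchor: (B, D) ViT embedding of the masked view
    img_target: (B, D) ViT embedding of the unmasked view
    ehr:        (B, E) EHR feature vector (e.g. demographics, scan metadata)
    prototypes: (K, D+E) learnable prototype matrix
    """
    # Fuse modalities by concatenating EHR features onto each view's embedding
    za = np.concatenate([img_anchor, ehr], axis=1)
    zt = np.concatenate([img_target, ehr], axis=1)
    # L2-normalize embeddings and prototypes before matching
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zt = zt / np.linalg.norm(zt, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    # Anchor predictions use a higher temperature; target assignments are sharper
    pa = softmax(za @ p.T, tau_anchor)
    pt = softmax(zt @ p.T, tau_target)
    # Cross-entropy between target and anchor cluster assignments
    return float(-(pt * np.log(pa + 1e-12)).sum(axis=1).mean())
```

In a real pipeline the gradients of this loss would update the ViT backbone and the prototypes jointly; the linear-evaluation protocol mentioned in the abstract would then freeze the backbone and train only a linear classifier on its features.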

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2407.04449
Document Type :
Working Paper