
Knowledge-enhanced Multimodal ECG Representation Learning with Arbitrary-Lead Inputs

Authors:
Liu, Che
Ouyang, Cheng
Wan, Zhongwei
Wang, Haozhe
Bai, Wenjia
Arcucci, Rossella
Publication Year:
2025

Abstract

Recent advances in multimodal ECG representation learning center on aligning ECG signals with paired free-text reports. However, suboptimal alignment persists due to the complexity of medical language and the reliance on a full 12-lead setup, which is often unavailable in under-resourced settings. To tackle these issues, we propose K-MERL, a knowledge-enhanced multimodal ECG representation learning framework. K-MERL leverages large language models to extract structured knowledge from free-text reports and employs a lead-aware ECG encoder with dynamic lead masking to accommodate arbitrary lead inputs. Evaluations on six external ECG datasets show that K-MERL achieves state-of-the-art performance in zero-shot classification and linear probing tasks, while delivering an average 16% AUC improvement over existing methods in partial-lead zero-shot classification.
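The abstract's notion of handling arbitrary lead subsets via dynamic lead masking can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, array shapes, and lead ordering are assumptions chosen for illustration only:

```python
import numpy as np

def dynamic_lead_mask(ecg, available_leads, num_leads=12):
    """Illustrative masking of an ECG tensor to a subset of leads.

    ecg: array of shape (num_leads, num_samples), standard 12-lead order assumed.
    available_leads: indices of leads actually recorded.
    Returns the masked signal (unavailable leads zeroed) and a boolean
    availability mask that a lead-aware encoder could consume.
    """
    mask = np.zeros(num_leads, dtype=bool)
    mask[available_leads] = True
    masked = ecg.copy()
    masked[~mask] = 0.0  # zero out leads the model must not attend to
    return masked, mask

# Example: a hypothetical recording where only leads I, II, and V2 are present
ecg = np.random.randn(12, 5000)
masked, mask = dynamic_lead_mask(ecg, available_leads=[0, 1, 7])
```

In a training loop, randomly varying `available_leads` per batch would expose the encoder to many partial-lead configurations, which is the general idea behind training for arbitrary-lead inputs.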

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2502.17900
Document Type:
Working Paper