
EyeFound: A Multimodal Generalist Foundation Model for Ophthalmic Imaging

Authors:
Shi, Danli
Zhang, Weiyi
Chen, Xiaolan
Liu, Yexin
Yang, Jiancheng
Huang, Siyu
Tham, Yih Chung
Zheng, Yingfeng
He, Mingguang
Publication Year: 2024

Abstract

Artificial intelligence (AI) is vital in ophthalmology, tackling tasks such as diagnosis, classification, and visual question answering (VQA). However, existing AI models in this domain often require extensive annotation and are task-specific, limiting their clinical utility. While recent developments have brought about foundation models for ophthalmology, they are limited by the need to train separate weights for each imaging modality, preventing a comprehensive representation of multimodal features. This highlights the need for versatile foundation models capable of handling various tasks and modalities in ophthalmology. To address this gap, we present EyeFound, a multimodal foundation model for ophthalmic images. Unlike existing models, EyeFound learns generalizable representations from unlabeled multimodal retinal images, enabling efficient model adaptation across multiple applications. Trained on 2.78 million images from 227 hospitals across 11 ophthalmic modalities, EyeFound facilitates generalist representations and diverse multimodal downstream tasks, even for detecting challenging rare diseases. It outperforms the previous foundation model RETFound in diagnosing eye diseases, predicting systemic disease incidents, and zero-shot multimodal VQA. EyeFound provides a generalizable solution to improve model performance and lessen the annotation burden on experts, facilitating widespread clinical AI applications for retinal imaging.

Comment: 21 pages, 2 figures, 4 tables
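The abstract does not specify EyeFound's pretraining objective, but the core idea it states — learning representations from unlabeled images so that no expert annotation is needed — is commonly realized with masked-image modeling (as used by RETFound, the baseline it compares against). The sketch below is purely illustrative of that general self-supervised recipe, not the paper's actual method: all function names and the zero-valued stand-in predictor are hypothetical.

```python
import random

def mask_patches(n_patches, mask_ratio=0.75, seed=0):
    """Randomly choose which image patches to hide (illustrative of
    masked-image pretraining; not EyeFound's documented procedure)."""
    rng = random.Random(seed)
    idx = list(range(n_patches))
    rng.shuffle(idx)
    n_mask = int(n_patches * mask_ratio)
    return idx[n_mask:], idx[:n_mask]  # (visible, masked)

def reconstruction_loss(patches, predicted, masked_idx):
    """Mean squared error over the hidden patches only -- the
    self-supervised target, requiring no labels at all."""
    total, count = 0.0, 0
    for i in masked_idx:
        for a, b in zip(patches[i], predicted[i]):
            total += (a - b) ** 2
            count += 1
    return total / count

# Toy "image": 16 patches of 4 pixel values each; no annotations needed.
patches = [[random.Random(17 * i + j).random() for j in range(4)]
           for i in range(16)]
visible, masked = mask_patches(len(patches))

# A real encoder-decoder would predict the hidden patches from the
# visible ones; a zero predictor stands in for an untrained network here.
predicted = [[0.0] * 4 for _ in patches]
loss = reconstruction_loss(patches, predicted, masked)
```

Because the loss depends only on the images themselves, a model trained this way can be adapted downstream (disease diagnosis, incident prediction, VQA) with far fewer labeled examples, which is the efficiency claim the abstract makes.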

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2405.11338
Document Type: Working Paper