
GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI

Authors:
Li, Tianbin
Su, Yanzhou
Li, Wei
Fu, Bin
Chen, Zhe
Huang, Ziyan
Wang, Guoan
Ma, Chenglong
Chen, Ying
Hu, Ming
Li, Yanjun
Chen, Pengcheng
Hu, Xiaowei
Deng, Zhongying
Ji, Yuanfeng
Ye, Jin
Qiao, Yu
He, Junjun
Publication Year:
2024

Abstract

Despite significant advancements in general artificial intelligence models such as GPT-4, their effectiveness in the medical domain (general medical AI, GMAI) remains constrained by the absence of specialized medical knowledge. To address this challenge, we present GMAI-VL-5.5M, a comprehensive multimodal medical dataset created by converting hundreds of specialized medical datasets into meticulously constructed image-text pairs. The dataset features comprehensive task coverage, diverse modalities, and high-quality image-text data. Building on this dataset, we propose GMAI-VL, a general medical vision-language model trained with a progressive three-stage strategy. By integrating visual and textual information, this approach significantly improves the model's ability to process multimodal data and to support accurate diagnosis and clinical decision-making. Experimental evaluations show that GMAI-VL achieves state-of-the-art results across a wide range of multimodal medical tasks, such as visual question answering and medical image diagnosis. Our contributions include the development of the GMAI-VL-5.5M dataset, the introduction of the GMAI-VL model, and the establishment of new benchmarks in multiple medical domains. Code and dataset will be released at https://github.com/uni-medical/GMAI-VL.
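The abstract does not describe the dataset-conversion pipeline in detail. Purely as an illustration of the general idea of turning labeled medical datasets into instruction-style image-text pairs, here is a minimal Python sketch; the record format, the make_image_text_pair helper, and the caption templates are all hypothetical assumptions and are not taken from the paper.

# Hypothetical sketch: converting a labeled medical image dataset into
# image-text pairs. This is NOT the authors' pipeline; the actual
# construction of GMAI-VL-5.5M is not specified in this abstract.
import json
from pathlib import Path

# Assumed input records, one JSON object per line, e.g.:
#   {"image": "ct_0001.png", "modality": "CT", "label": "pneumonia"}
def make_image_text_pair(record: dict) -> dict:
    """Render one labeled example as an instruction-style image-text pair."""
    question = (
        f"What abnormality, if any, is visible in this "
        f"{record['modality']} image?"
    )
    answer = f"The image shows findings consistent with {record['label']}."
    return {
        "image": record["image"],
        "conversations": [
            {"from": "human", "value": f"<image>\n{question}"},
            {"from": "assistant", "value": answer},
        ],
    }

def convert(src: Path, dst: Path) -> None:
    """Convert a JSON-lines annotation file into image-text training pairs."""
    with src.open() as fin, dst.open("w") as fout:
        for line in fin:
            pair = make_image_text_pair(json.loads(line))
            fout.write(json.dumps(pair) + "\n")

if __name__ == "__main__":
    convert(Path("annotations.jsonl"), Path("image_text_pairs.jsonl"))

In practice, pipelines of this kind vary the question and answer templates per task (diagnosis, grading, report generation) so that a single labeled source yields diverse conversational supervision.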

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2411.14522
Document Type:
Working Paper