
A Refer-and-Ground Multimodal Large Language Model for Biomedicine

Authors :
Huang, Xiaoshuang
Huang, Haifeng
Shen, Lingdong
Yang, Yehui
Shang, Fangxin
Liu, Junwei
Liu, Jia
Publication Year :
2024

Abstract

With the rapid development of multimodal large language models (MLLMs), especially their capabilities in visual chat through refer-and-ground functionalities, their significance is increasingly recognized. However, the biomedical field currently exhibits a substantial gap in this area, primarily due to the absence of a dedicated refer-and-ground dataset for biomedical images. To address this challenge, we devised the Med-GRIT-270k dataset. It comprises 270k question-and-answer pairs and spans eight distinct medical imaging modalities. Most importantly, it is the first dataset in the biomedical domain to integrate refer-and-ground conversations. The key idea is to sample large-scale biomedical image-mask pairs from medical segmentation datasets and to generate instruction data from text using ChatGPT. Additionally, we introduce a Refer-and-Ground Multimodal Large Language Model for Biomedicine (BiRD), trained on this dataset with multi-task instruction learning. Extensive experiments corroborate the efficacy of the Med-GRIT-270k dataset and the multimodal, fine-grained interactive capabilities of the BiRD model. This work holds significant reference value for the exploration and development of intelligent biomedical assistants.

Comment: Accepted by MICCAI 2024
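The abstract describes building instruction data by sampling image-mask pairs from segmentation datasets. As a minimal sketch of how one step of such a pipeline might look, the code below derives a bounding box from a binary mask and formats it into a grounding-style question-and-answer record. The function names, prompt wording, and coordinate format are illustrative assumptions, not the ones used for Med-GRIT-270k; in the paper the conversational text is generated with ChatGPT rather than templated.

```python
import numpy as np


def mask_to_box(mask: np.ndarray) -> tuple[int, int, int, int]:
    """Derive a tight bounding box (x1, y1, x2, y2) from a binary segmentation mask."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())


def build_grounding_sample(image_id: str, organ: str, mask: np.ndarray, modality: str) -> dict:
    """Turn one image-mask pair into a grounding-style Q&A sample.

    The box coordinates are serialized into the answer text so a model can
    learn to emit region references; the tag format here is a placeholder.
    """
    x1, y1, x2, y2 = mask_to_box(mask)
    return {
        "image": image_id,
        "modality": modality,
        "question": f"Where is the {organ} in this {modality} image?",
        "answer": f"The {organ} is located at <box>({x1},{y1}),({x2},{y2})</box>.",
    }


if __name__ == "__main__":
    # Toy example: a 10x10 foreground square inside a 64x64 image.
    mask = np.zeros((64, 64), dtype=np.uint8)
    mask[20:30, 15:25] = 1
    print(build_grounding_sample("ct_000123", "liver", mask, "CT"))
```

Running the toy example prints a single JSON-like record; a real pipeline would iterate over entire segmentation datasets across the eight imaging modalities and pass the resulting region descriptions to ChatGPT to produce multi-turn refer-and-ground conversations.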

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.18146
Document Type :
Working Paper