
Taiyi: a bilingual fine-tuned large language model for diverse biomedical tasks.

Authors :
Luo, Ling
Ning, Jinzhong
Zhao, Yingwen
Wang, Zhijun
Ding, Zeyuan
Chen, Peng
Fu, Weiru
Han, Qinyu
Xu, Guangtao
Qiu, Yunzhi
Pan, Dinghao
Li, Jiru
Li, Hao
Feng, Wenduo
Tu, Senbo
Liu, Yuqi
Yang, Zhihao
Wang, Jian
Sun, Yuanyuan
Lin, Hongfei
Source :
Journal of the American Medical Informatics Association; Sep 2024, Vol. 31 Issue 9, p1865-1874, 10p
Publication Year :
2024

Abstract

Objective: Most existing fine-tuned biomedical large language models (LLMs) focus on enhancing performance in monolingual biomedical question answering and conversation tasks. To investigate the effectiveness of fine-tuned LLMs on diverse biomedical natural language processing (NLP) tasks in different languages, we present Taiyi, a bilingual fine-tuned LLM for diverse biomedical NLP tasks.

Materials and Methods: We first curated a comprehensive collection of 140 existing biomedical text mining datasets (102 English and 38 Chinese) spanning more than 10 task types. These corpora were then converted into instruction data used to fine-tune a general LLM. During the supervised fine-tuning phase, a 2-stage strategy was proposed to optimize model performance across the various tasks.

Results: Experimental results on 13 test sets covering named entity recognition, relation extraction, text classification, and question answering demonstrate that Taiyi achieves superior performance compared with general LLMs. A case study involving additional biomedical NLP tasks further shows Taiyi's considerable potential for bilingual biomedical multitasking.

Conclusion: Leveraging rich, high-quality biomedical corpora and developing effective fine-tuning strategies can significantly improve the performance of LLMs within the biomedical domain. Taiyi demonstrates bilingual multitasking capability through supervised fine-tuning. However, tasks such as information extraction, which are not generative in nature, remain challenging for LLM-based generative approaches, which still underperform conventional discriminative approaches built on smaller language models.
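The conversion of existing text mining corpora into instruction data, as described in the Materials and Methods, can be illustrated with a minimal sketch. The example below is not taken from the paper: the prompt template, field names, and the sample named entity recognition record are all hypothetical, and the actual instruction schema used for Taiyi may differ.

    # Minimal sketch: turning a (hypothetical) biomedical NER record into an
    # instruction-tuning example. The prompt template and field names here are
    # illustrative assumptions, not the schema used by Taiyi.

    def ner_record_to_instruction(text, entities):
        """Convert one annotated sentence into an instruction/input/output triple.

        entities: list of (mention, entity_type) pairs taken from the
        corpus annotations.
        """
        instruction = (
            "Extract all biomedical entities from the following text and "
            "label each with its entity type."
        )
        # Serialize the gold annotations as the target text to be generated.
        output = "; ".join(f"{mention} [{etype}]" for mention, etype in entities)
        return {"instruction": instruction, "input": text, "output": output}

    example = ner_record_to_instruction(
        "Aspirin reduces the risk of myocardial infarction.",
        [("Aspirin", "Chemical"), ("myocardial infarction", "Disease")],
    )
    print(example["output"])  # Aspirin [Chemical]; myocardial infarction [Disease]

Serializing gold annotations into a flat text target is what makes extraction tasks trainable with a generative LLM in the first place, and it also reflects why, as the Conclusion notes, such tasks remain harder for generative approaches than for conventional discriminative taggers.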

Details

Language :
English
ISSN :
1067-5027
Volume :
31
Issue :
9
Database :
Complementary Index
Journal :
Journal of the American Medical Informatics Association
Publication Type :
Academic Journal
Accession Number :
179262552
Full Text :
https://doi.org/10.1093/jamia/ocae037