A Continued Pretrained LLM Approach for Automatic Medical Note Generation
- Authors
Yuan, Dong; Rastogi, Eti; Naik, Gautam; Rajagopal, Sree Prasanna; Goyal, Sagar; Zhao, Fen; Chintagunta, Bharath; Ward, Jeff
- Subjects
Computer Science - Computation and Language; Computer Science - Artificial Intelligence
- Abstract
LLMs are revolutionizing NLP tasks. However, the use of the most advanced LLMs, such as GPT-4, is often prohibitively expensive for most specialized fields. We introduce HEAL, the first continuously trained 13B LLaMA2-based LLM that is purpose-built for medical conversations and evaluated on automated scribing. Our results demonstrate that HEAL outperforms GPT-4 and PMC-LLaMA on PubMedQA, with an accuracy of 78.4%. It also achieves parity with GPT-4 in generating medical notes. Remarkably, HEAL surpasses GPT-4 and Med-PaLM 2 in identifying more correct medical concepts and exceeds the performance of human scribes and other comparable models in correctness and completeness.
- Comment
Accepted to NAACL 2024
- Published
2024