
Evaluating Large Language Models for Public Health Classification and Extraction Tasks

Authors:
Harris, Joshua
Laurence, Timothy
Loman, Leo
Grayson, Fan
Nonnenmacher, Toby
Long, Harry
WalsGriffith, Loes
Douglas, Amy
Fountain, Holly
Georgiou, Stelios
Hardstaff, Jo
Hopkins, Kathryn
Chi, Y-Ling
Kuyumdzhieva, Galena
Larkin, Lesley
Collins, Samuel
Mohammed, Hamish
Finnie, Thomas
Hounsome, Luke
Riley, Steven
Publication Year:
2024

Abstract

Advances in Large Language Models (LLMs) have led to significant interest in their potential to support human experts across a range of domains, including public health. In this work we present automated evaluations of LLMs for public health tasks involving the classification and extraction of free text. We combine six externally annotated datasets with seven new internally annotated datasets to evaluate LLMs for processing text related to health burden, epidemiological risk factors, and public health interventions. We initially evaluate five open-weight LLMs (7-70 billion parameters) across all tasks using zero-shot in-context learning. We find that Llama-3-70B-Instruct is the highest-performing model, achieving the best results on 15 of 17 tasks (using micro-F1 scores). We see significant variation across tasks, with all open-weight LLMs scoring below 60% micro-F1 on some challenging tasks, such as Contact Classification, while all LLMs achieve greater than 80% micro-F1 on others, such as GI Illness Classification. For a subset of 12 tasks, we also evaluate GPT-4 and find results comparable to Llama-3-70B-Instruct, which matches or outperforms GPT-4 on 6 of the 12 tasks. Overall, based on these initial results, we find promising signs that LLMs may be useful tools for public health experts, helping to extract information from a wide variety of free-text sources and to support public health surveillance, research, and interventions.

Comment: 33 pages. Feedback and comments are highly appreciated.
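The evaluation setup the abstract describes (zero-shot in-context classification scored with micro-F1) can be made concrete with a small sketch. Everything below is illustrative rather than the authors' actual pipeline: the prompt template, label set, and toy predictions are hypothetical, and only scikit-learn's standard f1_score is assumed.

```python
# Illustration only: NOT the paper's pipeline. Sketches how a zero-shot
# classification prompt and micro-F1 scoring might look, using a
# hypothetical label set and sklearn's f1_score.
from sklearn.metrics import f1_score

# Hypothetical zero-shot prompt template for one classification task.
PROMPT = (
    "Classify the following report as one of: gi_illness, respiratory, other.\n"
    "Report: {text}\n"
    "Label:"
)

# Hypothetical gold labels and model predictions for four test documents.
y_true = ["gi_illness", "respiratory", "gi_illness", "other"]
y_pred = ["gi_illness", "gi_illness", "gi_illness", "other"]

# Micro-F1 pools true positives, false positives, and false negatives
# across all classes before computing F1; for single-label multi-class
# tasks it equals plain accuracy (here 3/4 = 0.75).
print(f1_score(y_true, y_pred, average="micro"))
```

Note that for single-label multi-class classification, micro-F1 is identical to accuracy; per-task variation, as reported in the abstract, is therefore the more informative comparison.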

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.14766
Document Type:
Working Paper