
Multimodal LLMs for health grounded in individual-specific data

Authors :
Belyaeva, Anastasiya
Cosentino, Justin
Hormozdiari, Farhad
Eswaran, Krish
Shetty, Shravya
Corrado, Greg
Carroll, Andrew
McLean, Cory Y.
Furlotte, Nicholas A.
Publication Year :
2023

Abstract

Foundation large language models (LLMs) have shown an impressive ability to solve tasks across a wide range of fields, including health. To effectively solve personalized health tasks, LLMs need the ability to ingest a diversity of data modalities that are relevant to an individual's health status. In this paper, we take a step towards creating multimodal LLMs for health that are grounded in individual-specific data by developing a framework (HeLM: Health Large Language Model for Multimodal Understanding) that enables LLMs to use high-dimensional clinical modalities to estimate underlying disease risk. HeLM encodes complex data modalities by learning an encoder that maps them into the LLM's token embedding space, while simple modalities like tabular data are serialized into text. Using data from the UK Biobank, we show that HeLM can effectively use demographic and clinical features in addition to high-dimensional time-series data to estimate disease risk. For example, HeLM achieves an AUROC of 0.75 for asthma prediction when combining tabular and spirogram data modalities, compared with 0.49 when using only tabular data. Overall, we find that HeLM outperforms or performs at parity with classical machine learning approaches across a selection of eight binary traits. Furthermore, we investigate the downstream uses of this model, such as its generalizability to out-of-distribution traits and its ability to power conversations around individual health and wellness.
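
The core idea in the abstract, mapping a complex modality into the LLM's token embedding space while serializing tabular data as text, can be illustrated with a small sketch. The Python/PyTorch code below is a minimal illustration under assumptions, not the authors' implementation: the encoder architecture, the number of soft tokens, all dimensions, and the names (ModalityEncoder, num_soft_tokens, the random stand-in for a UK Biobank spirogram) are hypothetical.

# Minimal sketch (assumptions only, not the HeLM code): a learned encoder maps a
# high-dimensional signal (e.g., a spirogram time series) to a few "soft tokens"
# in the LLM's embedding space; these are prepended to the embeddings of the
# serialized tabular/text prompt before being passed to the LLM.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Maps a fixed-length signal to num_soft_tokens LLM-sized embeddings."""

    def __init__(self, signal_dim: int, llm_embed_dim: int, num_soft_tokens: int = 4):
        super().__init__()
        self.num_soft_tokens = num_soft_tokens
        self.llm_embed_dim = llm_embed_dim
        self.mlp = nn.Sequential(
            nn.Linear(signal_dim, 512),
            nn.GELU(),
            nn.Linear(512, num_soft_tokens * llm_embed_dim),
        )

    def forward(self, signal: torch.Tensor) -> torch.Tensor:
        # signal: (batch, signal_dim) -> (batch, num_soft_tokens, llm_embed_dim)
        out = self.mlp(signal)
        return out.view(-1, self.num_soft_tokens, self.llm_embed_dim)

# Toy usage: combine spirogram soft tokens with embedded prompt tokens.
batch, signal_dim, llm_embed_dim, vocab = 2, 1000, 2048, 32000
encoder = ModalityEncoder(signal_dim, llm_embed_dim)
token_embedding = nn.Embedding(vocab, llm_embed_dim)   # stand-in for the LLM's embedding table

spirogram = torch.randn(batch, signal_dim)             # flattened time series (random stand-in)
prompt_ids = torch.randint(0, vocab, (batch, 16))      # tabular features serialized to text tokens

soft_tokens = encoder(spirogram)                       # (2, 4, 2048)
prompt_embeds = token_embedding(prompt_ids)            # (2, 16, 2048)
llm_inputs = torch.cat([soft_tokens, prompt_embeds], dim=1)  # fed to the LLM in place of token ids
print(llm_inputs.shape)                                # torch.Size([2, 20, 2048])

In a setup like this, the soft tokens act as a learned, modality-specific prompt prefix; whether the LLM itself is also fine-tuned is a training detail not specified in this abstract.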

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2307.09018
Document Type :
Working Paper