
Impact of Large Language Model Assistance on Patients Reading Clinical Notes: A Mixed-Methods Study

Authors:
Mannhardt, Niklas
Bondi-Kelly, Elizabeth
Lam, Barbara
Mozannar, Hussein
O'Connell, Chloe
Asiedu, Mercy
Buendia, Alejandro
Urman, Tatiana
Riaz, Irbaz B.
Ricciardi, Catherine E.
Agrawal, Monica
Ghassemi, Marzyeh
Sontag, David
Publication Year:
2024

Abstract

Large language models (LLMs) have immense potential to make information more accessible, particularly in medicine, where complex medical jargon can hinder patient comprehension of clinical notes. We developed a patient-facing tool that uses LLMs to make clinical notes more readable by simplifying them, extracting information from them, and adding context to them. We piloted the tool with clinical notes donated by patients with a history of breast cancer and with synthetic notes written by a clinician. Participants (N=200, healthy, female-identifying patients) were each randomly assigned three clinical notes, presented in our tool with varying levels of augmentation, and answered quantitative and qualitative questions evaluating their understanding of follow-up actions. The augmentations significantly increased their quantitative understanding scores. In-depth interviews conducted with participants (N=7, patients with a history of breast cancer) revealed both positive sentiments about the augmentations and concerns about AI. We also performed a qualitative, clinician-driven analysis of the model's error modes.

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2401.09637
Document Type:
Working Paper