
A vignette-based evaluation of ChatGPT's ability to provide appropriate and equitable medical advice across care contexts.

Authors :
Nastasi, Anthony J.
Courtright, Katherine R.
Halpern, Scott D.
Weissman, Gary E.
Source :
Scientific Reports. 10/19/2023, Vol. 13 Issue 1, p1-6. 6p.
Publication Year :
2023

Abstract

ChatGPT is a large language model trained on text corpora and reinforced with human supervision. Because ChatGPT can provide human-like responses to complex questions, it could become an easily accessible source of medical advice for patients. However, its ability to answer medical questions appropriately and equitably remains unknown. We presented ChatGPT with 96 advice-seeking vignettes that varied across clinical contexts, medical histories, and social characteristics. We analyzed responses for clinical appropriateness by concordance with guidelines, recommendation type, and consideration of social factors. Ninety-three (97%) responses were appropriate and did not explicitly violate clinical guidelines. Recommendations in response to advice-seeking questions were completely absent (N = 34, 35%), general (N = 18, 18%), or specific (N = 44, 46%). Fifty-three (55%) explicitly considered social factors like race or insurance status, which in some cases changed clinical recommendations. ChatGPT consistently provided background information in response to medical questions but did not reliably offer appropriate and personalized medical advice. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
20452322
Volume :
13
Issue :
1
Database :
Academic Search Index
Journal :
Scientific Reports
Publication Type :
Academic Journal
Accession number :
173150378
Full Text :
https://doi.org/10.1038/s41598-023-45223-y