
Improving Readability and Automating Content Analysis of Plastic Surgery Webpages With ChatGPT.

Authors :
Fanning JE
Escobar-Domingo MJ
Foppiani J
Lee D
Miller AS
Janis JE
Lee BT
Source :
The Journal of surgical research [J Surg Res] 2024 Jul; Vol. 299, pp. 103-111. Date of Electronic Publication: 2024 May 14.
Publication Year :
2024

Abstract

Introduction: The quality and readability of online health information are sometimes suboptimal, reducing its usefulness to patients. Manual evaluation of online medical information is time-consuming and error-prone. This study automates content analysis and readability improvement of private-practice plastic surgery webpages using ChatGPT.

Methods: The first 70 Google search results for "breast implant size factors" and "breast implant size decision" were screened. ChatGPT 3.5 and 4.0 were used with two prompts (1: general, 2: specific) to automate content analysis and to rewrite webpages with improved readability. ChatGPT content analysis outputs were classified as hallucination (false positive), accurate (true positive or true negative), or omission (false negative), using human-rated scores as the benchmark. Six readability metric scores of original and revised webpage texts were compared.

Results: Seventy-five webpages were included. Significant improvements from baseline were achieved in all six readability metric scores using a specific-instruction prompt with ChatGPT 3.5 (all P ≤ 0.05). No further improvements in readability scores were achieved with ChatGPT 4.0. Rates of hallucination, accuracy, and omission in ChatGPT content scoring varied widely between decision-making factors. Compared with ChatGPT 3.5, average accuracy rates increased and omission rates decreased with ChatGPT 4.0 content analysis output.

Conclusions: ChatGPT offers an innovative approach to enhancing the quality of online medical information and expanding the capabilities of plastic surgery research and practice. Automation of content analysis is limited by ChatGPT 3.5's high omission rates and ChatGPT 4.0's high hallucination rates. Our results also underscore the importance of iterative prompt design to optimize ChatGPT performance in research tasks.

(Copyright © 2024 Elsevier Inc. All rights reserved.)
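The abstract does not include the study's analysis code, but the benchmarking step it describes maps onto standard confusion-matrix categories. The minimal Python sketch below illustrates that idea only: each decision-making factor flagged (or missed) by ChatGPT is compared against the human rating and labeled hallucination (false positive), accurate (true positive or true negative), or omission (false negative). All factor names, data, and function names here are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical ratings: whether a webpage mentions a given implant-size
# decision-making factor, per a human reviewer (benchmark) and per
# ChatGPT's automated content analysis.
@dataclass
class FactorRating:
    factor: str
    human_present: bool   # benchmark (human-rated score)
    gpt_present: bool     # ChatGPT content-analysis output

def classify(r: FactorRating) -> str:
    """Map one factor rating to the three categories used in the study."""
    if r.gpt_present and not r.human_present:
        return "hallucination"   # false positive
    if not r.gpt_present and r.human_present:
        return "omission"        # false negative
    return "accurate"            # true positive or true negative

# Toy example for a single webpage (illustrative factors only).
ratings = [
    FactorRating("body frame", human_present=True, gpt_present=True),
    FactorRating("lifestyle", human_present=False, gpt_present=True),
    FactorRating("existing breast tissue", human_present=True, gpt_present=False),
]

counts = {"hallucination": 0, "accurate": 0, "omission": 0}
for r in ratings:
    counts[classify(r)] += 1

total = len(ratings)
for category, n in counts.items():
    print(f"{category}: {n}/{total} ({n / total:.0%})")
```

Aggregating these per-factor labels across webpages would yield the hallucination, accuracy, and omission rates the abstract reports varying between decision-making factors and between ChatGPT versions.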

Details

Language :
English
ISSN :
1095-8673
Volume :
299
Database :
MEDLINE
Journal :
The Journal of surgical research
Publication Type :
Academic Journal
Accession number :
38749313
Full Text :
https://doi.org/10.1016/j.jss.2024.04.006