
Assessing the Reproducibility of the Structured Abstracts Generated by ChatGPT and Bard Compared to Human-Written Abstracts in the Field of Spine Surgery: Comparative Analysis

Authors :
Hong Jin Kim
Jae Hyuk Yang
Dong-Gune Chang
Lawrence G Lenke
Javier Pizones
René Castelein
Kota Watanabe
Per D Trobisch
Gregory M Mundis Jr
Seung Woo Suh
Se-Il Suk
Source :
Journal of Medical Internet Research, Vol 26, p e52001 (2024)
Publication Year :
2024
Publisher :
JMIR Publications, 2024.

Abstract

Background: Due to recent advances in artificial intelligence (AI), language model applications can generate logical text output that is difficult to distinguish from human writing. ChatGPT (OpenAI) and Bard (subsequently rebranded as “Gemini”; Google AI) were developed using distinct approaches, but little has been studied about differences in their capability to generate abstracts. The use of AI to write scientific abstracts in the field of spine surgery is the center of much debate and controversy.

Objective: The objective of this study is to assess the reproducibility of structured abstracts generated by ChatGPT and Bard compared with human-written abstracts in the field of spine surgery.

Methods: In total, 60 abstracts from the spine sections of 7 reputable journals were randomly selected, and their paper titles were supplied to ChatGPT and Bard as input statements for generating abstracts. A total of 174 abstracts, divided into human-written, ChatGPT-generated, and Bard-generated abstracts, were evaluated for compliance with the structured format of the journal guidelines and for consistency of content. The likelihood of plagiarism and AI output was assessed using the iThenticate and ZeroGPT programs, respectively. A total of 8 reviewers in the spinal field evaluated 30 randomly extracted abstracts to determine whether they were produced by AI or human authors.

Results: The proportion of abstracts that met journal formatting guidelines was greater among ChatGPT-generated abstracts (34/60, 56.6%) than among those generated by Bard (6/54, 11.1%; P
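The Results compare format-compliance rates between the two groups of generated abstracts (34/60 for ChatGPT vs 6/54 for Bard), but the abstract is truncated before the exact P value and does not name the statistical test used. Purely as an illustration of how such a two-group comparison of proportions could be tested, the sketch below applies a chi-square test of independence with SciPy; the contingency counts come from the abstract, while the choice of test is an assumption, not the authors' stated method.

```python
# Illustrative sketch only: two-proportion comparison of format compliance.
# Counts are taken from the abstract; the chi-square test is an assumed
# analysis, since the truncated abstract does not state the actual test used.
from scipy.stats import chi2_contingency

# Rows: ChatGPT, Bard; columns: compliant, non-compliant abstracts
table = [
    [34, 60 - 34],  # ChatGPT-generated abstracts meeting journal format
    [6, 54 - 6],    # Bard-generated abstracts meeting journal format
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"ChatGPT compliance: {34/60:.1%}, Bard compliance: {6/54:.1%}")
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p_value:.4f}")
```

Running this sketch reproduces the direction of the reported difference (ChatGPT-generated abstracts complying with journal formatting far more often than Bard-generated ones); the P value it prints should not be read as the study's published result.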

Details

Language :
English
ISSN :
1438-8871
Volume :
26
Database :
Directory of Open Access Journals
Journal :
Journal of Medical Internet Research
Publication Type :
Academic Journal
Accession number :
edsdoj.30405fa90785424883094946bc5779af
Document Type :
article
Full Text :
https://doi.org/10.2196/52001