
Risks of abuse of large language models, like ChatGPT, in scientific publishing: Authorship, predatory publishing, and paper mills.

Authors :
Kendall, Graham
Teixeira da Silva, Jaime A.
Source :
Learned Publishing. Jan 2024, Vol. 37 Issue 1, p55-62. 8p.
Publication Year :
2024

Abstract

Key points: Academia is already witnessing the abuse of authorship in papers with text generated by large language models (LLMs) such as ChatGPT. LLM-generated text is testing the limits of publishing ethics as we traditionally know it. We alert the community to imminent risks of LLM technologies, like ChatGPT, for amplifying the predatory publishing 'industry'. The abuse of ChatGPT for the paper mill industry cannot be over-emphasized. Detection of LLM-generated text is the responsibility of editors and journals/publishers. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
0953-1513
Volume :
37
Issue :
1
Database :
Academic Search Index
Journal :
Learned Publishing
Publication Type :
Academic Journal
Accession number :
175056609
Full Text :
https://doi.org/10.1002/leap.1578