GPTZero vs. Text Tampering: The Battle That GPTZero Wins
- Author
- David W. Brown and Dean Jensen
- Abstract
The growth of Artificial Intelligence (AI) chatbots has created a great deal of discussion in the education community. While many have gravitated toward the ability of these bots to make learning more interactive, others have grave concerns that student-created essays, long used as a means of assessing subject comprehension, may be at risk. The bots' ability to quickly create high-quality papers, sometimes complete with reference material, has raised the concern that students will become too reliant on these programs and fail to develop the critical thinking skills necessary to succeed. The rise of these applications has created a need for detection programs that can read students' submitted work and return an accurate estimate of whether a paper was written by a human or a computer. These detection programs use the natural language processing (NLP) concepts of perplexity (the randomness or unpredictability of the text) and burstiness (the variation in sentence structure and complexity across a text), together with algorithms that compare essays to preexisting literature, to estimate the likely author of a paper (a toy sketch of these two measures follows this record). These systems have been found to be highly effective in reducing plagiarism among students; however, concerns have been raised about their limitations. False positives, false negatives, and cross-language identification are three areas of concern among faculty and have led to reduced use of the detection engines. Despite these limitations, the systems remain a valuable tool for educational institutions to maintain academic integrity and ensure that students submit original work. [For the full proceedings, see ED656038.]
- Published
- 2023
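
The two signals named in the abstract can be made concrete with a small experiment. The Python sketch below is not GPTZero's actual method (which relies on large neural language models); as a stand-in, it approximates perplexity with a Laplace-smoothed unigram model fitted to the input text itself, and treats burstiness as the standard deviation of per-sentence perplexity. The function names (`score_text`, `sentence_perplexity`) are hypothetical.

```python
import math
import re
from collections import Counter

def sentence_perplexity(sentence: str, probs: dict, unseen_prob: float) -> float:
    """Perplexity of one sentence under a unigram model (lower = more predictable)."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    if not tokens:
        return float("nan")
    log_prob = sum(math.log(probs.get(t, unseen_prob)) for t in tokens)
    return math.exp(-log_prob / len(tokens))

def score_text(text: str) -> tuple:
    """Return (mean per-sentence perplexity, burstiness) for a text.

    Burstiness is modeled here as the standard deviation of per-sentence
    perplexity: human writing tends to swing between simple and complex
    sentences, while model output is often more uniform.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total, vocab = len(tokens), len(counts)
    # Laplace smoothing so unseen words still get a nonzero probability.
    probs = {w: (c + 1) / (total + vocab + 1) for w, c in counts.items()}
    unseen_prob = 1 / (total + vocab + 1)
    ppl = [sentence_perplexity(s, probs, unseen_prob) for s in sentences]
    mean = sum(ppl) / len(ppl)
    variance = sum((p - mean) ** 2 for p in ppl) / len(ppl)
    return mean, math.sqrt(variance)

if __name__ == "__main__":
    sample = ("AI chatbots have created a great deal of discussion. "
              "Some welcome them! Others worry that essays, long used to "
              "assess comprehension, may be at risk.")
    mean_ppl, burstiness = score_text(sample)
    print(f"mean perplexity: {mean_ppl:.2f}  burstiness: {burstiness:.2f}")
```

Because the unigram model is fitted to the very text it scores, the absolute numbers are only illustrative; a real detector scores submissions against a language model trained on a large external corpus.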