Hidding the Ghostwriters: An Adversarial Evaluation of AI-Generated Student Essay Detection
- Publication Year :
- 2024
Abstract
- Large language models (LLMs) have exhibited remarkable capabilities in text generation tasks. However, the utilization of these models carries inherent risks, including but not limited to plagiarism, the dissemination of fake news, and misuse in educational exercises. Although several detectors have been proposed to address these concerns, their effectiveness against adversarial perturbations, specifically in the context of student essay writing, remains largely unexplored. This paper aims to bridge this gap by constructing AIG-ASAP, an AI-generated student essay dataset, employing a range of text perturbation methods that are expected to generate high-quality essays while evading detection. Through empirical experiments, we assess the performance of current AIGC detectors on the AIG-ASAP dataset. The results reveal that the existing detectors can be easily circumvented using straightforward automatic adversarial attacks. Specifically, we explore word substitution and sentence substitution perturbation methods that effectively evade detection while maintaining the quality of the generated essays. This highlights the urgent need for more accurate and robust methods to detect AI-generated student essays in the education domain.
- Comment: Accepted by EMNLP 2023 Main conference, Oral Presentation
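To make the attack surface concrete, the sketch below illustrates the general shape of a word-substitution perturbation of the kind described in the abstract. It is a minimal illustration, not the authors' implementation: the synonym table, substitution rate, and function name are hypothetical stand-ins, and the paper's actual method would typically draw context-appropriate replacements (e.g., from a thesaurus or a masked language model) while checking that essay quality is preserved.

```python
import random

# Hypothetical synonym table for illustration only; a realistic attack
# would query a thesaurus or masked language model for candidates.
SYNONYMS = {
    "important": ["crucial", "significant", "essential"],
    "show": ["demonstrate", "reveal", "indicate"],
    "use": ["employ", "utilize", "apply"],
    "good": ["beneficial", "favorable", "sound"],
}

def word_substitution_attack(essay: str, rate: float = 0.15, seed: int = 0) -> str:
    """Randomly replace a fraction of known words with synonyms.

    Small lexical edits like these aim to preserve essay quality while
    shifting the text away from the distribution an AIGC detector was
    trained to flag.
    """
    rng = random.Random(seed)
    out = []
    for token in essay.split():
        # Strip simple punctuation so the dictionary lookup still matches.
        core = token.strip(".,;:!?")
        if core.lower() in SYNONYMS and rng.random() < rate:
            replacement = rng.choice(SYNONYMS[core.lower()])
            out.append(token.replace(core, replacement))
        else:
            out.append(token)
    return " ".join(out)

if __name__ == "__main__":
    essay = "It is important to show that students use good arguments."
    print(word_substitution_attack(essay, rate=1.0))
```

Even perturbations this simple change the surface statistics of the text, which is the kind of vulnerability the paper measures when it reports that existing detectors are easily circumvented.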
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2402.00412
- Document Type :
- Working Paper