1. Perception, performance, and detectability of conversational artificial intelligence across 32 university courses
- Authors
- Ibrahim, Hazem; Liu, Fengyuan; Asim, Rohail; Battu, Balaraju; Benabderrahmane, Sidahmed; Alhafni, Bashar; Adnan, Wifag; Alhanai, Tuka; AlShebli, Bedoor; Baghdadi, Riyadh; Bélanger, Jocelyn J.; Beretta, Elena; Celik, Kemal; Chaqfeh, Moumena; Daqaq, Mohammed F.; Bernoussi, Zaynab El; Fougnie, Daryl; de Soto, Borja Garcia; Gandolfi, Alberto; Gyorgy, Andras; Habash, Nizar; Harris, J. Andrew; Kaufman, Aaron; Kirousis, Lefteris; Kocak, Korhan; Lee, Kangsan; Lee, Seungah S.; Malik, Samreen; Maniatakos, Michail; Melcher, David; Mourad, Azzam; Park, Minsu; Rasras, Mahmoud; Reuben, Alicja; Zantout, Dania; Gleason, Nancy W.; Makovi, Kinga; Rahwan, Talal; and Zaki, Yasir
- Subjects
- FOS: Computer and information sciences; Computer Science - Computers and Society; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; Computers and Society (cs.CY)
- Abstract
- The emergence of large language models has led to the development of powerful tools such as ChatGPT that can produce text indistinguishable from human-generated work. With the increasing accessibility of such technology, students across the globe may utilize it to help with their school work -- a possibility that has sparked discussions on the integrity of student evaluations in the age of artificial intelligence (AI). To date, it is unclear how such tools perform compared to students in university-level courses. Further, students' perspectives regarding the use of such tools, and educators' perspectives on treating their use as plagiarism, remain unknown. Here, we compare the performance of ChatGPT against that of students on 32 university-level courses. We also assess the degree to which its use can be detected by two classifiers designed specifically for this purpose. Additionally, we conduct a survey across five countries, as well as a more in-depth survey at the authors' institution, to discern students' and educators' perceptions of ChatGPT's use. We find that ChatGPT's performance is comparable, if not superior, to that of students in many courses. Moreover, current AI-text classifiers cannot reliably detect ChatGPT's use in school work, owing to both their propensity to classify human-written answers as AI-generated and the ease with which AI-generated text can be edited to evade detection. Finally, we find an emerging consensus among students to use the tool, and among educators to treat its use as plagiarism. Our findings offer insights that could guide policy discussions addressing the integration of AI into educational frameworks.
- Comment
- 17 pages, 4 figures
- Published
- 2023