Does the human professor or artificial intelligence (AI) offer better explanations to students? Evidence from three within-subject experiments.

Authors :
Chiasson, Rebekah M.
Goodboy, Alan K.
Vendemia, Megan A.
Beer, Nathaniel
Meisz, Gracyn C.
Cooper, Laken
Arnold, Alyssa
Lincoski, Austin
George, William
Zuckerman, Cole
Schrout, Jessica
Source :
Communication Education. Oct 2024, p1-28. 28p. 2 Illustrations, 7 Charts.
Publication Year :
2024

Abstract

Three within-subject experiments were conducted by providing students with answers to content questions across different subject matters (a definition, explanation, and example) offered by a human professor (a subject-matter expert) versus generative artificial intelligence (ChatGPT). In a randomized order, students read both the expert's and ChatGPT's responses (both were de-identified and presented as "professors," so students were not aware one was artificial intelligence), rated both explanations on teaching clarity and competence, and then reported their affect toward the content and situational interest. Study 1 (interpersonal communication content) revealed no significant differences in repeated-measures ratings comparing the expert versus ChatGPT. However, in Study 2 (business communication content) and Study 3 (instructional communication content), compared with the expert, ChatGPT (impersonating a professor) was rated by the same students as higher in teaching clarity and competence, and it generated more student affect and situational interest. In Studies 2 and 3, a within-subjects mediation analysis revealed that ChatGPT generated more student affect toward the content through the clarity of the responses it provided to students. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
0363-4523
Database :
Academic Search Index
Journal :
Communication Education
Publication Type :
Academic Journal
Accession number :
179627872
Full Text :
https://doi.org/10.1080/03634523.2024.2398105