1. Comparing Expert and ChatGPT-authored Guidance Prompts
- Author
Bradford, Allison; Li, Weiying; Gerard, Libby; Linn, Marcia C.
- Subjects
Curriculum and Pedagogy, Information and Computing Sciences, Education, Specialist Studies in Education
- Abstract
Students bring a multitude of ideas and experiences to the classroom while reasoning about scientific phenomena. They often need timely guidance to refine and build upon their initial ideas. In this study, we explore the development of guidance prompts that provide students with personalized, real-time feedback in the context of a pedagogically grounded chatbot. In the current version of the tool, guidance prompts are authored by learning scientists who are experts in the content of the items and in Knowledge Integration pedagogy. When students engage with the chatbot, an idea detection model determines which ideas are present in a student explanation, and the expert-authored guidance prompts are then assigned based on rules about which ideas are or are not present in the explanation. While this approach allows for close attention to and control of the pedagogical intent of each prompt, it is time-consuming and not easily generalizable. Further, this rule-based approach limits the ways in which students can interact with the chatbot. The work-in-progress study presented in this paper explores the potential of using generative AI to create similarly pedagogically grounded guidance prompts as a first step toward increasing the generalizability and scalability of this approach. Specifically, we ask: using criteria from the Knowledge Integration Pedagogical Framework, how do ChatGPT 3.5-authored guidance prompts compare to human expert-authored guidance prompts? We find that while prompt engineering can enhance the alignment of ChatGPT-authored guidance prompts with pedagogical criteria, the human expert-authored guidance prompts meet the pedagogical criteria more consistently.
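To make the rule-based pipeline the abstract describes concrete, here is a minimal, hypothetical Python sketch. The idea labels, prompts, and rule format are illustrative assumptions, not taken from the paper; it only demonstrates the general pattern of assigning a pre-authored prompt based on which detected ideas are or are not present in a student explanation.

```python
# Hypothetical sketch (not from the paper): assigning expert-authored
# guidance prompts from the output of an idea detection model, using
# presence/absence rules over detected idea labels.

# Each rule: (ideas that must be present, ideas that must be absent, prompt).
# These example ideas and prompts are invented for illustration.
RULES = [
    ({"evaporation"}, {"condensation"},
     "You mentioned evaporation. What happens to the water vapor as it cools?"),
    (set(), {"evaporation", "condensation"},
     "What do you think happens to a puddle of water on a sunny day?"),
]

DEFAULT_PROMPT = "Can you say more about the mechanism behind your idea?"

def assign_prompt(detected_ideas: set[str]) -> str:
    """Return the first prompt whose presence/absence rules match."""
    for required, excluded, prompt in RULES:
        if required <= detected_ideas and not (excluded & detected_ideas):
            return prompt
    return DEFAULT_PROMPT

# Example: the detection model found only the "evaporation" idea.
print(assign_prompt({"evaporation"}))
```

This kind of hand-written rule table is what gives experts fine-grained control over pedagogical intent, and also why the abstract notes the approach is time-consuming and hard to generalize: every new item requires new rules and prompts.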
- Published
2024