201. The effects of assessment design on academic dishonesty, learner engagement, and certification rates in MOOCs.
- Author
-
Alexandron, Giora, Wiltrout, Mary Ellen, Berg, Aviram, Gershon, Sa'ar Karp, and Ruipérez‐Valiente, José A.
- Subjects
NATIONAL competency-based educational tests, ONLINE education, KRUSKAL-Wallis test, STUDENT cheating, T-test (Statistics), DESCRIPTIVE statistics, RESEARCH funding, CERTIFICATION, ALTERNATIVE education
- Abstract
Background: Massive Open Online Courses (MOOCs) have touted the idea of democratizing education, but soon enough, this utopian idea collided with the reality of finding sustainable business models. In addition, the promise of harnessing interactive and social web technologies to promote meaningful learning was only partially successful. Finally, studies demonstrated that many learners exploit the anonymity and feedback to earn certificates unethically. Thus, establishing MOOC pedagogical models that balance open access, meaningful learning, and trustworthy assessment remains a challenge that is crucial for the field to achieve its goals. Objectives: This study analysed the influence of a MOOC assessment model, denoted the Competency Exam (CE), on learner engagement, the level of cheating, and certification rates. At its core, this model separates learning from for-credit assessment; it was introduced by the MITx Biology course team in 2016. Methods: We applied a learning analytics methodology to the clickstream data of the verified learners (N = 559) from four consecutive runs of an Introductory Biology MOOC offered through edX. The analysis used novel algorithms for measuring the level of cheating and learner engagement, developed in previous studies. Results and Conclusions: On the positive side, the CE model reduced cheating and did not reduce learner engagement with the main learning materials – videos and formative assessment items. On the negative side, it led to procrastination, and certification rates were lower. Implications: First, the results shed light on the fundamental connection between incentive design and learner behaviour. Second, the CE provides MOOC designers with an 'analytically verified' model to reduce cheating without compromising on open access. Third, our methodology provides a novel means for measuring cheating and learner engagement in MOOCs.
Lay Description: What is already known about this topic?
- Massive open online courses (MOOCs) have touted the idea of democratizing education, but soon enough, this utopian idea collided with the reality of finding sustainable business models.
- The promise of harnessing interactive and social web technologies to promote meaningful learning was only partially successful.
- Studies demonstrated that many learners exploit the anonymity and feedback to earn certificates unethically.
- Thus, establishing MOOC pedagogical models that balance open access, meaningful learning, and trustworthy assessment remains a challenge that is crucial for the field to achieve its goals.
What this paper adds?
- Underlines ways in which incentive-based design shapes learners' behaviour in MOOCs.
- Presents an 'analytically verified' assessment model that is resistant to cheating while still preserving open access and design for learning.
- Provides a robust learning analytics methodology to evaluate the effect of assessment models on cheating and learning, which can be generalized to evaluate the effect of such interventions.
The implications of study findings for practitioners:
- The results shed light on the fundamental connection between assessment design and learner behaviour.
- The competency exam model provides MOOC designers with an 'analytically verified' model to reduce cheating without compromising on open access.
- The paper provides a novel methodology for measuring cheating and learner engagement in MOOCs.
[ABSTRACT FROM AUTHOR]
- Published
- 2023