1. Less but Enough: Evaluation of Peer Reviews through Pseudo-Labeling with Less Annotated Data
- Author
Liu, Chengyuan; Doshi, Divyang; Shang, Ruixuan; Cui, Jialin; Jia, Qinjin; and Gehringer, Edward
- Abstract
A peer-assessment system provides a structured learning process for students and allows them to write textual feedback on each other's assignments and projects. This helps instructors or teaching assistants perform a more comprehensive evaluation of students' work. However, the contribution of peer assessment to students' learning relies heavily on the quality of the reviews. Therefore, a thorough evaluation of the quality of peer assessment is essential to ensuring that the process benefits students' learning. Previous studies have focused on applying machine learning to evaluate peer assessment by identifying characteristics of reviews (e.g., do they mention a problem, make a suggestion, or tell the students where to make a change?). Unfortunately, collecting ground-truth labels for these characteristics is an arbitrary, subjective, and labor-intensive task. Besides, in most cases those labels are assigned by students, not all of whom are reliable labelers. In this study, we propose a semi-supervised pseudo-labeling approach that builds a robust peer-assessment evaluation system from a large unlabeled dataset along with only a small amount of labeled data. We aim to evaluate peer assessment from two angles: detecting a problem statement (Does the reviewer mention a problem with the work?) and detecting a suggestion (Does the reviewer give a suggestion to the author?). A minimal sketch of the pseudo-labeling scheme follows this entry.
- Published
- 2023
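
The self-training loop that the abstract describes generally works like this: train a classifier on the small labeled set, predict on the unlabeled pool, keep only high-confidence predictions as pseudo-labels, and retrain on the combined data. Below is a minimal sketch of one such round for a single review characteristic ("Does the review mention a problem?"). The example comments, TF-IDF features, logistic-regression classifier, and 0.6 confidence threshold are all illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Small labeled seed set (hypothetical peer-review comments, not the paper's data).
labeled_texts = [
    "The second section contradicts your introduction.",  # mentions a problem
    "Great job, I enjoyed reading this.",                  # no problem
    "Your results table is missing units.",                # mentions a problem
    "Nice formatting and clear writing.",                  # no problem
]
labels = np.array([1, 0, 1, 0])  # 1 = review mentions a problem

# Larger unlabeled pool (also hypothetical).
unlabeled_texts = [
    "The code crashes when the input file is empty.",
    "I really liked the diagrams.",
    "Figure 3 is never referenced in the text.",
    "Well organized report overall.",
]

vectorizer = TfidfVectorizer()
X_labeled = vectorizer.fit_transform(labeled_texts)
X_unlabeled = vectorizer.transform(unlabeled_texts)

# 1) Train an initial classifier on the small labeled set.
clf = LogisticRegression()
clf.fit(X_labeled, labels)

# 2) Predict on the unlabeled pool; keep only confident predictions.
proba = clf.predict_proba(X_unlabeled)
confidence = proba.max(axis=1)
pseudo_labels = proba.argmax(axis=1)
keep = confidence >= 0.6  # illustrative threshold

# 3) Retrain on the labeled set plus the confidently pseudo-labeled examples.
X_combined = vstack([X_labeled, X_unlabeled[keep]])
y_combined = np.concatenate([labels, pseudo_labels[keep]])
clf.fit(X_combined, y_combined)

print("pseudo-labels kept:", int(keep.sum()), "of", len(unlabeled_texts))
```

In practice this loop is typically repeated for several rounds, and the confidence threshold trades pseudo-label coverage against label noise; the paper's specific model and training schedule may differ.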