
Human Computation

Authors:
David G. Hicks, Nai-Ching Wang, Kurt Luther, Paul Quigley
Affiliations:
Computer Science; History; School of Education
Source:
Human Computation. 6:147-175
Publication Year:
2019
Publisher:
Human Computation Institute, 2019.

Abstract

Historians spend significant time searching for relevant, high-quality primary sources in digitized archives and through web searches. One reason this task is time-consuming is that historians' research interests are often highly abstract and specialized: such topics are unlikely to be manually indexed and are difficult to identify with automated text analysis techniques. In this article, we investigate the potential of a new crowdsourcing model in which the historian delegates to a novice crowd the task of labeling the relevance of primary sources with respect to her unique research interests. The model employs a novel crowd workflow, Read-Agree-Predict (RAP), that allows novice crowd workers to label relevance as well as expert historians do. As a useful byproduct, RAP also reveals and prioritizes crowd confusions as targeted learning opportunities. We demonstrate the value of our model in two experiments with paid crowd workers (n = 170), with the future goal of extending our work to classroom students and public history interventions. We also discuss broader implications for historical research and education. This research was supported by U.S. National Historical Publications and Records Commission Grant DH50013-15.
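
The abstract describes RAP only at a high level. As a rough illustration of how RAP-style responses might be aggregated, the Python sketch below pairs each worker's own relevance judgment (the "Agree" step) with that worker's prediction of how peers will answer (the "Predict" step), then flags documents where the two diverge. The data schema, the names Response and aggregate_rap, and the confusion score are illustrative assumptions, not the paper's actual method.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Response:
    """One worker's response to one document (hypothetical schema)."""
    worker_id: str
    doc_id: str
    agree: bool             # worker's own relevance judgment ("Agree" step)
    predicted_agree: float  # predicted share of peers answering "agree" ("Predict" step)

def aggregate_rap(responses: List[Response]) -> Dict[str, dict]:
    """Aggregate RAP-style responses per document.

    Produces a majority relevance label plus a simple confusion score:
    the gap between the observed agreement rate and the mean predicted
    agreement rate. A large gap suggests the crowd found the document
    confusing, the kind of case RAP surfaces as a learning opportunity.
    """
    by_doc: Dict[str, List[Response]] = {}
    for r in responses:
        by_doc.setdefault(r.doc_id, []).append(r)

    results: Dict[str, dict] = {}
    for doc_id, rs in by_doc.items():
        observed = sum(r.agree for r in rs) / len(rs)
        predicted = sum(r.predicted_agree for r in rs) / len(rs)
        results[doc_id] = {
            "label": "relevant" if observed >= 0.5 else "not relevant",
            "observed_agreement": observed,
            "confusion": abs(observed - predicted),
        }
    return results

if __name__ == "__main__":
    demo = [
        Response("w1", "d1", True, 0.9),
        Response("w2", "d1", True, 0.8),
        Response("w3", "d1", False, 0.3),  # dissents and expects peers to dissent
    ]
    for doc, stats in aggregate_rap(demo).items():
        print(doc, stats)

Under these assumptions, documents where workers' answers diverge sharply from their predictions of the majority are natural candidates for the "crowd confusions" that RAP prioritizes as targeted learning opportunities.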

Details

ISSN:
2330-8001
Volume:
6
Database:
OpenAIRE
Journal:
Human Computation
Accession number:
edsair.doi.dedup.....392602e37617354ce381fef897a79a18
Full Text:
https://doi.org/10.15346/hc.v6i1.8