
MTurk ‘Unscrubbed’: Exploring the good, the ‘Super’, and the unreliable on Amazon’s Mechanical Turk

Authors:
Jeanette A.M.J. Deetlefs
Mathew Chylinski
Andreas Ortmann
Publication Year:
2015

Abstract

Widely accepted as a low-cost, fast-turnaround solution with acceptable validity, Amazon’s Mechanical Turk (MTurk) is increasingly being used to source participants for academic studies (Berinsky et al. 2012; Bohannon 2011; Chandler et al. 2014; Mason and Suri 2012). Yet two commonly raised concerns remain: the presence of quasi-professional respondents, or “Super-Turkers”, and the presence of “Spammers”, those who compromise quality while optimising their pay rate. We isolate the influence on research results of experienced subjects (Super-Turkers) and of unreliable subjects (Spammers), jointly and separately. Jointly including these subjects produces results very similar to jointly excluding them, yet effect sizes decrease disproportionately to the subjects’ representation in the sample. Furthermore, separately including experienced subjects is shown to be as problematic as including unreliable subjects, although the noise introduced by each group is divergent and measure-dependent. Hence removing only one of these types of respondents can be even more damaging to the reliability of results than including both.

Details

Database:
OpenAIRE
Accession number:
edsair.od.......645..1f10ae09772e8209a9a11d277c315156