Many Labs 5: Registered Multisite Replication of the Tempting-Fate Effects in Risen and Gilovich (2008)
- Authors
Kimberly P. Parks, Janos Salamon, Eleanor V. Langford, Dylan Manfredi, Wolf Vanpaemel, David Zealley, Antonia M. Ciunci, Francis Tuerlinckx, Sara Steegen, Grecia Kessinger, Barnabas Szaszi, Christian Nunnally, Kayla Ashbaugh, Maya B. Mathur, Charles R. Ebersole, Bradford J. Wiggins, Rachel L. Shubella, Sebastiaan Pessers, Filipe Falcão, Michael H. Bernstein, Kaylis Hase Rudy, Diane-Jo Bart-Plange, Lynda A. R. Stein, Anna Palinkas, Tiago Ramos, Peter Szecsi, Marton Kovacs, Rúben Silva, Caio Ambrosio Lage, Rias A. Hilliard, Mark Zrubka, Gideon Nave, Samuel Lincoln Bezerra Lins, Michael C. Frank, Alan Jern, Maria Vlachou, Vanessa S. Kolb, Don A. Moore, Venus Meyet, Balazs Aczel, and Danielle J. Kellier
- Subjects
Open data, Psychology, General Psychology, Replication (computing), Magical thinking, Cognitive psychology
- Abstract
Risen and Gilovich (2008) found that subjects believed that “tempting fate” would be punished with ironic bad outcomes (a main effect) and that this effect was magnified when subjects were under cognitive load (an interaction). A previous replication study (Frank & Mathur, 2016) that used an online implementation of the protocol on Amazon Mechanical Turk failed to replicate both the main effect and the interaction. Before that replication was run, the authors of the original study expressed concern that the cognitive-load manipulation might be less effective when implemented online than in the lab and that subjects recruited online might respond differently to the specific experimental scenario chosen for the replication. A later, large replication project, Many Labs 2 (Klein et al., 2018), replicated the main effect (though with a smaller effect size than in the original study), but the interaction was not assessed. To replicate the interaction while addressing the original authors’ concerns about the first replication’s protocol, we developed a new protocol in collaboration with them. We used four university sites (N = 754) chosen for similarity to the site of the original study to conduct a high-powered, preregistered replication focused primarily on the interaction effect. Results from these sites supported neither the interaction nor the main effect and were comparable to results obtained at six additional universities that were less similar to the original site. Post hoc analyses did not provide strong evidence of statistical inconsistency between the original study’s estimates and ours; that is, the original results would not have been extremely unlikely under the estimated distribution of population effects across our sites. We also collected data from a new Mechanical Turk sample under the first replication study’s protocol, and those results did not differ meaningfully from results obtained with the new protocol at universities similar to the original site. Secondary analyses failed to support proposed substantive mechanisms for the failure to replicate.
- Published
2020