An experimental test of the effects of redacting grant applicant identifiers on peer review outcomes
- Authors
Richard K Nakamura, Lee S Mann, Mark D Lindner, Jeremy Braithwaite, Mei-Ching Chen, Adrian Vancea, Noni Byrnes, Valerie Durrant, and Bruce Reed
- Subjects
peer review, racial disparities, racial bias, science funding, halo effects, Medicine, Science, Biology (General), QH301-705.5
- Abstract
Background: Blinding reviewers to applicant identity has been proposed as a way to reduce bias in peer review. Methods: This experimental test used 1200 NIH grant applications: 400 from Black investigators, 400 matched applications from White investigators, and 400 randomly selected applications from White investigators. Applications were reviewed by mail in both standard and redacted formats. Results: Redaction reduced, but did not eliminate, reviewers' ability to correctly guess features of applicant identity. The primary, preregistered analysis hypothesized a differential effect of redaction according to investigator race in the matched applications. A set of secondary analyses (not preregistered) used the randomly selected applications from White scientists and tested the same interaction. Both analyses revealed similar effects: standard-format applications from White investigators scored better than those from Black investigators. Redaction cut the size of the difference by about half (e.g., from a Cohen's d of 0.20 to 0.10 in the matched applications); redaction caused applications from White scientists to score worse but had no effect on scores for applications from Black scientists. Conclusions: Grant-writing considerations and halo effects are discussed as competing explanations for this pattern. The findings support further evaluation of peer review models that diminish the influence of applicant identity. Funding: Funding was provided by the NIH.
- Published
- 2021