1. The Need for a Recurring Large-Scale Benchmarking Survey to Continually Evaluate Sampling Methods and Administration Modes: Lessons from the 2022 Collaborative Midterm Survey
- Authors
Enns, Peter K., Barry, Colleen L., Druckman, James N., Garcia-Rios, Sergio, Wilson, David C., and Schuldt, Jonathon P.
- Subjects
Statistics - Other Statistics
- Abstract
As survey methods adapt to technological and societal changes, a growing body of research seeks to understand the tradeoffs associated with various sampling methods and administration modes. We show how the NSF-funded 2022 Collaborative Midterm Survey (CMS) can serve as a dynamic and transparent framework for evaluating which sampling approaches - or combination of approaches - are best suited for various research goals. The CMS is ideally suited for this purpose because it includes almost 20,000 respondents interviewed using two administration modes (phone and online), with data drawn from random digit dialing, random address-based sampling, a probability-based panel, two nonprobability panels, and two nonprobability marketplaces. The analysis considers three types of population benchmarks (election data, administrative records, and large government surveys) and focuses on national-level estimates as well as oversamples in three states (California, Florida, and Wisconsin). In addition to documenting how each survey strategy performed, we develop a method for assessing how different combinations of approaches compare against the population benchmarks, in order to guide researchers who combine sampling methods and sources. We conclude by providing specific recommendations for public opinion and election survey researchers and by demonstrating how our approach could be applied to a large government survey conducted at regular intervals, providing ongoing guidance to researchers, government, businesses, and nonprofits on the most appropriate survey sampling and administration methods.
- Published
2024