1. Handling Missingness, Failures, and Non-Convergence in Simulation Studies: A Review of Current Practices and Recommendations
- Authors
- Pawel, Samuel, Bartoš, František, Siepe, Björn S., and Lohmann, Anna
- Subjects
- Statistics - Methodology
- Abstract
- Simulation studies are commonly used in methodological research for the empirical evaluation of data analysis methods. They generate artificial data sets under specified mechanisms and compare the performance of methods across conditions. However, simulation repetitions do not always produce valid outputs, e.g., due to non-convergence or other algorithmic failures. This phenomenon complicates the interpretation of results, especially when its occurrence differs between methods and conditions. Despite the potentially serious consequences of such "missingness", quantitative data on its prevalence and specific guidance on how to deal with it are currently limited. To address this gap, we reviewed 482 simulation studies published in various methodological journals and systematically assessed the prevalence and handling of missingness. We found that only 23.0% (111/482) of the reviewed simulation studies mention missingness, with even fewer reporting its frequency (19.1%; 92/482) or how it was handled (13.9%; 67/482). We propose a classification of missingness and possible solutions. We give several recommendations, most notably to always quantify and report missingness, even if none was observed, to align missingness handling with study goals, and to share code and data for reproduction and reanalysis. Using a case study on publication bias adjustment methods, we illustrate common pitfalls and solutions.
- Published
- 2024
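
The abstract's central recommendation, to always quantify and report missingness, is straightforward to build into a simulation loop. Below is a minimal Python sketch (not the authors' code; the `fit_method` routine, its 5% failure rate, and the two sample-size conditions are hypothetical) that records non-convergent repetitions per condition and reports them alongside the results instead of silently discarding them:

```python
import numpy as np

rng = np.random.default_rng(2024)

def fit_method(sample):
    # Hypothetical estimation routine: fails to "converge" on roughly 5% of
    # samples (a stand-in for an optimizer raising an error in practice).
    if rng.uniform() < 0.05:
        raise RuntimeError("non-convergence")
    return sample.mean()

n_reps = 1000
conditions = {"n=20": 20, "n=100": 100}

for label, n in conditions.items():
    estimates, n_failed = [], 0
    for _ in range(n_reps):
        sample = rng.normal(size=n)
        try:
            estimates.append(fit_method(sample))
        except RuntimeError:
            n_failed += 1  # count the failure instead of dropping it silently
    # Report missingness per condition, even if none was observed.
    print(f"{label}: {n_failed}/{n_reps} repetitions failed "
          f"({100 * n_failed / n_reps:.1f}%); "
          f"mean estimate over valid repetitions = {np.mean(estimates):.3f}")
```

Keeping the failure count next to each performance estimate makes it visible when missingness differs between methods or conditions, which, as the abstract notes, is exactly the situation that complicates interpretation.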