Cross-validation for the estimation of effect size generalizability in mass-univariate brain-wide association studies
- Author
Janik Goltermann, Nils R. Winter, Marius Gruber, Lukas Fisch, Maike Richter, Dominik Grotegerd, Katharina Dohm, Susanne Meinert, Elisabeth J. Leehr, Joscha Böhnlein, Anna Kraus, Katharina Thiel, Alexandra Winter, Kira Flinkenflügel, Ramona Leenings, Carlotta Barkhau, Jan Ernsting, Klaus Berger, Heike Minnerup, Benjamin Straube, Nina Alexander, Hamidreza Jamalabadi, Frederike Stein, Katharina Brosch, Adrian Wroblewski, Florian Thomas-Odenthal, Paula Usemann, Lea Teutenberg, Julia Pfarr, Andreas Jansen, Igor Nenadić, Tilo Kircher, Christian Gaser, Nils Opel, Tim Hahn, and Udo Dannlowski
- Abstract
Introduction: Statistical effect sizes are systematically overestimated in small samples, leading to poor generalizability and replicability of findings across all areas of research. Due to the large number of variables tested, this is particularly problematic in neuroimaging research. While cross-validation is frequently used in multivariate machine learning approaches to assess model generalizability and replicability, its benefits for mass-univariate brain analysis remain unclear. We investigated the impact of cross-validation on effect size estimation in univariate voxel-based brain-wide associations, using body mass index (BMI) as an exemplary predictor.
Methods: A total of n=3401 adults were pooled from three independent cohorts. Brain-wide associations between BMI and gray matter structure were tested using a standard linear mass-univariate voxel-based approach. First, a traditional non-cross-validated analysis was conducted to identify brain-wide effect sizes in the total sample (as an estimate of a realistic reference effect size). The impact of sample size (bootstrapped samples ranging from n=25 to n=3401) and cross-validation on effect size estimates was investigated across selected voxels with differing underlying effect sizes (including the brain-wide lowest effect size). Linear effects were estimated within training sets and then applied to unseen test-set data, using 5-fold cross-validation. The resulting effect sizes (explained variance) were examined.
Results: Analysis in the total sample (n=3401) without cross-validation yielded mainly negative correlations between BMI and gray matter density, with a maximum effect size of R²p=.036 (peak voxel in the cerebellum). Effects were overestimated exponentially with decreasing sample size, with effect sizes up to R²p=.535 in samples of n=25 for the voxel with the brain-wide largest effect and up to R²p=.429 for the voxel with the brain-wide smallest effect. When applying cross-validation, linear effects estimated in small samples did not generalize to an independent test set. For the largest brain-wide effect, a minimum sample size of n=100 was required before effects started to generalize (explained variance >0 in unseen data), while n=400 was needed for smaller effects of R²p=.005 to generalize. For a voxel with an underlying null effect, linear effects found in non-cross-validated samples did not generalize to test sets even at the maximum sample size of n=3401. Effect size estimates obtained with and without cross-validation converged in large samples.
Discussion: Cross-validation is a useful method to counteract the overestimation of effect sizes, particularly in small samples, and to assess the generalizability of effects. Train- and test-set effect sizes converge in large samples, which likely reflects good generalizability of models estimated in such samples. While linear effects start to generalize to unseen data in samples of n>100 for large effect sizes, the generalization of smaller effects requires larger samples (n>400). Cross-validation should be applied in voxel-based mass-univariate analyses to foster accurate effect size estimation and improve the replicability of neuroimaging findings. We provide open-source Python code for this purpose (https://osf.io/cy7fp/?view_only=a10fd0ee7b914f50820b5265f65f0cdb).
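The core procedure described in the Methods — estimating a univariate linear effect in a training set and evaluating the explained variance on held-out data via 5-fold cross-validation — can be sketched as follows. This is a minimal illustrative sketch using simulated data with an assumed weak BMI–gray-matter association, not the authors' published code (which is available at the OSF link above); the variable names and the simulation parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: BMI weakly predicts one voxel's gray matter density.
n = 400
bmi = rng.normal(26.0, 4.0, n)
gm = -0.02 * bmi + rng.normal(0.0, 1.0, n)  # true effect is small by construction

def cross_validated_r2(x, y, k=5, seed=0):
    """Mean out-of-sample explained variance (R^2) of a univariate linear fit,
    estimated with k-fold cross-validation: fit slope/intercept on the training
    folds, predict the test fold, and score 1 - SS_res / SS_tot on unseen data."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(x)), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        slope, intercept = np.polyfit(x[train], y[train], 1)
        pred = slope * x[test] + intercept
        ss_res = np.sum((y[test] - pred) ** 2)
        ss_tot = np.sum((y[test] - np.mean(y[test])) ** 2)
        scores.append(1.0 - ss_res / ss_tot)
    return float(np.mean(scores))

# In-sample (non-cross-validated) R^2 for comparison.
r2_in_sample = np.corrcoef(bmi, gm)[0, 1] ** 2
r2_cv = cross_validated_r2(bmi, gm)
print(f"in-sample R^2:        {r2_in_sample:.4f}")
print(f"cross-validated R^2:  {r2_cv:.4f}")
```

Note that the cross-validated R² can be negative: when the linear effect estimated in the training folds does not generalize, the fold-wise predictions explain less variance in the test fold than its mean, which is exactly the "explained variance >0 in unseen data" generalization criterion used in the abstract.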
- Published
- 2023