Introduction

While publications of fMRI studies have flourished, it is increasingly recognized that progress in understanding human brain function requires the integration of data across studies through meta-analysis. In general, results that do not reach statistical significance are less likely to be published and hence less likely to be included in a meta-analysis. Meta-analyses of fMRI studies are prone to this publication bias when studies are excluded because they fail to show activation in specific regions. Furthermore, some studies report only a limited number of peak voxels that survive a statistical threshold, resulting in a substantial loss of information. Coordinate-based toolboxes have been developed specifically to combine the available information from such studies in a meta-analysis. Potential publication bias then stems from two sources: exclusion of entire studies and missing voxel information within studies. In this study, we focus on assessing the first source of bias in coordinate-based meta-analyses. A measure of publication bias indicates the degree to which an analysis might be distorted and thus aids the interpretation of its results.

We propose an adaptation of the Fail-Safe N (FSN; Rosenthal, 1979). The FSN reflects the number of null studies, i.e. studies without activation in a target region, that can be added to an existing meta-analysis without altering the result for that region. A large FSN indicates robustness of the effect against publication bias. Conversely, in this context, an FSN that is too large indicates that a small number of studies might drive the entire analysis.

Method

We simulated 1000 simplistic meta-analyses, each consisting of 3 studies with real activation in a target area (quadrant 1 in Figure 1) and up to 100 null studies with activation in the remaining 3 quadrants. We calculated the FSN as the number of null studies (with a maximum of 100) that can be added to the original meta-analysis of 3 studies without altering the results for the target area. Meta-analyses were conducted with ALE (Eickhoff et al., 2009, 2012; Turkeltaub et al., 2012). We computed the FSN using an uncorrected threshold (α = 0.001) and two versions of a False Discovery Rate (FDR) threshold (q = 0.05): FDR pID, which assumes independence or positive dependence between test statistics, and FDR pN, which makes no assumptions and is more conservative. We varied the average sample size n of the individual studies from small (n ≈ 10) to medium (n ≈ 20) and large (n ≈ 30).

Results

Results are summarised in Figure 2 and visually presented in Figure 3. We find a large difference in average FSN between the thresholding methods. With uncorrected thresholding, the target region remains labeled as active even when only 3% of the studies in the meta-analysis report activation at that location. Furthermore, the FSN decreases as the sample size of the individual studies in the meta-analysis increases.

Conclusions

The FSN varies considerably across thresholding methods and sample sizes. Uncorrected thresholding allows the analysis to be driven by a small number of studies and is therefore contraindicated. While a decreasing FSN with increasing sample size might seem counterintuitive in terms of robustness, it indicates that the analysis is less prone to being driven by a small number of studies. Publication bias assessment methods can be a valuable add-on to existing toolboxes for the interpretation of meta-analytic results.
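To make the search procedure concrete, the following is a minimal sketch, in Python, of the FSN computation described in the Method section. The callable target_active is hypothetical: it stands in for running the coordinate-based meta-analysis (here, ALE) with the chosen threshold and reporting whether the target area is labeled active; it is not part of any existing toolbox.

    def fail_safe_n(real_studies, null_studies, target_active, max_null=100):
        """Largest number of null studies (capped at max_null) that can be
        added to the meta-analysis while the target area stays active.

        `target_active` is a hypothetical callable: it should run the
        coordinate-based meta-analysis (e.g. ALE) on the given list of
        studies, apply the chosen threshold, and return True if the
        target area is labeled active.
        """
        for k in range(1, max_null + 1):
            # Add null studies one at a time; per the description above,
            # once the target area drops out, k - 1 is the FSN.
            if not target_active(real_studies + null_studies[:k]):
                return k - 1
        return max_null

In our simulations, this search is repeated for each of the 1000 simulated meta-analyses, and the resulting FSN is averaged per thresholding method and sample size.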
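The two FDR variants are commonly implemented as step-up procedures on the voxelwise p-values: pID corresponds to the Benjamini-Hochberg rule and pN to its Benjamini-Yekutieli correction for arbitrary dependence. The following self-contained sketch computes both cutoffs; the function name and interface are our own illustration, not taken from any toolbox.

    import numpy as np

    def fdr_cutoff(pvals, q=0.05, variant="pID"):
        """P-value cutoff controlling the FDR at level q.

        variant="pID": Benjamini-Hochberg step-up procedure, valid under
        independence or positive dependence between test statistics.
        variant="pN":  Benjamini-Yekutieli version, valid under arbitrary
        dependence; q is divided by c(m) = sum_{i=1}^{m} 1/i, which makes
        it more conservative.
        """
        p = np.sort(np.asarray(pvals, dtype=float).ravel())
        m = p.size
        c = 1.0 if variant == "pID" else np.sum(1.0 / np.arange(1, m + 1))
        # Step-up rule: largest p(i) with p(i) <= (i / m) * q / c.
        candidates = p[p <= np.arange(1, m + 1) / m * q / c]
        return candidates.max() if candidates.size else 0.0

Voxels with p-values at or below the returned cutoff are labeled active; a return value of 0.0 means that no voxel survives.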
In future work, we will extend our research to other methods for the assessment of publication bias, such as the Egger test (Egger et al., 1997) and the test for excess success (Francis, 2014).

References

Egger, M., Davey Smith, G., Schneider, M., and Minder, C. (1997), ‘Bias in meta-analysis detected by a simple, graphical test’, British Medical Journal, vol. 315, pp. 629–634.

Eickhoff, S.B., Laird, A.R., Grefkes, C., Wang, L.E., Zilles, K., and Fox, P.T. (2009), ‘Coordinate-based activation likelihood estimation meta-analysis of neuroimaging data: A random-effects approach based on empirical estimates of spatial uncertainty’, Human Brain Mapping, vol. 30, pp. 2907–2926.

Eickhoff, S.B., Bzdok, D., Laird, A.R., Kurth, F., and Fox, P.T. (2012), ‘Activation likelihood estimation revisited’, NeuroImage, vol. 59, pp. 2349–2361.

Francis, G. (2014), ‘The frequency of excess success for articles in Psychological Science’, Psychonomic Bulletin and Review, vol. 21, no. 5, pp. 1180–1187.

Rosenthal, R. (1979), ‘The file drawer problem and tolerance for null results’, Psychological Bulletin, vol. 86, no. 3, pp. 638–641.

Turkeltaub, P.E., Eickhoff, S.B., Laird, A.R., Fox, M., Wiener, M., and Fox, P. (2012), ‘Minimizing within-experiment and within-group effects in activation likelihood estimation meta-analyses’, Human Brain Mapping, vol. 33, pp. 1–13.