We thoroughly respect the recent contributions of Jennions and Møller to the appropriate use of meta-analysis in Ecology and Evolution (Møller and Jennions 2001; Jennions and Møller 2002a, b). Therefore, we are surprised to have to point out the ‘wood’ among the ‘trees’ of their own data. Our point (Kotiaho and Tomkins 2002) was simply to raise the question: given that only about 8.6% of studies report non-significant effects for the main hypothesis (Csada et al. 1996), can meta-analysis fail to find a significant overall effect size? Our conclusion was that where publication bias existed, it could not. Here we reply to the comments in Jennions et al.’s (this volume) ‘reply’ and support our original claim with evidence from a thorough empirical investigation of this subject by two of these authors (Jennions and Møller 2002a).

We illustrated our point with a hypothetical example in which a new hypothesis was proposed but was untrue (effect size zero). This hypothesis nevertheless attracted ten publications. We calculated that, with a publication bias causing nine out of ten publications to be significant, and all other things being equal (i.e. methodology and sample size), there would be a significant overall effect size.

Jennions et al. (this volume) make two points concerning the validity of our example. The first point is that if the hypothetical true effect size really was zero, then significant positive as well as significant negative effect sizes would be published among the nine significant studies. This would of course tend to render the calculated overall effect size non-significant, rather than significant as we calculated by assuming that all of the significant results fell in the same direction. Our assumption was not without foundation, however, as publication bias manifests itself not only as a bias towards significant results but also, and importantly, as a bias towards intuitive results. This has been demonstrated a number of times recently in the form of paradigm shifts and publication biases in Ecology and Evolution (Alatalo et al. 1997, Palmer 1999, 2000, Simmons et al. 1999, Poulin 2000, Jennions and Møller 2002a). Indeed, a directional bias is used as an indication of publication bias when examining funnel plots (Light and Pillemer 1984, Begg 1994).

Jennions et al.’s (this volume) second point is that, unlike in our hypothetical study, all things are not equal: in reality, the non-significant studies that are published are likely to have larger sample sizes than the published significant ones. This relationship between significance and sample size arises because publication bias favours the publication of small studies that are significant over small studies that are non-significant (Light and Pillemer 1984, Begg 1994). Hence, by giving all of our studies the same sample size we weighted them equally in our estimation of the true effect size, whereas in reality the nine significant studies would probably have had smaller sample sizes and therefore have been weighted less. Even so, in our example, if all of the studies that were just significant (Z = 1.96) had a sample size of 30, the study of zero effect (Z = 0) would require a sample size in excess of 1914 to make the overall mean weighted Fisher's z_r non-significant (using equation 4.12 on page 71 in Rosenthal 1991 and equations 18-3 and 18-8 on pages 265 and 268, respectively, in Shadish and Haddock 1994).
Furthermore, a study that was just significant in the opposite (counterintuitive) direction would require a sample size in excess of 143, more than four times that of the nine significant studies, to make the overall weighted Fisher's z_r non-significant. Hence the difference in sample size between the significant studies and those that were non-significant, or significant in the opposite direction, would have to be extreme to invalidate our example. Such extreme differences in sample size between significant and non-significant studies are likely to be uncommon.
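For readers who wish to verify the zero-effect figure, the short Python sketch below reproduces the calculation. It assumes Rosenthal's (1991) conversion r = Z/sqrt(n) for each just-significant study, and a fixed-effect combination in which each Fisher's z_r is weighted by n - 3, the inverse of its sampling variance (Shadish and Haddock 1994); the code and its function names are ours, for illustration only, and are not taken from either source.

import math

def fisher_z(r):
    # Fisher's z_r transform of a correlation coefficient r.
    return 0.5 * math.log((1 + r) / (1 - r))

def combined_z(studies):
    # Fixed-effect combination of Fisher's z_r values. Each study is a
    # (z_r, n) pair weighted by n - 3, the inverse of the sampling
    # variance of z_r (Shadish and Haddock 1994). Returns the standard
    # normal deviate testing whether the weighted mean z_r differs
    # from zero.
    weights = [n - 3 for _, n in studies]
    mean_zr = sum(w * zr for w, (zr, _) in zip(weights, studies)) / sum(weights)
    return mean_zr * math.sqrt(sum(weights))

# Nine just-significant studies: r = Z / sqrt(n), with Z = 1.96 and
# n = 30 (Rosenthal 1991).
nine = [(fisher_z(1.96 / math.sqrt(30)), 30)] * 9

# Find the smallest sample size of a single zero-effect study
# (z_r = 0) that pulls the combined test below the 5% level.
n = 4
while combined_z(nine + [(0.0, n)]) >= 1.96:
    n += 1
print(n)  # prints 1915, i.e. a sample size in excess of 1914

Because each study's weight grows with its sample size, a single null study can only swamp nine significant ones by being vastly larger, which is the point of the example above.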