Introduction

Quantitative indexing and evaluation is increasingly taken for granted within the scientific community. It is by now an established practice to evaluate researchers, departments, or proposals for research grants by relying on the "Journal Impact Factor" of their publication outlets. Accordingly, both the European Commission (directly (1)) and the British Research Assessment Exercise (indirectly (2)) rely on quantitative indexing to measure the quality of research output in economics. Individual academic careers, proposals for research grants, or the future of specific departments thus depend on the impact factors gathered by the particular researchers (cf. Lee and Eisner 2008). Somewhat surprisingly, this standard procedure does not provoke skepticism among researchers. On the contrary, many scientists seem to internalize the rules of the "ranking game" and try to succeed within the given set of institutional mechanisms:

"That scientists ... try to achieve as much impact-factor-capital as possible has, from my point of view, to be understood as a fundamental law ...." (Statement by an anonymous German medical scientist, cited in Dobusch (2009); translation JK)

This attitude is surprising for various reasons: First, it implicitly accepts the separation of content from the evaluation of academic texts, since impact-factor calculations, and the rankings based upon them, only count citations and are not directly concerned with the "intrinsic" quality of a given contribution. Second, the standard approaches to quantitative indexing, such as the indices provided by Thomson Scientific (TS), incorporate various biases and raise numerous associated problems; this general problem is rarely discussed in the economics community.
The relative discrimination of heterodox economics within such an evaluation process is, by contrast, a specific problem, only partially related to the general biases incorporated in the TS indices. These merely technical problems have to be understood as part of a larger debate about the journal culture in economics and other scientific disciplines. Since these topics are closely related, a few remarks on this debate may help to contextualize the arguments presented in this article. Generally, the journal culture in mainstream economics often relies on informal channels: "top" authors often do not even "submit" their "submissions" but hand them in privately (cf. Shepherd 1995). Many authors anticipate criticism and withhold or change arguments a priori to please editors or referees ("preference falsification"; see Davis 2004; Bedeian 2003). Heterodox submissions seem to be rejected, at least partially, because of their methodological or political orientation (Reardon 2008). It is for these reasons that 60 percent of North American economists agreed in a survey that "a 'good-old-boy' network in the profession influences the probability of article acceptance, expressing the same strength and consensus of opinion as for school or business affiliation" (Davis 2007). "Old boy" is apt in this context, since women are massively underrepresented on mainstream editorial boards (Green 1998). Moreover, there are documented cases of uncorrected errors in mainstream economics journals (cf. Jong-a-Pin and de Haan 2008), strengthening the impression that review processes and editorial decisions are arbitrary to some extent. This impression is also supported by the noteworthy number of "hot papers" in economics that were rejected by peer reviewers on their first attempt at publication (cf. Gans and Shepherd 1994).
Based on these considerations, the article is structured as follows: First, I review and discuss several drawbacks of the most important quantitative indexing and evaluation standard, Thomson Scientific's "Journal Impact Factor" (JIF), and the often perverse incentives related to this method of quality measurement (second and third sections). …