I. INTRODUCTION

Scholarly articles are the coin of the realm in modern academia; they influence researchers' career paths, salaries, and reputations (Cole and Cole 1967; Ellison 2013; Gibson, Anderson, and Tressler 2014; Smith and Eysenck 2002), as well as their departments' and universities' rankings (Hazelkorn 2011; Zimmermann 2013). The question of how to measure the impact or importance of an article is a topic of much debate (Meho 2007; Vucovich, Baker, and Smith 2008), but the appeal of having "unobtrusive measures that do not require the cooperation of a respondent and do not themselves contaminate the response (i.e., they are non-reactive)" (Smith 1981, 84) has made citation counts the de facto standard for measuring scholarly articles' impact.

The origin of citation counts dates back to Gross and Gross's (1927) seminal paper (1); since then, the adoption of citation counts as a tool for measuring articles' importance has been overwhelming. Technological advances have contributed to the popularity of this practice. As in many other fields, the availability and use of detailed data have grown exponentially since the onset of the "data revolution" (Einav and Levin 2014) and have been leveraged in recent years by automated citation indexing services such as CiteSeer (Giles, Bollacker, and Lawrence 1998) and Google Scholar (see Giles 2005), which collect large amounts of citation data and make them accessible to the general public free of charge. Currently, citation counts are used not only to measure the visibility, impact, and quality of articles but also to measure the performance of researchers, research laboratories, departments, academic journals and, to some extent, national science policies (e.g., Bayer and Folger 1966; Garfield 1972; King 2004; Narin 1976; Oppenheim 1995; Tijssen, Van Leeuwen, and Van Raan 2002).

The influence that publishing has on the careers of scholars (especially young ones) is clearly reflected in the old mantra "publish or perish." (2) One could argue that the use of citation counts to evaluate scientific output has caused this phrase to fall short of the mark. Today, it is not just about publishing; it is about high-impact publishing (i.e., publications with substantial citation counts).

Although the value of objectively quantifying the importance of academic papers is evident, a great deal of criticism has been leveled at the practice of naively using citation analysis to compare the impact of different scholarly articles without taking into account other factors that may affect citation patterns (see Bornmann and Daniel 2008). Among these criticisms, a recurrent one focuses on "field-dependent factors," that is, the fact that citation practices vary from one area of science to another (the focus generally being on differences in citation practices between the hard sciences and the social sciences). In some fields, recent literature is cited more frequently than in others (see, e.g., Peters and Van Raan 1994), and different fields may have structural characteristics that increase or decrease the probability of a paper being cited. (3) If these arguments hold true, they should be taken into account when the performance of researchers, journals, or institutions is assessed.

One good example of the relevance of field-dependent factors is how they can affect journals' impact factors. (4) If a given field tends to cite newer papers, journals that publish papers dealing with that field will clearly benefit in terms of their impact factors, and researchers who are encouraged to publish in high-impact journals will have an incentive to focus their studies on subjects in that field.
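To fix ideas, consider the standard two-year impact factor in the spirit of Garfield (1972); the notation below is ours and purely illustrative, and the studies cited here may rely on variants of this measure:

\[
\mathrm{IF}_{J,t} \;=\; \frac{C_{J,t}(t-1) + C_{J,t}(t-2)}{N_{J}(t-1) + N_{J}(t-2)},
\]

where $C_{J,t}(y)$ denotes the citations received in year $t$ by items that journal $J$ published in year $y$, and $N_{J}(y)$ is the number of citable items $J$ published in year $y$. Because only citations to the two most recent publication cohorts enter the numerator, a field whose citation practices favor recent papers mechanically inflates the impact factors of the journals that serve it.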
What about economics? Economics as a discipline has not been exempt from these trends. Economics journals' impact factors, economics departments' rankings, and tenure decisions are all influenced, to a greater or lesser degree, by the citation patterns of economics research articles (see, among others, Coupe 2003; Ellison 2013; Gibson, Anderson, and Tressler 2014; Hamermesh and Pfann 2012; Hamermesh, Johnson, and Weisbrod 1982; Hilmer, Hilmer, and Ransom 2012; Ruane and Tol 2008). …