“No man but a blockhead ever wrote except for money,” Samuel Johnson said. On the other hand, according to the Thomson Reuters’ Journal Citation Reports®, journals published a total of 11,291 orthopaedic articles in 2010, and most of the authors, we can safely assume, were neither blockheads nor paid for the piece. Was Johnson wrong? Not necessarily. One could argue that the authors were paid indirectly. Publication can lead to grant support or prompt a bonus from an academic department. Beyond that, there are nonpecuniary rewards, such as recognition in the media or, more valuably, the esteem of one’s colleagues. Even with those examples in mind, Johnson’s larger point — that writing requires incentives — still holds true. Therefore, it is reasonable to ask whether the incentives for academic writing are calibrated correctly. If the rewards are too meager, we face a deficit; if the rewards are too great, excesses abound.

There are data to suggest a surplus of academic writing. Of those 11,291 orthopaedic articles published in 2010, more than one-third had not been cited as of January 2013, according to the Thomson Reuters’ Journal Citation Reports®. There are, of course, other metrics of merit besides the number of times a paper is cited, but that so many papers came up empty suggests a problem. The low citation rates may reflect shortcomings of the peer-review process: either too many low-quality papers pass the filter of peer review, or reviewers fail to wring the high-quality essence from the manuscripts under consideration. And if the review process is coming up short, it is reasonable to circle back to Johnson’s point and ask whether the incentives for high-quality peer review are sufficient. Perhaps no man but a blockhead would ever write a high-quality peer review except for money (or other similarly powerful rewards).
Of course reviewers are not blockheads; still, I would argue that the incentives for quality peer review are inadequate to the task. Peer review is hard work. The reviewer must grapple with the soundness of the methods, the validity of the results, and the reasonableness of the conclusions. References must be checked, and cited papers may need consultation. Thereafter, the reviewer must write a dispassionate critique, with empathy for the writer and future readers alike. This can be terribly taxing. In return for this work, the reviewer gets … nothing. Well, not exactly nothing. Reviewers can expect the gratitude of the editor; the satisfaction of doing their civic duty as scholars; the power of shaping the medical literature (and thereby helping patients); and (one hopes) the inherent pleasure of a job well done. Still, that may not be enough.

One way to create incentives for greater participation and better reviews would be to allow reviewers to sign their work and to publish the review alongside the paper itself. This not only creates a system of accountability, it allows reviewers to earn a publication credit as well. If an entry on the curriculum vitae motivates authors, a similar incentive should motivate reviewers, too. Additionally, these pieces of commentary are likely to be appreciated by readers. In the end, though, a system of signed reviews is not apt to take root, for at least two reasons. First, taking away the veil of anonymity may take with it the brutal frankness the job requires. And if that were not enough (and I think it is), reviewers will not be “paid” with a publication unless and until the paper under review is published as well, and that creates a perverse incentive for a reviewer to support the acceptance of the marginal manuscript. Another way to create incentives for greater participation and better reviews is to grant peer reviewers continuing medical education (CME) credit for their work.
CME credit is perfectly appropriate to the task. The close reading of a manuscript is far more of an educational endeavor than passive listening in a lecture hall, to say nothing of web-surfing in a lecture hall or kibitzing in the hallway outside the lecture hall — and CME credit has been earned for all of those. (Don’t ask me how I know.) While providing CME credit may impose costs on the journal, it provides substantial value to the recipient, as most physicians are required to collect CME credits to maintain their state licenses. Along those lines, perhaps the American Board of Orthopaedic Surgery would be willing to deem a reviewer’s record as “Evidence of Professional Standing.” A new approach, in which CME credit is granted to peer reviewers, qualifies as what economists term a “Pareto improvement”: a change that helps some and harms no one.

With the advent of new publishing models such as open access journals and Internet-based publications, we can expect a wave of additional academic papers in the years to come. As such, the need for high-quality peer review will be even more acute. To help meet that rising demand, better incentives for peer review may be needed. Providing CME credit for this work is a worthwhile first step.