The world of professional philosophy runs on reviewing. I'll put letters of recommendation (for jobs, promotions and so on) to one side here, and focus on the reviews that journals use as a basis for editorial decisions. As I've pointed out elsewhere, a perverse incentive structure has made these worth very little; the low quality of reviews, written by busy people who are not rewarded for taking the time out to do a careful job, is reflected downstream in the low quality of editorial decisions, in the low quality of the contents of our journals, and, eventually, in the decisions (e.g., for tenuring faculty) that take journal publications as inputs.
It's easy to be moralistic, to blame the reviewers and the journals, and to act as though the solution is insisting that people not respond to their incentive structures. However, the source of the problem comes into focus when we think of the relevant units of agency as institutions -- first and foremost, universities. From the point of view of universities, reviewing is a free resource, which they harvest in order to make their own administrative machinery run -- if you like, it's a commons, much like the old-time shared pastures in which locals could put their cattle out to graze.
The shared resource has what we can think of as a natural carrying capacity; it's hard to pin down just what that is, but you can think of it as the amount of high-quality volunteer reviewing that wouldn't discourage reviewers from continuing to volunteer, and to work at the same quality level. As is typical of free resources, this one has come to be overused. It is to the advantage of each university, taken singly, to consume ever-greater quantities of the resource harvested from the reviewing commons; i.e., universities demand ever more in the way of publication from their faculty, which consumes reviewing. (Our focus here is on journal reviews, but the observation goes for other forms of reviewing as well; for instance, over the past few decades, there has been severe escalation both in how many tenure letters some universities require, and, for all universities, in how much work must be invested in such a letter.) But because the resource is public, no university takes steps to replenish it (i.e., by making it clear that its faculty are expected to produce high quality reviews for journals regularly, and by monitoring and enforcing that expectation). Eventually faculty respond to the incentive structure they live with; something has to give, and what we see is that the quality of assessments is sacrificed.
By way of illustrating this, I'm going to talk through an interaction I had with the editor of Mind, which I think confirms my diagnosis, and is a good indication of how deep the problem goes. A bit of background: my original plan had been to post examples of shoddy reviews, and a former student of mine, Joe Ulatowski, volunteered a couple of them. When I notified Mind that I intended to do this, I heard back from the editor, Thomas Baldwin. He made two arguments against my plan: first, that posting the reviews would help turn "a community whose members work together for the common good into a society in which actions are constrained by the need to minimise exposure to the risk of attacks," and second, that "it is often quite difficult to find reviewers of papers submitted to Mind, and the possibility that disgruntled authors or their champions... might publish their reports with the kind of critical commentary that you propose to add will be a significant further disincentive to acting as a reviewer." He further suggested that if I and others did this, he would have to take the step of ceasing to send reviewer reports along to authors.
I had asked if there were legal reasons not to post the reviews, and so I also heard back from Vanessa Lacey, at Oxford University Press. Her letter warned of legal action against me, the author, and our institutions, on the grounds that the reviews were confidential and under copyright. However, she also added remarks that, while required boilerplate, amount to a useful data point for us:
"I would say at the outset that we have complete confidence in the Editorial Board of Mind and in their selection of material for the journal. As the Editor says in his letter to the author, which was copied to you, less than 10% of articles submitted to the journal are accepted.
"The Editor also expanded in his letter to you on the benefits for an academic journal of a confidential peer review process and I would maintain that there is considerable public benefit in retaining any process that contributes to a high standard of academic publication."
These responses together pretty much tell us where we are in thinking about the problem. On the one hand, the official view is that the institutions are still working. The publisher points to the 10% acceptance rate -- although that's irrelevant as an indicator of quality if the reviews on the basis of which the decisions are made are no good. (In that case, it merely reflects how many manuscripts are submitted relative to the number the journal has room to publish.) The editor still thinks of the business he is in as "a community whose members work together for the common good." The editor implies that ceasing to send reviews to authors would be a loss, which presupposes that the reviews are still of value to the authors, which in turn presupposes that the reviews are still being carefully written. That is, the official representatives of this particular journal (but I take them to be representative of what you'd get from people in their roles at just about any journal) talk as though we were still living in an idyllic past, before the tragedy of the commons had effectively destroyed the shared resource.
On the other hand, the editor exhibits an awareness of the realities of today, which, as someone who runs a journal, he cannot but have. He emphasizes that it's hard enough to get reviewers already (to repeat my diagnosis, that's because no one has much of an incentive to do them, and because -- remember that 10% acceptance rate -- more and more academics, pressed by their institutions to publish, are swamping journals with manuscripts). He's convinced that the possibility that the community might monitor reviews for quality (in public but anonymously) would serve as a sufficient additional disincentive to make the presently scarce reviewers unavailable. In other words, reviewers currently don't have enough of an incentive to keep them doing it if it turns out they have to deal with a mechanism for making sure the reviewing is done responsibly: they would just opt out. And this is at a journal which is, in many ways, a best case; the position which Mind occupies in the food chain means that if any journal had access to properly incentivized reviewers, it would be this one.
One way or another, fixing the broken institutions, and restoring the quality of our journals, is going to require reviewing the reviewers, and having professional incentives that make that acceptable. The question -- and this is what has me stumped, and what makes me disinclined to try to bring down the temple, say, by starting a blog on which philosophers can post their journal reviews -- is how to get there from here. (Would that bring down the temple, as Baldwin suggests? Probably not, but it still may be worth trying, legal warnings notwithstanding, once we have a path to a viable alternative.) The real players in this game are institutions, and the universities are trapped in what looks a lot like a prisoner's dilemma. Each university, in each round, wins by consuming ever more of the public resource, and by not contributing to replenish it. Ask yourself: What would happen to any one institution that insisted that its faculty devote less effort to publication, and more effort to reviewing?
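The prisoner's-dilemma structure of the universities' situation can be made concrete with a toy payoff model. The numbers below are invented purely for illustration (nothing in the argument fixes them); all that matters is their ordering, which mirrors the incentives described above: free-riding on the reviewing commons beats contributing to it, whatever the other institution does, even though mutual restraint would leave everyone better off.

```python
# A toy two-university model of the reviewing commons, sketched as a
# prisoner's dilemma. "sustain" = require faculty to review carefully and
# moderate publication demands; "exploit" = demand maximal publication
# while contributing nothing to the reviewing commons.
# Payoff numbers are hypothetical; only their ordering matters.

PAYOFFS = {
    # (my strategy, other's strategy): my payoff in prestige/metric terms
    ("sustain", "sustain"): 3,   # commons stays healthy; both do well
    ("sustain", "exploit"): 0,   # I pay the costs, the other free-rides
    ("exploit", "sustain"): 5,   # I free-ride on the other's reviewing
    ("exploit", "exploit"): 1,   # commons collapses: roughly where we are
}

def best_response(other_strategy):
    """Return the strategy that maximizes my payoff, given the other's."""
    return max(["sustain", "exploit"],
               key=lambda mine: PAYOFFS[(mine, other_strategy)])

# Whatever the other university does, exploiting the commons pays better...
assert best_response("sustain") == "exploit"
assert best_response("exploit") == "exploit"
# ...even though mutual restraint beats mutual exploitation:
assert PAYOFFS[("sustain", "sustain")] > PAYOFFS[("exploit", "exploit")]
print("exploit strictly dominates; (exploit, exploit) is the equilibrium")
```

This is why moralizing at individual institutions gets no traction: under any payoffs ordered this way, the university that unilaterally switches to "sustain" simply loses ground on the metrics its peers are judged by.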
And the problem is even harder than that makes it sound, because universities are of course not individuals. The sorts of decisions we're supposing it would take to address the problem we've described would have to be made by career administrators, and they have their own incentive structure. These administrators, or this is my sense of it, are focused on how the departments they supervise are doing on professionally accepted metrics -- for instance, publication in ranked journals -- as compared with peer institutions. It's not their job to worry about whether those metrics have become devalued. So who within a university administration is likely to see the tragedy of the commons as their problem?