As part of our Publishing350 program to mark the anniversary of the world’s first scientific journal, Philosophical Transactions, we held a series of debates on the Future of Scholarly Scientific Communication. During Peer Review Week we reflect on the session specifically covering peer review. ‘Is peer review fit for purpose?’ was chaired by Dame Wendy Hall FRS and the debaters were Professor Georgina Mace FRS (for) and Dr Richard Smith (against).



Professor Georgina Mace FRS

Peer review evaluates findings for testability, reproducibility and interpretability. Given the impact of publications on researchers’ careers, this must be done fairly. It has served us well over a long period of time; it’s not perfect but it’s not broken either. It provides quality assurance for the reader and helps authors to improve their article. Journals, in turn, develop a level of credibility and status as a result of how well they do peer review.

Unfortunately, it doesn’t always live up to this ideal, owing to the great pressure placed on it as publishing volume grows enormously. Sometimes, problems blamed on the press are really the fault of how the scientist communicated the findings, and there are clear challenges with interdisciplinary papers.

Hasn’t peer review allowed us to create the foundation upon which modern science is based?

Journals may have wrongly rejected some classic research in the past, but hasn’t peer review allowed us to create the foundation of findings upon which modern science is based? One real problem is that as science has grown, the academies and learned societies have lost some of their reach and influence; they need to reassert themselves in the process to detect and manage misconduct. But none of these are reasons to reject peer review.




Dr Richard Smith

Peer review is faith-based (not evidence-based), slow, wasteful, ineffective, largely a lottery, easily abused, prone to bias, poor at detecting fraud, and irrelevant. In the age of the internet, we no longer need it – we should publish everything and let the world decide.

Peer review in journals only persists because of huge vested interests. In fact, the evidence points mostly to its detriments. Peer review does not detect errors. The level of inter-reviewer agreement is no better than chance. Peer review is anti-innovatory; there are many examples of ground-breaking work which were rejected when first presented. It’s costly, slow and time-consuming. It is poor at detecting fraud, since the experimental methods and findings described are usually taken on trust by reviewers.

In the age of the internet, we no longer need peer review – we should publish everything and let the world decide.

People under 40 produce the best reviews. If we are going to retain peer review, it should be wide open. Attempts to improve peer review, for example by training and blinding reviewers, do not seem to have produced any improvement. Career structures are the worst possible justification for peer review.

It’s extraordinary that universities and other institutions have effectively outsourced the fundamental process of deciding which of their academics are competent and which are not. We should at least subject peer review to proper studies and collect real evidence of its effectiveness.


What did we learn?


The session was chaired by Dame Wendy Hall FRS (centre)

Peer review may have value if only because the author is prepared to submit to it. It is often claimed that it helps authors by improving their articles, although preprint servers seem to do this well (arguably better?) through community peer review. Traditional peer review is perceived as biased against more original and innovative work that challenges existing orthodoxy, and reviews are often not carried out by the eminent expert originally selected, but passed on to other, less experienced group members.

With high rejection rates, it is impossible to be fair and decisions can be arbitrary. A move to so-called ‘objective’ peer review is gaining in support as it may be easier and more reliable to judge what is correct than what is original.

A key issue identified was the lack of evidence for the effectiveness of the various forms of peer review, and there was a call for more experiments in this area, for example collaborative peer review and post-publication peer review with good trackback mechanisms. Most researchers still believe in the principle of review by peers, but have concerns about its practical implementation. Abandoning journal peer review completely is a huge step and there is understandable reluctance to do so. Peer review has changed a great deal over time, and we need to really understand why it has persisted for so long before we get rid of it.

This is an edited extract from the full conference report.

Stuart Taylor is Publishing Director at the Royal Society.


One Response to “Peering at Review: is peer review fit for purpose?”

  1. Mike Taylor

    Having been lucky enough to attend these meetings, my impression was one of dismay at how little evidence there actually is to support traditional (pre-publication) peer-review: none at all, really, or at least none that was cited in the debate. By contrast, Richard Smith cited numerous rigorous experiments, all of which seemed to find very solid results showing that pre-publication peer-review simply does not do what it claims to do. (Many of these papers are cited in Smith’s 2010 paper “Classical peer review: an empty gun” in Breast Cancer Research 12(s4):S13, which is freely available.)

    My take is that the burden of proof is very much on those who wish to retain classical peer-review to demonstrate its value, not merely assert it. (The argument that I heard most against Smith’s position was, in effect, just this: “But, come on, you know, it’s peer review.”) At present, the true reasons for retaining the current system seem to come down to nostalgia (“it’s how things have always been”, which by the way is not true), blind faith (“everyone knows peer-review is the gold standard”) or a fear of acknowledging the sunk cost that we have all ploughed into the present peer-review system, a cost which we desperately want to believe is realised as value.

    Does peer-review actually do what it says? Does it provide anything like an objective evaluation of the work? Does it keep bad science out of the literature? Does it improve the quality of what does get published? We need to see evidence of these benefits, not just more assertions. Until we do, then the downsides that we know about (long delays, discrimination against authors with female names, perpetuation of an established elite, opportunities for misbehaviour) surely weigh heavily on the scales.

    So I have to conclude that, on the presently available evidence, pre-publication peer-review is not merely suboptimal, but fundamentally, foundationally broken. If that’s not so, then where are the studies supporting it?