I’m planning some renovations in the house, so I’m learning about party wall surveyors. Their role is to resolve disputes between neighbours. But, strikingly, no matter who appoints them or pays for their services, party wall surveyors do not act on behalf of either neighbour. Rather, they act “for the wall”.
Science, too, is a system of interlocking contributions that can be seriously undermined by mistakes and shoddy practice. Hence, surveying the research proposals and results produced by our fellow scientific builders is an important aspect of our work. As a mid-career UK academic, I accept dozens of peer review requests each year (and decline many others) and sit on at least one evaluation panel.
Scientists rarely formulate the purpose of their scrutiny with such clarity as party wall surveyors, but few of us would disagree that peer review is meant to be not for the author, the funder or the publisher considering their work, but “for the science” alone. Thus, in an ideal world, a peer reviewer is in essence a collaborator, serving to improve ideas and to correct results and conclusions.
I would argue, however, that this is increasingly not the main motivation for peer review. Instead, as science becomes more expensive and institutionalised, peer review increasingly serves the bureaucratic need to evaluate science, as a means of determining how scarce funding and positions should be distributed.
Resources for science will always be limited, and the ever-increasing deficit of secure academic jobs clearly reflects that – although it is also driven in part by the misguided notion that depriving people of job security makes them better scientists. These scarcities, in turn, motivate the creation of an artificial deficit in the “markers of esteem” that inform funding and appointment decisions: most notably, space in highly selective research journals such as Science and Nature.
The pool of available resources is determined by politicians, administrators and publishers, but there is nothing wrong with scientists getting involved in its distribution; as the Haldane principle states, research funding decisions are best left to scientists. However, it is not always easy to spot the difference between the needs of the market (resource distribution) and those of science itself when you have been conditioned to view and evaluate research as a controllable process of generating “deliverables”, whose value is known immediately (or even in advance), rather than as the messy and unpredictable foray into the unknown that it really is.
The uncomfortable question that needs asking is whether the deficit of resources in the system is a bigger problem for the science than any flaws in the reasoning, data or scientific productivity that we spot in the work under our review. If this is the case, agreeing to evaluate the work without challenging the status quo might do more harm than good. Errors still need to be corrected and bad science weeded out. But am I really acting “for the science” if I dutifully undermine the “excellence”, “novelty” and “impact” of a peer’s ideas and results, knowing full well that these metrics are rather disconnected from the true values of good science: creativity, reproducibility and integrity?
It is unlikely that the system can be disrupted through peer review alone. But small steps are still possible. Most importantly, I remind myself that, as a reviewer, it is in my power not just to critique, but also to advocate for my peers and their work.
If I show enthusiasm for a manuscript, rather than declaring it not “exciting enough” for a prestigious journal, I’ll give its junior lead author a chance to progress their career – and give science a chance to retain their talent. If I refuse to penalise a colleague’s productivity when their experimental approach took longer than a funding cycle allows, or when they spent time pursuing a risky but exciting hypothesis that did not survive validation, I’ll contribute to making science a more thorough, ambitious and honest enterprise.
And if I champion a grant proposal rather than meticulously listing all its minor flaws, I’ll make it harder to reject on technicalities. Even if it still does not get funded, my comments will highlight the huge number of solid proposals that cannot be pursued because of the unsustainably limited funding pool.
Of course, peer review cannot be all about advocacy. Striking the right balance is tricky, but, for lack of a better strategy, it can help to gauge where the purpose of a review falls on the spectrum between the needs of science and those of the market. I have also come to the conclusion that it is not too big a sin to err on the side of advocacy – particularly since, as a community, we currently tend to do the opposite.
The added bonus of advocating for fellow scientists and their science through peer review is that even if it doesn’t immediately lead to more funding and jobs, it will make academia a kinder and more positive place. And as positive environments boost creativity and promote healthy risk-taking, this amounts to acting for science.
The author has chosen to remain anonymous.