The peer review activities of academics in UK universities should be measured by the research excellence framework (REF) if the practice is to be taken seriously by scholars and managers, according to a senior academic.
Russell Foster, professor of circadian neuroscience at the University of Oxford, said that peer review faced “numerous challenges” because younger researchers felt, he believed, less of a duty to participate in the voluntary process, “perhaps in part because of increased working life pressures [and] there being less time for activities deemed extracurricular by institutions”.
A possible solution, he said, was for peer reviews – be they of journal articles, conference proposals or grant applications – to be measured as evidence of academic output within future REF assessments.
They would thus count towards departments’ overall scores in the exercise and, consequently, towards the amount of quality-related funding that they received.
“If everyone eligible were obligated to take part in, say, 10 reviews per academic year, and if that were to be taken into account as a reflection of the work being done by departments, then universities would have to take peer review more seriously and allow for it within staff timetabling,” Professor Foster said.
The suggestion comes amid growing concern about the effectiveness of peer review in an era in which academics have ever less time to participate in the practice.
An extensive study of almost 15,000 peer review reports conducted by academics at the University of California, San Francisco, in 2010 found that, contrary to popular belief, only 8 per cent of reviewers improved in their critical assessment over time. The remaining 92 per cent “deteriorated…in the quality and usefulness of their reviews” as judged by editors.
The paper’s authors blamed “competing career activities and loss of motivation” for the decline.
Philip Moriarty, professor of physics at the University of Nottingham, said that recognising peer review activity in the REF was “in principle something I’d enthusiastically endorse”.
“Publishers get a massive subsidy from the public purse because the majority of peer reviewers are not paid, and so any recognition of the amount of time peer review takes up is a very good thing,” Professor Moriarty said. “Again, however, the devil is in the detail. Would it be the quantity or quality of peer review that is ‘measured’ in the REF? If the latter, how is that judged, especially when the majority of peer review remains anonymous?”
David Sweeney, executive chair of Research England, which oversees the REF, said that while he agreed that universities should be encouraged to incentivise their researchers to undertake peer review, “the REF is not about measuring the work of individuals”.
“I think research is not about having your time boxed to do things; it’s about having a series of incentives which…encourage good collegiate behaviour,” he continued. “That’s better done by encouragement of a process that applies to groups of academics as a whole rather than measuring the work of individuals.” This, he said, was something that should already be covered within the environment section of the assessment.
“The idea that the only thing that will trigger researcher behaviour is being measured in some way in the REF makes it sound like academics are guns for hire who will only act on the provision of a large bounty to fire their weapon,” Mr Sweeney added.