As a sociologist and adviser to the Government on research evaluation and research policy, I am no stranger to heated discussions about assessing research quality. As a British academic in Australia, I avidly followed the dissection of the outcome of the 2008 research assessment exercise in Times Higher Education. But I am disturbed by the recent backlash against peer review.
While the disappointed and disaffected vent their frustration and extol the virtues of a metrics-only research excellence framework, care must be taken not to throw out the baby and keep the bathwater.
Much debate has been prompted by a Times Higher Education report on a paper from a symposium on assessing research quality ("A person on the panel is a clear pointer of departmental success", 19 March). In that paper, published in Political Studies Review, Linda Butler and Ian McAllister argue that having a department member on the political science panel in the 2001 RAE was the second most important factor in determining outcomes. They believe that "objective" metrics such as citations can replace "subjective" peer review, a conclusion contested by four other symposium papers (including my own).
A moral panic ensued as many Times Higher Education readers took the story out of context and projected it on to the 2008 RAE results, claiming that peer review was inherently corrupt. This line of argument could be dangerous for the sector.
First, it plays into the hands of the metrics-only lobby, who argue that research evaluation metrics are transparent, neutral and not socially constructed. Yet we know that citation metrics are not actual measures of research quality, but are the sum of subjective decisions to cite or not to cite. Moreover, citations may be positive, negative or merely totemic.
Citation counts also vary between data providers, so the same exercise can yield different results depending on the source. Ultimately, numbers are downloaded and fed into a "black box", and rankings are produced. Those who are hostile to peer review tend to naively accept such bibliometric results at face value.
Second, in recent years the higher education sector has lobbied against a metrics-only REF. This prompted a detailed government review that highlighted the many limitations of metrics and made a commitment to retain "light-touch" peer review. A renewed call for a metrics-based REF will simply not be credible.
We must ensure that light-touch peer review follows best bibliometric practice and involves reading and assessing our publications, not merely gathering peers together to pore over spreadsheets and produce rankings based on citations and other data. Metrics and peer review should act as a check on each other.
The third danger is incipient support for an audit culture that leads to a Gradgrinding of university departments. In Charles Dickens' Hard Times, a horse is famously defined as:
"'Quadruped. Graminivorous. Forty teeth, namely twenty-four grinders, four eye-teeth, and twelve incisive. Sheds coat in the spring; in marshy countries sheds hoofs, too. Hoofs hard, but requiring to be shod with iron. Age known by marks in mouth.' Thus (and much more) ... 'Now girl number twenty,' said Mr. Gradgrind. 'You know what a horse is.'"
A horse is, of course, more than this. And the research culture of a department is more than a sum of various quantitative measures. While these "facts" may provide useful data, additional qualitative description is essential to provide a clear and fair picture.
The fourth danger, and the most worrying, is that support for metrics-only assessment further Gradgrinds us by eroding the value of academic expertise. To view peer review with suspicion is to question the integrity of the whole academic community: if we cannot be trusted to judge ourselves, how can we provide a disinterested analysis of the outside world? This argument presages a future in which academic engagement in policy-relevant research is restricted to numerical "facts" devoid of interpretation, narrowing both our aspirations and the horizons of our inquiry.
These are all compelling reasons why we should keep both the baby and the bathwater, and retain peer review.
Reflecting on the outcomes of research assessment, some colleagues have observed that we "someday have to decide whether it is worse to have (our) hearts broken qualitatively or quantitatively". But it is not simply a question of peer review versus metrics - best scientometric practice combines both. In the end, it is better for the REF to allow university departments to be more than a sum of their parts and to have our hearts broken both ways.