Placing a paper in a high-impact journal can tilt hiring and promotion decisions, but a large-scale study has found a link between such outlets and lower-quality work.
The analysis compared statistical errors in just over 50,000 behavioural and brain sciences articles, along with the findings of replication studies, against journal impact factors and article-level citation counts.
It found that articles in journals with higher impact factors tended to have lower-quality statistical evidence to support their claims and that their findings were less likely to be replicated by others.
Although crunching the data could not show the mechanism behind the effect, the authors say the analysis further undermines the use of bibliometrics as a measure of research quality.
The reputation of high-end journals is often taken as confirmation that the work they publish is not only new and important for other fields of science, but also that the statistical tests used are correct.
“Not only do you want them to be innovative, you want the quality of the evidence to be stronger,” Zachary Horne, one of the authors of the study, told Times Higher Education. “You don’t see that – you actually see the relationship very weakly in the opposite direction.”
Dr Horne, a psychology lecturer at the University of Edinburgh, said the analysis had implications for wider debates around research assessment.
“Administrators and people evaluating science might want to pay more attention to representativeness, sample size, the paper having few errors,” he said, as opposed to falling back on the shine of familiar journal titles.
Previous research has shown that citation-counting can perpetuate long-standing career inequalities because citation habits often disadvantage women and those from under-represented groups.
In their paper, published in Royal Society Open Science on 17 August, Dr Horne and his co-author Michael Dougherty, a psychologist at the University of Maryland, say their findings also show that the misuse of impact factors and citation counts could ultimately promote and encourage bad science.
Although there are now many who spurn the use of impact factors for judging papers or their authors, Dr Horne said such researchers were still working within a system that reaches for bibliometrics by default.
“Folks I know who are really aware that these are not necessarily indicators of quality are more open to deviating by hiring somebody who doesn’t have papers in those venues,” he said.
Pushback against the “prestige economy” of academic journals has continued to grow in recent years. A European Union-backed agreement on research assessment bars signatories from using impact factors in personnel decisions and requires them to come up with plans for alternative approaches.
POSTSCRIPT:
Print headline: High impact and low quality linked