Papers in high-impact journals ‘have more statistical errors’

Analysis of more than 50,000 behavioural and brain sciences articles also suggests their results are less likely to be replicated by others

August 17, 2022
Flashy but flawed: findings in highly cited titles were less often replicated (Source: Getty)

Placing a paper in a high-impact journal can tilt hiring and promotion decisions, but a large-scale study has found a link between such outlets and lower-quality work.

The analysis compared statistical errors in just over 50,000 behavioural and brain sciences articles, along with the findings of replication studies, against journal impact factors and article-level citation counts.

It found that articles in journals with higher impact factors tended to have lower-quality statistical evidence to support their claims and that their findings were less likely to be replicated by others.

Although crunching the data could not show the mechanism behind the effect, the authors say the analysis further undermines the use of bibliometrics as a measure of research quality.

The reputation of high-end journals is often taken as confirmation that the work they publish is not only new and important for other fields of science, but also that the statistical tests used are correct.

“Not only do you want them to be innovative, you want the quality of the evidence to be stronger,” Zachary Horne, one of the authors of the study, told Times Higher Education. “You don’t see that – you actually see the relationship very weakly in the opposite direction.”

Dr Horne, a psychology lecturer at the University of Edinburgh, said the analysis had implications for wider debates around research assessment.

“Administrators and people evaluating science might want to pay more attention to representativeness, sample size, the paper having few errors,” he said, as opposed to falling back on the shine of familiar journal titles.

Previous research has shown that citation-counting can perpetuate long-standing career inequalities because citation habits often disadvantage women and those from under-represented groups.

In their paper, published in Royal Society Open Science on 17 August, Dr Horne and his co-author Michael Dougherty, a psychologist at the University of Maryland, say their findings also show that the misuse of impact factors and citation counts could ultimately promote and encourage bad science.

Although there are now many who spurn the use of impact factors for judging papers or their authors, Dr Horne said they were working within a system that reaches for bibliometrics by default.

“Folks I know who are really aware that these are not necessarily indicators of quality are more open to deviating by hiring somebody who doesn’t have papers in those venues,” he said.

Pushback against the “prestige economy” of academic journals has continued to grow in recent years. A European Union-backed agreement on research assessment bars signatories from using impact factors in personnel decisions and requires them to come up with plans for alternative approaches.

ben.upton@timeshighereducation.com

POSTSCRIPT:

Print headline: High impact and low quality linked
