The dubious practices that plague psychological research are equally prevalent in some natural sciences, suggesting that the “reproducibility crisis” may not be limited to psychology and biomedicine.
An Australian-led study has found that dodgy tricks of the trade – such as cherry-picking data, ignoring unwanted results and touting unexpected findings as though they had been predicted all along – are widespread in ecological and evolutionary research.
The findings reinforce fears that a “publish or perish” culture and a publication bias towards positive results are undermining research integrity and planting false leads in the scientific literature.
Lead author Hannah Fraser said that activities once considered research transgressions were becoming standard practice. While the consequences were not as “problematic” as in biomedicine – where billions of dollars have been squandered on fruitless clinical trials – she said that questionable ecological research left its own trail of waste, with research funds frittered away on “dead ends”.
“People spend a lot of time chasing their tails,” said Dr Fraser, a University of Melbourne ecologist. The problem tainted science’s overall credibility, giving policymakers a “neat excuse” to ignore inconvenient findings.
The study, reported in the journal PLOS ONE, was based on a confidential survey of more than 800 researchers who had published in 21 ecology and evolution journals. It investigated activities that lie outside acceptable scientific procedure but mostly fall short of outright fraud.
Respondents assessed the prevalence of such behaviour among peers and confessed how frequently they themselves had transgressed. While similar studies have been conducted into psychology research, this is thought to be the first to focus on ecology and evolution.
Almost two in three respondents confessed to having nudged their findings into statistical significance by reporting some results and ignoring others. More than two in five said that they had collected additional data after finding that the initially planned observations had failed to deliver statistically significant results – an activity known as data dredging or “p-hacking”.
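To see why topping up a sample after peeking at the results inflates error rates, consider a minimal simulation – an illustration, not anything from the study, with the sample sizes, number of “top-ups” and use of scipy’s t-test all chosen arbitrarily. Two groups are drawn from the same distribution, so any “significant” difference is spurious; a fixed-size experiment is fooled about 5 per cent of the time, while the peek-and-extend strategy is fooled noticeably more often.

```python
# Minimal sketch (illustrative, not from Fraser et al.): two groups drawn
# from the SAME distribution, so every "significant" result is a false
# positive. Peeking and adding data whenever p >= 0.05 inflates that rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_SIMS = 5000      # simulated experiments
N_INITIAL = 20     # planned sample size per group
N_EXTRA = 10       # observations added per group at each "top-up"
MAX_TOPUPS = 3     # how many times the researcher peeks and extends
ALPHA = 0.05

def false_positive_rate(peek: bool) -> float:
    hits = 0
    for _ in range(N_SIMS):
        a = rng.normal(size=N_INITIAL)
        b = rng.normal(size=N_INITIAL)   # no true difference between groups
        _, p = stats.ttest_ind(a, b)
        topups = 0
        while peek and p >= ALPHA and topups < MAX_TOPUPS:
            a = np.concatenate([a, rng.normal(size=N_EXTRA)])
            b = np.concatenate([b, rng.normal(size=N_EXTRA)])
            _, p = stats.ttest_ind(a, b)
            topups += 1
        hits += p < ALPHA
    return hits / N_SIMS

print(f"fixed sample:    {false_positive_rate(peek=False):.3f}")  # about 0.05
print(f"peek and extend: {false_positive_rate(peek=True):.3f}")   # noticeably higher
```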
Just over half acknowledged having presented unexpected findings as though they had been hypothesised from the start – a practice dubbed “hypothesising after the results are known”, or “HARKing”.
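A back-of-envelope calculation – again an illustration rather than a figure from the paper – shows why HARKing misleads: if a dataset is trawled for k independent patterns where no real effect exists, the chance that at least one clears the conventional p < 0.05 threshold by luck alone is 1 − 0.95^k, which rises steeply with k.

```python
# Hypothetical illustration: probability that at least one of k independent
# true-null tests comes out "significant" at alpha = 0.05.
for k in (1, 5, 10, 20):
    print(f"{k:2d} tests -> {1 - 0.95 ** k:.2f}")  # 0.05, 0.23, 0.40, 0.64
```

Reported honestly as exploration, such a pattern is a lead to be tested; presented as an a priori hypothesis, it masquerades as confirmatory evidence.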
The frequencies of questionable practices were similar to those uncovered in two recent studies of psychological research, contradicting the squeaky-clean self-image of the natural sciences. “Ecologists are likely to say, ‘there’s a problem in psychology, but we do things differently’,” Dr Fraser said.
Some respondents whitewashed their behaviour, with one blaming HARKing on editors’ demands for exploratory results to be “framed as a priori hypotheses”. Another justified rounding off figures to achieve statistical significance, accusing researchers who reported their data precisely of “adherence to conventional practice over understanding of probability”.
Dr Fraser said that she herself had done “a whole lot of questionable things” when she tweaked findings from a master’s project to make them more “palatable” after reviewers from several journals had criticised her manuscript.
“Looking back, it was essentially a big p-hacking experience,” she said. “It didn’t feel like it because I was just responding to reviewer comments. I suspect that kind of thing happens a lot.”