The more media coverage that an academic paper gets, the less likely its findings are to be successfully replicated, according to a study focused on psychology scholarship.
The study, published in the Proceedings of the National Academy of Sciences on 30 January, says that ideally “media should cover credible and rigorous research. Yet in reality, the mainstream media tends to highlight research that finds surprising, counterintuitive results.”
“Media attention and replication success are negatively correlated,” it concludes.
Building on previous, smaller studies, which found that more surprising findings were less likely to replicate, the research examined more than 14,000 psychology papers – covering nearly every paper published over the past 20 years across six subfields in six top-tier journals.
Academics at Northwestern University, the University of Notre Dame and UCL developed a machine learning model that predicted the likelihood of replication based on a paper’s text. Its accuracy was found to be on a par with “prediction markets”, in which academics estimate a paper’s replicability – a method that has itself been shown to be a good match for manual replication tests.
“The media plays a significant role in creating the public’s image of science and democratising knowledge, but it is often incentivised to report on counterintuitive and eye-catching results,” write Brian Uzzi, Wu Youyou and Yang Yang.
“Deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”
The results also showed that the prestige of an author’s university and a paper’s citation rate have no association with replicability in psychology.
However, it did find that replicability is positively correlated with researchers’ experience and competence, based on their publication and citation record.
The study also found that experimental work replicated at significantly lower rates than non-experimental methods, which it described as “worrisome” given that psychology’s scientific reputation “is built, in part, on its proficiency with experiments”.
Manual replications are critical but too expensive to conduct on a large scale, said Brian Uzzi, a professor of leadership at Northwestern.
“Our work provides the first population-level replication study across subfields of psychology, methods, journals and authorship characteristics that promotes better manual replication and science policy strategies,” he said.
Last year, a study found that manual replication studies in psychology were more likely to be successful when a member of the original research team took part, raising concerns about possible distorting effects.