Peer review before research tied to better science

Registered reports model produces better outcomes on all 19 criteria covering novelty, rigour, and importance

June 24, 2021

A peer review model in which journals assess scientific proposals before the research is carried out has been linked to published articles that score better on a wide variety of performance metrics.

The “registered reports” method, which has spread to nearly 300 journals in less than a decade, produced better outcomes on all 19 metrics of value and quality measured in a study by the Center for Open Science.

“Registered reports were rated on average higher across every single criterion that we measured,” said Brian Nosek, a co-founder and director of the Center for Open Science, and co-author of the study published in Nature Human Behaviour assessing the publication model.

In traditional peer review, a scientist submits a manuscript to a journal for expert assessment after the research project is completed. With the registered reports structure, the experts assess the scientist’s proposal before the research work begins.

Previous studies of the registered reports model have affirmed its major expected advantage – that it reduces the bias in the scientific record arising from the tendency of journals to publish only results that affirm a researcher’s initial hypothesis.

Past analyses have also shown that research published through the registered reports model earns journal citations at levels similar to those of articles approved through traditional methods.

To expand those tests into areas of quality and innovation, Professor Nosek and his colleagues arranged for a group of 353 experts to evaluate research projects reflected in 86 published articles, including 29 that came through a registered reports model.

The other 57 articles, selected for comparison, were a roughly even mix of articles by the same lead author and articles on similar topics published in the same journal around the same time.

The 353 experts were asked to make their evaluations of the research projects at three different points – before and after the study outcomes were known, and after the paper was published.

On all 19 assessment metrics – including rigour of methods and analysis; quality of questions, discussion and results; and creativity, innovation and importance – the registered reports model performed better, Professor Nosek’s team reported.

Yet the format has struggled to gain acceptance. At least 295 journals have adopted the model since its initial use in 2013, primarily in neuroscience and the social-behavioural sciences. But even at those journals, the model accounts for a small minority of their articles, Professor Nosek said.

“I don’t think there is resistance, but it is novel,” he said.

“You have to think, ‘Oh, I’m going to submit something to the journal before I’ve even done it,’ and it really changes the workflow of how you think about your research,” said Professor Nosek, a professor of psychology at the University of Virginia who has been on leave for eight years to run the Center for Open Science.

“The people who do it really love it,” he said. “But doing the first one is a big effort in terms of changing your mindset about how research is done and how you report it.”

Kara Moore, an assistant professor of psychology at Oklahoma State University who has published using the registered reports model, has seen that dynamic at first hand.

“People and systems are difficult to change,” she said. “Academics are overworked, which leaves little room to try new things and to enact change,” especially before “there is widespread acceptance of its value as equal to or better than the traditional publication”.

Universities could help push that acceptance along, Professor Nosek said, by talking more about the model and its benefits, and perhaps by requiring it for some postgraduate theses.

Once researchers try it, he said, they will realise that it frees them to concentrate on the scientific process rather than worry about whether journals will judge their results publishable.

“The results are the results,” he said, “and what it frees people to do is take very seriously doing the science, not to get certain outcomes.”

paul.basken@timeshighereducation.com
