Which nations have surged forward on research productivity?

Analysis of rankings data suggests that Australia has made up ground, without any apparent cost to research quality

November 28, 2018

“Publish or perish” is a mantra that, depending on your point of view, is either responsible for many of the perceived modern ills of academia or a positive driving force.

But among the major research nations, where has a push for academics to publish more been felt the most in recent years? And how do those productivity gains compare with changes in research quality?

One of the 13 metrics behind Times Higher Education’s World University Rankings attempts to directly measure the productivity of researchers in each institution, and examining those data at the national level from the past few years gives some interesting results.

For instance, the data suggest that Australia’s universities have seen some of the biggest productivity increases relative to other nations. Of countries with at least 10 universities in the 2016 and 2019 editions of the ranking, Australia had one of the largest leaps in the average score for papers per staff and is now second only to the Netherlands on the metric.
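
To illustrate the kind of aggregation involved, a minimal pandas sketch might look as follows. The file name and the "country", "edition" and "papers_per_staff_score" columns are assumptions for illustration, not THE's actual data schema.

    import pandas as pd

    # Hypothetical export of the rankings data; all names here are assumptions
    df = pd.read_csv("the_rankings.csv")

    # Keep only countries with at least 10 ranked universities in both editions
    counts = df.groupby(["country", "edition"]).size().unstack(fill_value=0)
    eligible = counts[(counts[2016] >= 10) & (counts[2019] >= 10)].index

    # National average papers-per-staff score per edition, and the change between them
    national = (
        df[df["country"].isin(eligible)]
        .groupby(["country", "edition"])["papers_per_staff_score"]
        .mean()
        .unstack("edition")
    )
    national["change"] = national[2019] - national[2016]
    print(national.sort_values("change", ascending=False).head())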

Accounting for the fact that the Netherlands’ universities are almost all in the top 200 provides even better news for Australia: its top 10 universities now achieve the best score for average papers per staff among leading research nations, overtaking the UK as well as the Netherlands since 2016.
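
The top-10 comparison can be sketched in the same way, again under the assumed schema, by restricting each country to its 10 best-scoring universities before averaging:

    import pandas as pd

    df = pd.read_csv("the_rankings.csv")  # same hypothetical export as above

    # Average the papers-per-staff score over each country's 10 best universities
    top10_avg = (
        df[df["edition"] == 2019]
        .groupby("country")["papers_per_staff_score"]
        .nlargest(10)
        .groupby(level="country")
        .mean()
        .sort_values(ascending=False)
    )
    print(top10_avg.head())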

Looking at the figures in the context of overall research output also suggests that in some countries, such as China, a rapid increase in research publication has not yet been accompanied by large productivity gains.

So are national policies behind some of these trends?

In Australia, there have been clear policy incentives in the past decade to boost productivity. The most obvious is that until 2017, block grant funding to support research in Australian universities was determined in part by the amount of research published.

However, a review published in 2016 led to this element of the funding calculation being removed and – alongside the evolving Excellence in Research for Australia assessment – there now appears to be a drive directed more towards quality than quantity.

“Tying funds initially to research income and to publications while largely holding the funding steady put universities in the position of having to improve to maintain funding levels – or risk other universities doing better and attracting a higher proportion of funding,” said Conor King, executive director of Innovative Research Universities, which highlighted Australia’s productivity surge in a recent submission to a parliamentary inquiry on research funding.

“The publication factor was the easiest for academics to influence and [it] quickly rose – hence it has now been removed from the funding formula, its purpose achieved.”

Mr King added that the ERA’s focus on research quality had now “helped balance sheer output with consideration for its value”, but a current squeeze on block funding raised the question of whether increases in research output would now stall.  

“It is a live question how much the government can squeeze the base resources [needed] to employ researchers…while looking for greater output, and in particular targeting all new funds to specific projects, expecting the base university capability to provide half or more of the actual expenditure required.”

To judge by rankings data on the citation impact of research, it is on quality, rather than productivity, that Australia still has a little ground to make up on other nations.

However, its push to increase research volume does not appear to have done citation impact any harm, whereas in other countries, such as France, quality gains do not appear to have kept pace with rising productivity.

And in some nations – most notably Russia – quality appears to have declined on average as productivity has increased (although the effect of the rankings expanding from 2016 to 2019 may be a factor here and in some other countries).
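
One way to surface these diverging patterns, again under the assumed schema plus a hypothetical "citations_score" column, is to compare each country's change on both metrics between the two editions:

    import pandas as pd

    df = pd.read_csv("the_rankings.csv")  # same hypothetical export as above

    # Mean score per country and edition for both metrics
    both = (
        df.groupby(["country", "edition"])[["papers_per_staff_score", "citations_score"]]
        .mean()
        .unstack("edition")
    )
    change = both.xs(2019, axis=1, level="edition") - both.xs(2016, axis=1, level="edition")
    change.columns = ["productivity_change", "citation_change"]

    # Countries where productivity rose while citation impact fell
    print(change[(change["productivity_change"] > 0) & (change["citation_change"] < 0)])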

By and large, however, in the most developed research nations, productivity gains appear to go hand in hand with improvements in citation impact. So does this mean that national policies and evaluations such as the ERA or the research excellence framework in the UK are becoming better at influencing both?

Sergey Kolesnikov, a postdoctoral researcher at Arizona State University’s Center for Organization Research and Design – who has co-authored research on the relationship between productivity and research impact – said that in his view, the better assessment programmes sought not to concentrate too much on one over the other.

“I think that excessive focus either on productivity or quality is equally harmful, especially if the evaluation system is based on a small number of simplistic quantitative indicators, because any indicator is just a poor proxy for a real-world complex phenomenon it measures,” he said.

The problems of measuring productivity “were well known”, he added, but “simple measures” of quality “such as journal rankings or journal impact factors have all sorts of negative impacts, too”.

“So, the movement in contemporary evaluation systems that recognises these problems is not just a shift of emphasis from productivity to quality, but rather a move away from simplistic metrics of one or the other towards more systematic assessment that combines various context-based quantitative measures with qualitative assessment and peer review.”

He said that the evolution of policy in Australia over the past decade might be an example of this, and he also highlighted recent updates to the Wellcome Trust’s open-access policy that emphasised assessing research on the “intrinsic merit of the work, not the title of the journal or publisher”.

This “strong push towards more responsible research evaluation practices within higher education institutions” was also, hopefully, “a sign of future changes on a nationwide level”, Dr Kolesnikov said.

So the hope among researchers themselves might be that future gains in both the productivity and the quality of research will be a by-product of sophisticated approaches to research assessment rather than direct attempts to influence them.

simon.baker@timeshighereducation.com

Reader's comments (2)

Would be interesting to know how you define "staff"? The denominator in the results may have a large impact on the scores.
An interesting commentary, but somewhat misleading to refer to a surge of research productivity since a two year comparison is insufficient to evaluate such changes. Top universities and countries have built up their performances steadily over time and have not surged. Now partially retired in Brazil, I worked in Dutch academia for 28 years and personally witnessed how funding organizations, universities and Dutch government worked closely together to ensure high productivity in both teaching and research. To see that the Dutch have the highest publication and citations production in the "Average papers per staff score and average citations" graph, and that Brazil has one of the lowest comes as no surprise to me. There are several intrinsic reasons for these differences deserving further study. However, the current HE University Ranking policy of making no distinction between undergraduate and postgraduate students and lumping all students together is counter-productive; it is the postgraduate students and their immediate supervisors that provide the primary publication workforce and not all faculty, many of whom are mainly engaged in teaching.
