The World University Rankings will be published at 21:00 (BST) on 21 September
Sometimes these blog posts feel like an annual report. What has changed? What did we learn in the past year? What will happen in the future?
Well, after the big updates to data last year, when Times Higher Education moved its rankings data collection and analysis in-house, things seem much more straightforward this time – at least in terms of writing about it.
So what is different?
Overall, the rankings methodology is largely unchanged: we are using the same 13 performance indicators, covering all of a university’s core missions, with the same weightings and the same data sources as last year. But there have been some important improvements.
The first difference is simply one of scope. Once more we have expanded the number of universities that are ranked in the World University Rankings: we are up from 801 last year to 980 this time.
Each university in the World University Rankings truly deserves to be there, and every year we see the quality of universities improve. This, of course, has an impact on the overall scores of all universities as the ranges used in the Z-scoring of data change.
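To make that concrete, here is a minimal sketch of plain Z-scoring in Python. It is illustrative only – the production calculation behind the rankings is more involved – but it shows why a larger field shifts everyone’s standardised scores: the mean and spread of each indicator change.

```python
import statistics

def z_scores(values):
    """Standardise raw indicator values: z = (x - mean) / stdev."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [(x - mean) / stdev for x in values]

# Adding new institutions changes the mean and spread, so existing
# universities' standardised scores shift even if their raw data do not.
print(z_scores([55, 60, 70, 90]))
print(z_scores([55, 60, 70, 90, 40, 45]))  # wider field, new scores
```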
We have also made two very slight changes to the eligibility criteria this year. First, the inclusion of more than 500,000 book chapters and books in our analysis of over 11 million research publications means that it is slightly more likely that universities will reach our inclusion threshold of at least 1,000 research publications over five years. Second, we are being more relaxed about the minimum number of outputs per year – we will now allow participants to have as few as 150 papers in a year as long as they hit the overall 1,000 publication threshold.
These two measures provide additional support for universities with a stronger arts and humanities focus, where overall publication outputs may be lower, and also for younger universities. There are a few institutions out there that have made huge strides in terms of their research output in recent years, and we wanted to be able to reflect this.
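As a quick illustration, the two thresholds described above might be checked like this (a sketch only; the function name and input format are mine):

```python
def meets_publication_threshold(papers_per_year):
    """Eligibility sketch: at least 1,000 publications across the
    five-year window, and no single year below 150 papers."""
    return sum(papers_per_year) >= 1000 and min(papers_per_year) >= 150

# A young university with fast-growing output now qualifies:
print(meets_publication_threshold([150, 160, 200, 240, 280]))  # True
```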
However, despite these changes, the majority of the new institutions are included simply because we have made further efforts to collect institutional data from a wider range of institutions – most notably in Asia and in Latin America.
Bibliometrics
As always, we are looking to a five-year window for our bibliometric data – this year from 2011 to 2015 inclusive. Since last year, we have made some changes that we hope will make the metrics even more useful.
In all our bibliometric work, we are indebted to the help of our partner, Elsevier, which has spent days working with us to improve the measures.
As I mentioned above, we have now added books to the articles, reviews and conference proceedings we were already assessing. Elsevier has also done extensive work on journals that have been suspended from its research publication database, Scopus, for inappropriate publishing behaviour – an ongoing task that ensures the papers we measure really do represent good-quality research, and that papers from suspended journals are not counted.
The final area of change in the bibliometrics is in the area of kilo-author papers. Last year we excluded a small number of papers with more than 1,000 authors. I won’t rehearse the arguments for their exclusion here, but we said at the time that we would try to identify a way to re-include them that would prevent the distorting effect that they had on the overall metric for a few universities.
This year they are included – although they will be treated differently from other papers. Every university with researchers who author a kilo-author paper will receive at least 5 per cent credit for the paper, rising in proportion to the number of the paper’s authors that the university contributes.
This is the first time that we have used a proportional measure in our citations score, and we will be monitoring it with interest.
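As a rough sketch of how that credit might be computed – the exact formula here is an assumption for illustration, not the published method – take the larger of the 5 per cent floor and the university’s share of the author list:

```python
def kilo_author_credit(university_authors, total_authors, floor=0.05):
    """Credit for a paper with 1,000+ authors: at least the 5 per cent
    floor, rising with the university's share of the author list.
    (An illustrative formula, not the published method.)"""
    return max(floor, university_authors / total_authors)

print(kilo_author_credit(30, 2900))   # 0.05 -- the floor applies
print(kilo_author_credit(600, 2900))  # ~0.21 -- proportional share
```

On this reading, a university contributing 30 of a paper’s 2,900 authors would receive the 5 per cent floor, while one contributing 600 would receive about 21 per cent.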
Data quality and participation
So we now have 980 ranked institutions – and more than 1,300 participated in data collection. This year, we made some significant improvements to the data collection process, making it harder for institutions to enter data incorrectly and making our guidance clearer and more consistent.
We’re also pleased that this year the calculation of the Times Higher Education World University Rankings has been subject to independent audit by professional services firm PricewaterhouseCoopers (PwC).
Survey
Our Academic Reputation Survey this year attracted 10,323 respondents from across the world. As with last year, we ensured that the responses we collected were balanced according to Unesco data on researchers by country.
For the World University Rankings, we have combined these responses with those from last year, giving us a combined dataset of more than 250,000 votes.
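One common way to achieve that kind of balance – offered here as an illustrative sketch rather than our exact procedure – is to weight each country’s responses by the ratio of its share of the world’s researchers to its share of the responses received:

```python
def country_weights(researcher_share, response_share):
    """Weights that rebalance survey responses to match the
    distribution of researchers by country (e.g. Unesco figures).
    Under-represented countries get weights above 1. A sketch only."""
    return {country: researcher_share[country] / response_share[country]
            for country in researcher_share
            if response_share.get(country)}

# Hypothetical shares: country A has 20% of researchers but 30% of
# responses; country B has 10% of researchers but 5% of responses.
print(country_weights({"A": 0.20, "B": 0.10}, {"A": 0.30, "B": 0.05}))
# {'A': 0.666..., 'B': 2.0}
```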
Other changes
A few other minor changes have been made. The improvements to the data collection portal have enabled us to handle missing values more appropriately, and a suggestion from one of our DataPoints benchmarking tool customers has led to an improvement in the subject normalisation used in the papers per member of staff measure.
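For those curious what subject normalisation means here, one plausible sketch (an assumption for illustration, not the exact formula we use) is to compare each subject’s papers-per-staff rate with a world benchmark for that subject, then average the ratios, so fields with very different publishing cultures can be compared fairly:

```python
def normalised_papers_per_staff(papers, staff, world_rate):
    """Sketch of subject normalisation: compare each subject's
    papers-per-staff rate with a world benchmark for that subject,
    then average the ratios. `papers` and `staff` map subject to
    counts for one institution; `world_rate` maps subject to the
    world-average papers per staff. Illustrative only."""
    ratios = [(papers[s] / staff[s]) / world_rate[s] for s in papers]
    return sum(ratios) / len(ratios)

# A history department publishing 0.5 papers per staff member scores
# as well as a medical school publishing 3, if both match their fields:
print(normalised_papers_per_staff(
    {"history": 25, "medicine": 300},
    {"history": 50, "medicine": 100},
    {"history": 0.5, "medicine": 3.0},
))  # 1.0
```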
We’ve also changed from ranking six broad subject areas to eight (splitting business and economics from social sciences, and computer science from engineering and technology). Although that doesn’t make a visible difference to the rankings, it does change the subject normalisation slightly. The eight subject tables will be published on 28 September.
So there we go. Not a revolution, but a series of small, incremental improvements, which I hope make the World University Rankings 2016-2017 slightly stronger, and certainly much broader than in 2015-2016. As for the future…well, that is for another post.
Duncan Ross is the director of data and analytics at Times Higher Education.