TEF results 2018: performance on the metrics

Understanding how and why a university achieved a particular rating has become a tougher job than ever, writes Simon Baker

June 6, 2018

The first full year of assessments in the teaching excellence framework, released last year, revealed an already fairly complex process for reaching decisions on university awards.

Although universities were initially judged to be gold, silver or bronze on six core metrics, half of which came from the National Student Survey, an analysis by Times Higher Education showed that other factors still came into play before final awards were decided.

Many institutions on the gold/silver or silver/bronze boundary in this metrics assessment ended up moving up a category, mainly thanks to the strength of their written submissions.

Fast-forward to 2018 and the process has become even more complex, with even more reasons why a university could be upgraded, even if core metrics place it in a certain category to start with.


Although there are data for only about 20 higher education institutions this year – those that decided to reapply or were applying for the first time – they suggest that movement from the “core” initial assessment is even more likely.

THE has put together the following sortable summary table of performance on the core metrics, mainly using the same methodology as last year’s metrics table.


 

[Sortable table: Rank | Institution | TEF 2018 award | ++ flags | + flags | = flags | − flags | −− flags | Average Z-score | Weighted flag score]

 

As last year, institutions in the TEF were given a flag if they performed significantly better or worse (in a statistical sense) than a benchmark value in each of the six core metrics (teaching on my course, assessment and feedback, academic support, non-continuation, graduate employment or further study, and highly skilled employment or further study).

An institution could have achieved one of five different flags in each metric: ++ (if it performed particularly well against the benchmark), + (if it performed well), = (if there was no statistically significant difference from the benchmark), − (if it performed badly against the benchmark) and −− (if it performed particularly badly).

Our table shows the number of times an institution achieved a flag in each category and is sorted (although you can change how it is sorted) by TEF award, then flag performance and finally by average Z-score across the six metrics. A Z-score is a numerical value that expresses how far the institution deviated from the benchmark in a particular metric.
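For readers who want a sense of what that means numerically, here is a minimal sketch (our own illustration, not the official TEF calculation): a Z-score of this kind is simply the gap between an institution's value and its benchmark, divided by the standard error of that gap.

```python
def z_score(indicator, benchmark, standard_error):
    """Illustrative Z-score: how many standard errors the institution's
    indicator value sits above (positive) or below (negative) its benchmark."""
    return (indicator - benchmark) / standard_error

# e.g. 3 percentage points above benchmark, with a standard error of 1.2
print(z_score(86.0, 83.0, 1.2))  # 2.5
```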


However, a crucial difference in the TEF this year was that the weighting given to three of the metrics – those from the NSS – was halved when assessors made a first decision about whether a university was gold, silver or bronze.

Therefore, we have added a column showing a weighted flag score achieved by each university. We calculated this by giving any positive flag a 1 (or 0.5 in an NSS metric) and any negative flag −1 (or −0.5 in an NSS metric). Equal flags score zero.
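As a rough sketch of how that column was built (the metric names and data structure below are our own illustration, not the official TEF code):

```python
# Illustrative sketch of the weighted flag score described above.
# The three NSS-derived metrics count at half weight in the 2018 exercise;
# the metric names here are assumptions for the example.
NSS_METRICS = {"teaching", "assessment_and_feedback", "academic_support"}
FLAG_VALUES = {"++": 1.0, "+": 1.0, "=": 0.0, "-": -1.0, "--": -1.0}

def weighted_flag_score(flags):
    """flags maps metric name -> flag ('++', '+', '=', '-' or '--')."""
    score = 0.0
    for metric, flag in flags.items():
        weight = 0.5 if metric in NSS_METRICS else 1.0
        score += weight * FLAG_VALUES[flag]
    return score

# Example: two positive NSS flags (0.5 each) plus one '++' non-NSS flag (1.0) = 2.0
example = {
    "teaching": "+",
    "assessment_and_feedback": "+",
    "academic_support": "=",
    "non_continuation": "++",
    "employment_or_further_study": "=",
    "highly_skilled_employment": "=",
}
print(weighted_flag_score(example))  # 2.0
```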

The resulting data give a clue to which universities would initially have been seen as gold, silver or bronze: gold institutions were those with positive flags amounting to at least 2.5 (and, importantly, no negative flags), while bronze institutions were those with negative flags amounting to at least 1.5 (regardless of other flags).
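To make those thresholds concrete, here is a rough sketch of the initial hypothesis as described above (again our own illustration, not the assessors' actual process, which also weighs absolute values, split metrics and the written submission):

```python
NSS_METRICS = {"teaching", "assessment_and_feedback", "academic_support"}  # as in the sketch above

def initial_category(flags):
    """Rough sketch of the initial gold/silver/bronze hypothesis described above."""
    positive_total = 0.0
    negative_total = 0.0
    for metric, flag in flags.items():
        weight = 0.5 if metric in NSS_METRICS else 1.0
        if flag in ("+", "++"):
            positive_total += weight
        elif flag in ("-", "--"):
            negative_total += weight
    if negative_total >= 1.5:
        return "bronze"  # negative flags amounting to 1.5, regardless of other flags
    if positive_total >= 2.5 and negative_total == 0:
        return "gold"    # positive flags amounting to 2.5 and no negative flags
    return "silver"
```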

Although the weighted flag score in the table above nets positive and negative flags against each other, it is still possible to see which universities must have shifted from their initial assessment.


They include the University of Liverpool, which hit the 1.5 bronze threshold for negative flags but ended up with silver, and the University of York, which ended up with gold despite getting only two positive flags, both in NSS categories.

The difficulty is that after this core assessment, many other factors came into play: absolute performance on the metrics; how an institution did in “split” metrics looking at specific groups of students; its results in supplementary metrics covering graduate employment, graduate earnings and grade inflation; and, of course, the written submission.


It means that anyone – including students – trying to understand in a simple way why a university has achieved a certain rating has their work well and truly cut out.

simon.baker@timeshighereducation.com

