Geoffrey Alderman argues for a more rational, transparent and credible system of comparing universities and their achievements
The Dearing report gives the new Quality Assurance Agency the task of providing the general public with information about the quality of higher education provision. This recommendation is likely to trigger a debate about the real worth of a range of fashionable performance indicators. We can all agree that the public is entitled to information with which to compare universities. This information needs to be both credible and transparent.
At present, the best-known performance indicators are the ratings given in a variety of university and college guides. Of these, The Good University Guide published by The Times is generally reckoned to be the most authoritative. But how accurate and reliable are the league tables which it publishes?
The tables have been based on a number of indicators, ranging from A-level entry points and degrees conferred to staff qualifications and the results of quality assessments carried out by the higher education funding councils. The rankings they produce have met with mounting criticism, but none as trenchant as that contained in an article by Mantz Yorke, of Liverpool John Moores University, published in the current issue of Quality Assurance in Education.
Professor Yorke's case against The Times is that the choice of indicators, and their particular application, discriminate against the former polytechnics, and are inherently biased towards the old, pre-1992 universities. Take A-level grades. The post-1992 universities go out of their way to attract students with "non-traditional" entry qualifications; but the league tables simply do not take this into account.
And what is the rationale for offering a table which only takes the proportion of firsts and upper seconds into account? A high percentage of first and upper second class degrees might imply an excellent standard of performance. But it might also reflect an over-generous marking regime.
A university has no control over the employment prospects of its graduates. Measures based on "first-destination returns" are bound to discriminate against universities in areas of high unemployment.
The Good University Guide makes much of the reports of the quality assessments carried out by the funding councils since 1993. Popularly but wrongly known as "Teaching Quality Assessments", these exercises seek to measure the totality of the student experience, including student support and guidance, library and other resources, and quality assurance.
In England, the early reports (I wrote a dozen of them) graded university departments as excellent, satisfactory or unsatisfactory. These verdicts were never meant to be translated into numerical scores, as The Good University Guide attempts to do.
Since 1995, quality assessments have awarded individual numerical scores for six "core aspects of provision", each graded on a scale of one to four, giving a possible maximum of 24 points. A point not made by Professor Yorke is that these scores are not based on any nationally agreed standard. Rather, they evaluate performance against aims and objectives defined by the university being inspected. A number of subjects have scored the maximum of 24 points. All this means is that, in the view of the inspection team, the self-defined aims and objectives were fully met; it tells us nothing about the academic worth of those aims and objectives, which might have been pitched, deliberately, at a modest level.
Perhaps the strangest indicator used by The Good University Guide is the one reflecting the proportion of full-time first-degree students who live in accommodation provided by their institutions. As more and more students choose to attend universities near their homes - a trend bound to increase once students are contributing towards their fees - it will become difficult to attach any value to this particular statistic. In any case, it tells us nothing about the standard of the accommodation provided.
What students require is a qualitative evaluation of the institutions to which they are considering applying, not the tabloid-style quick fix of a set of questionable league tables.
Such tables are encouraged by the present quality assessment methodologies used by the funding councils, which prescribe "graded profiles" rather than threshold judgements.
Dearing is right to question the value of these assessments. Their abolition would itself contribute to the debate which now needs to take place about the construction of performance indicators in which we can all have confidence.
Geoffrey Alderman is pro vice-chancellor of Middlesex University.