The 2021 Research Excellence Framework results are now available. No doubt institutions of higher education are scrutinising them for what they “show”, and for how they can be spun to present the best public face or perhaps to justify decisions already made.
The executive chair of Research England, David Sweeney, declares that the “exercise has fulfilled its aim to identify research quality across the whole system”. Yet, ironically, this claim would be rejected out of hand if judged by the requirements of research methodology.
To pick out just the most fundamental problem: despite the best efforts of those involved, measuring the quality of individual research products in the REF cannot have high accuracy because the concept is unavoidably fuzzy. It is multidimensional, and in many fields there is not strong consensus among researchers about what it means and how it should be assessed – even if they know bad research when they see it. The problem is illustrated by the disparate views that frequently emerge in the peer-reviewing of journal articles; and, of course, REF results depend on the outcomes of this process across a range of diverse journals.
The REF is one of many exercises in institutional accountability that have become central to modern educational governance. Another highly influential one is the Organisation for Economic Cooperation and Development’s Programme for International Student Assessment (PISA). It shares many of the REF’s methodological defects, even though the procedures it employs are very different (PISA relies on children taking its tests). What both claim to assess cannot be measured consistently and accurately, and the quantitative data they produce amounts to pseudo-precision. It is not just that the margin of error is large but also that the gap between the key concepts and their operationalisation is huge.
Shared by both the REF and PISA is the assumption that because we feel a need for information to answer a policy question, there must be some rigorous means available to supply it. If only life were like that! We may wish to know whether investment in research is producing an adequate “return”, and how this differs across universities. Similarly, it may be felt necessary to know whether the schools in a particular country are performing at a high level compared with those in other countries. But the idea that answers to these questions can be anything more than very rough judgements based on inadequate evidence is wishful thinking. Long ago, economists warned of diminishing returns: in seeking information, we may reach a point beyond which little worthwhile is added while costs escalate. We have gone way past that point with both the REF and PISA.
To a degree, both the REF and PISA, like other accountability regimes, amount to rituals designed to show that proper managerial protocols have been applied to “measure performance”. But this is management as fantasy. And the fundamental danger here, all too obvious in the reception of REF results, is that apparently “hard data” are taken at face value as a basis for evaluating institutions, and the units within them. Decisions are made, or at least justified, on a basis whose warrant is inevitably spurious.
The problems with the REF go back to the initial establishment of a research selectivity exercise in the 1980s. A genuine problem was identified: that the allocation of research funds to universities by the University Grants Committee (UGC) seemed to operate in an informal and rather obscure fashion. And this came under challenge as a result of budget cuts.
But with the abolition of the UGC, and the establishment of the Research Assessment Exercise (RAE), the REF’s precursor, there was a shift from the allocation of funding according to the varying needs of institutions towards treating research funding as an investment, seeking to reward excellence and punish institutions that failed to achieve it.
Furthermore, the shift to the RAE and then the REF involved a move from, on the one hand, a concern with satisfying university managements that the allocation of funds among institutions was broadly fair to, on the other, the aim of offering a measure that could tell politicians and the general public whether an adequate return was coming from public investment in university research. This is the point at which fantasy accountancy joined fantasy management.
We now suffer from a prevailing conception of public management that makes excessive claims for itself and swallows a huge amount of resources – at a time when public finances are under growing strain. The REF not only involves massive costs, direct and indirect, but also has profound consequences for institutions, and indeed for individual researchers. It distorts the whole process of research through instrumentalising it.
I’m hardly the first to make these points. When will we ever learn?
Martyn Hammersley is emeritus professor of educational and social research at the Open University.