Transparency is a strong theme in recent guidance on the research excellence framework (REF). Indeed, the tone of the funding councils' assessment framework and guidance on submissions leaves me with the impression that the REF team and its panels will be bending over backwards to make the process as transparent as possible. That document even contains a paragraph headed "transparency", highlighting how this approach reinforces the REF's credibility.
Despite this, there remains one area of the REF that is completely lacking in transparency, and that is feedback. For an individual researcher who has submitted four research outputs and who has, perhaps, contributed significantly to an impact case study, there will be no feedback at all.
I understand that the REF is ultimately an instrument with which to allocate funding based on the assessed quality of individual "units" - usually departments or schools - and that it is therefore not intended to pass judgement on individual members of staff. However, the fundamental basis of the REF will be research outputs and impact case studies, which together will account for 85 per cent of the submission score.
These are produced by individual members of staff, who surely have a right to know how their work contributed to that score. But it is likely that they will never find out. I know this because, having submitted four outputs to the last research assessment exercise, I have absolutely no idea how well those papers scored, or whether they were even read by the panel.
The feedback planned for the REF looks no better than that supplied by the RAE, its predecessor. The framework document for the REF details the means by which the results will be fed back to higher education institutions. This includes, of course, publication of research "profiles" for each submitted unit, along with panel and subpanel reports. It is stated that concise feedback will be given to each institution that makes a submission: "We expect to send this feedback only to the head of the institution concerned" (presumably, the vice-chancellor). Once again, there is nothing for the researchers who produced the work in the first place.
The "cost" to universities of not supplying feedback on individual items is substantial. When departments prepare for the REF, all staff will ask, "What constitutes a 4* output?" The answer will necessarily be an educated guess. I often hear colleagues say: "I'm doing work that is broadly the same quality as last time, but I don't know what that quality was because I've never been told." Having that information is even more important for the REF, because impact case studies must be based on previous work of at least 2* quality. The funding councils' archives could tell us whether or not this is so, but since this is information to which we are not privy, once again I'm afraid we're going to have to guess.
If the only feedback universities receive is vague and incomplete, it is a sound basis for neither strategic decisions nor bold public statements. Many submitting units reported their 2008 RAE results along the following lines: "Research Success: Department Produces World-Leading Research". Closer scrutiny of such a claim might reveal that only 10 per cent of the research activity was rated 4*; logically, then, the other 90 per cent was not world-leading.
This raises the question: "Which 10 per cent of the submitted research was world-leading?" There is clearly a "pocket of excellence" somewhere in this hypothetical department, but how can it be encouraged to grow if the department cannot identify where it is? It is quite amazing that we continue to allow this approach to public relations and strategy without having any access to the primary data.
An analogy for the present "transparency" of the REF is that of undergraduates being told all about the coursework they will have to submit for their degree, and all about how their tutors are going to assess it. But when they receive the feedback, it provides only the performance distribution for the entire year group. Students would not stand for this, and nor should we.
Giving meaningful feedback to individual researchers need not be costly. Assessments of individual items are presumably made in writing, and are presumably collated for scoring. These documents could easily be made available to the submitting unit for the cost of a memory stick. Ultimately, our colleagues have peer-reviewed our work, and we have a right to know the outcome.