The criteria for performance-related pay need to be defined if the system is to be fair, argues Keith Soothill
We have had the rhetoric of performance-related pay in academia for years - but its scope has now widened. At my university, the professorial staff, senior managers and other senior staff of professorial-equivalent grade were warned that the process had changed this year. Instead of most people receiving a small reward, a few would receive substantial rises.
Recognising the adage from St Matthew's Gospel that "many are called, but few are chosen", many perhaps still hoped that they would enter the kingdom of substantial payment. I did. In my penultimate year of full-time teaching, I thought I had done rather well. Performance in 2004 was the test. I had helped to secure a £900,000 Economic and Social Research Council grant; I was co-principal investigator on two Home Office grants and one from the Nuffield Foundation; I was chair of the Department of Health advisory committee for the research and development programme on forensic mental health; I published 13 articles (including eight that were refereed); and I had completed the manuscript for a co-edited book, Questioning Crime and Criminology. On top of all that, I continued with a normal teaching and administrative load in an overstretched department.
I was shocked that my performance was adjudged as "B - fully met normal expectations". A five-point rating scale - A*, B*, B, C and D - had been applied, with the A*s' performance deemed "outstanding". We do not know who these people are, as this information is shrouded in secrecy.
My performance was apparently "normal" and the crunch was that "ratings B, C and D do not attract a payment". I suppose I had to recognise that my colleagues were simply performing better than me. Nevertheless, having brought in grants every year for the past 30 years and with more than 200 publications, I refused to believe that I had not "regularly achieved beyond normal expectations".
But I felt I needed to move away from personal angst and consider the system. If my university is the measure, then we need to recognise that the model that universities are embracing with regard to performance review for academics comes from the Dark Ages of medicine when the famous merit award system trapped the higher echelons. Ironically, universities are embracing this system just as medics have abandoned it for a more open and transparent one, with proper feedback.
The pro vice-chancellor - a former Association of University Teachers branch president - claims that at Lancaster University "we are trying to make the system more transparent". This simply means that the process - that is, who makes the decisions - is much clearer. But the facts are that there are no criteria beyond the blandest terms, no one knows who gets the awards and seeking feedback makes clarification after a cell death in prison seem an easy task.
Academics submit a statement summarising their performance during the year and then, in the words of the pro vice-chancellor, "a judgment is formed".
There is no list of criteria against which performance is measured. Perhaps there will not be until a sex or race discrimination case is brought.
Feedback is even more contentious. After a struggle I received some feedback from my head of department and from the dean of the faculty. The fact that they gave different accounts remains a puzzle. But there is a further twist: with academic performance, the university is trying to do the impossible. It publishes rating distributions at faculty level - these show the numbers, but not the names, of those getting awards. Yet such ratings provide an indication of pay, not performance, and so lack validity. For example, it seems remarkable that only one third of senior staff in the social sciences were rated A* or B*, while approaching two thirds in the Management School achieved this accolade. Few would believe that there is this sort of discrepancy in performance between the two faculties at Lancaster.
To be fair, the vice-chancellor said that "the review took into account factors such as absolute salary level and relativities and responsibility allowances". This is management speak for expressing that it is more difficult to keep management staff than social science staff. But the conflation of "other factors" and "performance" on a single scale produces anomalies that are totally demoralising. To be downgraded by these "other factors", such as market considerations, is degrading and misleading. But there is good news. The university has just agreed to review and change the process. Let's hope that it doesn't get worse.
Keith Soothill is professor of social research at Lancaster University.