Harry Enfield's Kevin is one of the most unattractive characters to have appeared on our TV screens. Yet he makes us laugh because, despite his unappealing nature, we identify something of ourselves within him.
All his stamping and screaming of "it's not fair" echoes the frustration many of us felt as teenagers, even if we didn't behave in quite the same appalling way. Certain aspects of academic life, however, continue to engender exactly that sense of utter frustration.
Kevin used to make me cringe, yet increasingly I find myself sympathising with him. There is so much in academia that isn't fair - most notably the way research funds are distributed.
In addition to creating a system in which money is more important than minds, our assessment culture has generated a plethora of unpleasant side-effects.
Not least of those is the enormous amount of time researchers spend applying for funds relative to their chances of success. No one would begrudge the time spent writing grant applications if the success rate were reasonable - and fair.
In addition, the desperate scramble for funding favours self-promotion, rhetoric and bullshit; judging from the recent spate of academic misconduct cases, some academics even seem to be tempted into unethical practices.
The increasing number of grant applications has pushed the peer-review system to the limit, creating an A level-style inflation effect in which reviewers find it impossible to distinguish the good from the excellent. The result is that funding is not only increasingly unlikely but often fairly random with regard to merit.
One of the most unfair aspects of the research funding system is that no one has ever checked whether it actually works. If one were devising a system for allocating research funds from scratch, and if one's goal were to put funding where it has the greatest effect in terms of innovation, creativity and high-quality research, one would probably conduct some kind of study to figure out how best to do this.
Exactly this kind of study has been conducted - not by any funding body but by the American animal biologist Peter Abrams, whose work on the subject was published in the journal Social Studies of Science in 1991. You might think that was so long ago that it can't be of much relevance today. But it may be, so hear me out.
The study was designed to identify the best predictors of research output. In brief, what the study revealed was that the rating an individual's grant application received from the peer-review panel was a rather poor predictor of subsequent success. And the best predictor? Previous performance. If researchers had been successful and productive in the recent past, the chances were that if you gave them funding, they would continue to be productive.
At the very least, as the funding crisis has become more acute, one would have hoped that the research councils might have done their homework and established the most effective way of distributing their limited funds. As far as I am aware, no such analysis has been undertaken.
Different countries use different models for allocating research funding. The model employed in the US is currently worse than that used in the UK, with an even lower success rate and a dreary re-application treadmill for those who get close.
Some (unmentionable) countries have corrupt systems where nepotism appears to be a significant factor. In contrast, Canada's Natural Sciences and Engineering Research Council model seems to be ideal in many ways. Funding in their Discovery Grants programme is allocated, to a large extent, on past performance. Everyone worth their salt gets something, and for big projects there are special schemes with more money.
A couple of years ago, however, the Canadian Government decided that the number of awards in the Discovery Grants scheme was too high (!) and that the C$32,000 (£17,000) average size of each award was too small to be compatible with world-class science, so it decided that a system more like that in the UK or US might be more appropriate.
Contrary to the Government's expectation, an international panel concluded that Canada's research output was excellent compared with that of other countries on a spending per capita basis; that the system encouraged a high degree of research excellence; and that the system sustained a high level of research capability and student training, certainly comparable to that of the US and UK systems when controlled for population and funding.
Most significantly, the panel voted to retain the current funding model. What this means is that Canada's quality of science is similar to that in the UK, but without the massive bureaucratic burden associated with reviewing and administering grants, and with much greater flexibility in the way the funds are used.
Kevin, I think that's fair.