Spending six months on a failed grant bid is too much. Things must change

UK research funding urgently needs to become less onerous and less unfair if we don’t want applicants to simply give up, says Melvyn Smith

February 7, 2024

Over the years, I have witnessed many talented and ambitious early-career researchers quit the UK or even leave the university sector altogether because they are unwilling to waste time playing what they perceive to be the mug’s game of applying for funding.

I’m still in that game but, increasingly, I do feel like a mug. You put your heart and soul into an application, but success seems to depend more on the quality and identity of competing bids, and on the funds available in the particular grant round you apply to, than on the quality of the bid itself.

For instance, I just spent six months preparing an unsuccessful bid to the UK’s Biotechnology and Biological Sciences Research Council (BBSRC). Although I was not working on it full-time, a significant proportion of my academic life was devoted to a task that ultimately proved to be a waste of time.

But it wasn’t just my time that was wasted. It was also that of all the internal and external partners I worked with, as well as the support staff involved in the submission process.

And then there are all the other unsuccessful PIs. In the feedback, the BBSRC told me that 71 bids to that particular standard/responsive mode round had been “deemed to be of international quality”. It did not say how many were funded, but if we assume it was 30 per cent, that means 50 bids were unsuccessful.

If we conservatively estimate that a single bid requires two months of person-effort, then that’s almost a combined 12 years of work to prepare internationally competitive bids for just a single call, of which more than eight years of effort is completely wasted. That figure might even be much higher – especially if you include all the other rejected bids, too.

UK Research and Innovation has suggested that, across the research councils, a typical standard-mode call receives up to 350 proposals, of which as few as 20 per cent are funded. This means that, on average, five bids must be submitted for one to be successful. That could mean a combined 46 years of wasted academic time per call (or 138 years wasted across the three annual calls) and 10 solid months of individual effort needed per successful bid.
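For anyone who wants to check these figures, here is a minimal back-of-envelope sketch. It assumes, as above, two person-months per bid; the inputs are this article’s illustrative estimates, not official UKRI statistics.

```python
# Back-of-envelope estimate of bid effort per call.
# Assumption (as in the article): roughly 2 person-months per bid.
MONTHS_PER_BID = 2

def bid_effort(total_bids, success_rate):
    """Person-years of total and wasted effort for one call."""
    funded = round(total_bids * success_rate)
    unsuccessful = total_bids - funded
    total_years = total_bids * MONTHS_PER_BID / 12
    wasted_years = unsuccessful * MONTHS_PER_BID / 12
    months_per_success = MONTHS_PER_BID / success_rate
    return funded, unsuccessful, total_years, wasted_years, months_per_success

# BBSRC round described above: 71 international-quality bids, ~30% funded
print(bid_effort(71, 0.30))   # ~11.8 person-years in total, ~8.3 of them wasted

# Typical UKRI standard-mode call: 350 proposals, 20% funded
print(bid_effort(350, 0.20))  # ~46.7 person-years wasted; 10 months per success
```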

The true figures doubtless lie somewhere in between. But whatever they are, all this inefficiency is not just demoralising for researchers. It is also a waste of taxpayers’ money – which is not great during a university funding crisis. And it undoubtedly renders the UK less competitive.

Worse still, neither the BBSRC nor the Engineering and Physical Sciences Research Council allows resubmissions for standard-mode calls (unless invited, which is extremely uncommon). My rejection email made clear that resubmissions were not permitted “regardless of how competitive an unsuccessful proposal was” (mine was rated as excellent and included two exceptional reviews).

So consortia with commercial organisations that you have spent so much effort researching and building fall apart, and valuable lines of research enquiry and even promising academic careers are stopped dead – unless the bid team can disguise a resubmission as a new and different bid, involving yet more potentially wasted effort.

Even after several decades of experience as an applicant, reviewer and panel member, I don’t pretend that making the system fairer and more efficient will be easy. But I do have a few suggestions.

First, to reduce the waste – for both applicants and reviewers – we should adopt a two-stage bidding process for standard-mode calls. An initial light-touch expression of interest should be followed by a full-bid stage for only around a dozen applicants (the exact number being dependent on the available budget for the given call).

Double-blinding at the initial stage would also help to address any perceived favouritism linked to who the applicant is or where they work – a bias that currently leads to the same names receiving the bulk of the funding. If double-blinding were felt to be unacceptable, then I would propose increasing transparency by attributing each review to its author.

Fairness could be further boosted by requiring that all bids receive the same number of reviews. Currently, some bids get only two, while others get as many as five. This produces differing levels of assessment rigour, because the chance of receiving uniformly good reviews diminishes as the number of reviews increases. It only takes one poor review to scupper a bid with otherwise exceptional grades.
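To see why, consider a minimal sketch with an assumed figure: if each independent review had, say, an 80 per cent chance of coming back favourable, the odds of a bid receiving all favourable reviews would fall sharply as reviewers are added.

```python
# Illustrative only: assume an 80% chance that any single review is favourable.
p_favourable = 0.80

for n_reviews in (2, 3, 5):
    p_all_favourable = p_favourable ** n_reviews
    print(f"{n_reviews} reviews: {p_all_favourable:.0%} chance all are favourable")

# 2 reviews: 64% | 3 reviews: 51% | 5 reviews: 33%
```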

That is why reviews themselves should be subject to greater scrutiny. Poorly written ones should be rejected – the positive ones as well as the negative ones. If it is evident that the reviewer can’t be bothered to spend time properly reading the bid, demonstrably does not have the correct expertise, or is unwilling to write a proper review, then their opinion should be discounted. It does not take subject expertise to spot such reviews, and it is almost an insult to the bidding team to have to spend time responding to them.

Finally, bids rated as excellent or with two or more reviewer grades of “exceptional” should be allowed a single resubmission if they are unsuccessful the first time.

All this would help minimise the impression that the bidding process is a lottery. Perhaps, then, fewer people would decide not even to bother buying a ticket.

Melvyn Smith is professor of machine vision and co-director of the Centre for Machine Vision (CMV) at the University of the West of England, Bristol.


Reader's comments (20)

This, but more so in social sciences: at ESRC, success rates for 2022-23 were just 11.7%.
It's a demoralising process, writing detailed proposals that you know are unlikely to be funded. I’ve seriously considered giving up in the past, and I have known many other talented individuals who did just that and moved to the private sector. As mentioned in the article, there’s no easy answer to all the problems, but a few simple steps, such as the two-stage process, could save countless wasted hours of work.
When I started out in research in Australia, grant success was less than 10%; the best were successful. Writing a grant helps to consolidate ideas and is never a waste of time. It is nonsense to suggest it is a mug's game: it is integral to academia. How else are we to differentiate the best researchers from the rest?
I don't think there was any implication that one couldn't learn from the bid-writing process or that it shouldn't be competitive. Just that it needs to be a lot more efficient and that we need to be clear with each other about the amount of expert time (and taxpayers' money) spent on rejections.
Perhaps some of “the best were successful”, but I am not convinced that all the best grants (top xx%) have been successful under the current UK grant system. Any positive changes toward this goal would be hugely beneficial to academia, and to the competitiveness of the UK – which I think is what this article is about.
You can learn from the process even when the bid is successful. The point is that it is all demoralising and leads to low productivity. The process favours those who are not afraid of gaming the system by being dishonest and boosting their own profile with fake achievements. It also has a built-in Matthew effect: those who have had grants are much more likely to get more. It has nothing to do with the quality of the research or the ability to carry it out. Finally, I find that truly new research ideas cannot be explored, because the impact cannot be clearly mapped out. So those that get funded will only work on fleshing out old ideas and trying to commercialise them. Anyone who is serious about progress should go to industry. It is notable that most genuine progress in the last 20-30 years has come out of industry.
We have to remember that we are asking for a lot of money. There should be a high degree of scrutiny. My last BBSRC grant application was £650k (FEC), and I think that is about average. I was explaining to a friend last night how it came to that much. They work in a (charity-funded) unit providing mental health support to homeless people with addiction issues. £650k would be enough to fund their whole unit for several years. Instead, it's being closed down for lack of funds, while I'm applying to do this research for no other reason than that it's kind of cool and interesting. I also don't think it takes two person-months to write a standard responsive-mode bid. I wrote the previously mentioned one in more like a month. Yes, there would be some time on the admin side, but I don't really think anything like a month's worth. That said, I like the idea of a two-stage process, as currently used by funders like the Wellcome Trust and the ERC. And I can see a place for limited resubmission - although this might massively increase the number of submissions that need considering, and thus the workload on committees etc.
I agree 100% with the author's comments and believe that something needs to be done urgently to address the current wasteful grant application process. The suggestions, such as a two-stage process, double-blind reviewing, a set number of reviews per bid, and allowing bids rated as excellent to be resubmitted, seem completely reasonable. I sincerely hope that bodies such as the BBSRC take notice of this and start implementing positive change. If they don't, the wasteful apparent lottery through which grants are currently awarded may disenchant many of us (with excellent research ideas), to the detriment of UK science and technology. Nowadays we operate in a highly competitive global environment; if UK science and technology wants to survive, it can't carry on shooting itself in the foot in this way. I look forward to hearing what changes the BBSRC intend to make.
I very much agree with the author’s viewpoints regarding the identified issues and the proposed solutions, especially regarding the implementation of a double-blind initial stage and improving transparency and fairness, such as the rejection of poorly written assessor comments. Also, I believe it is crucial not to hinder excellent bids from resubmission solely due to committee workload constraints. Criteria for permitting or disallowing resubmissions should be more transparent, and the procedures for resubmissions and their assessments need simplification. I think it could easily take two person-months to write a bid. Prior to even the announcement of a call, the consortium typically needs to be established, involving numerous meetings and brainstorming sessions. Throughout the preparation phase, the team’s collective efforts in iteratively refining the technical approach, participating in regular meetings, and aligning with the call’s specific requirements demand a substantial amount of time. The amount of work related to ethics, data management, resource planning, IP, internal reviews and so on is equally significant and should not be underestimated.
Surely, as academics (ie intelligent, analytical and proven problem solvers), when inefficiencies such as this are identified, we really should be doing our best to solve them, not just continually wasting such vast sums of money on what is a deeply flawed system. Well done Professor Smith for outlining the case so succinctly and providing some instantly testable solutions to trial. Any comments from UKRI?
I wrote the following in 2012, when I was a PhD student. It outlines a structured approach to proposal writing, replacing some of the competitive aspects of the process with collegial cohort-based collaboration instead. This could potentially be piloted as an informal layer on top of existing programmes like the UKRI's New Investigator Grants. The suggestion would differ from in-house mentoring at institutions, by being organised per discipline at the national level. Considering the historical technologies for doing science, it makes sense that public funding for research is administered via a competitive, hierarchical model. Science is too big for everyone to get together in one room and discuss. However, contemporary communication technologies and open practices seem to promise something different: a sustained public conversation about research. The new way of doing things would "redeem" the intellectual capital currently lost in rejected research proposals, and would provide postgraduate and postdoctoral researchers with additional learning opportunities through a system of peer support. JISC ran an experiment moving in this direction (the "JISC Elevator"), but the actual incentive structure ended up being similar to other grant funding schemes, with 6 of 26 proposals funded (https://web.archive.org/web/20120602143148/http://www.jisc.ac.uk/blog/crowd/). It strikes me that if we saw the same numbers in a classroom setting (6 pass, 20 fail), we would find that pretty appalling. Of course, people have the opportunity to re-apply with changes in response to another call, but the overheads in that approach are quite high. What if instead of a winners-take-all competitive model, we took a more collaborative and learning-oriented approach to funding research, with "applicants" working together, in consultation with funders -- until their ideas were ready? In the end, it's not so much about increasing the acceptance rate, but increasing the throughput of good ideas! Open peer review couldn't "save" the most flawed proposals; nevertheless, it could help expose and understand the flaws -- allowing contributors to learn from their mistakes and move on. With such an approach, funding for "research and postgraduate training" would be fruitfully combined. This modest proposal hinges on one simple point: transparency. Much as the taxpayer "should" have access to research results they pay for (cf. the appointment of Jimmy Wales as a UK government advisor) and scientists "should" have access to the journals that they publish in (cf. Winston Hide's resignation as editor of Genomics), so, too, do we as citizen-scientists have a moral imperative to be transparent about how research funding is allocated, and how research is done. Not just transparent: positively pastoral.
I note with interest that the Independent Review of Research Bureaucracy, led by Prof Adam Tickell, cited the funding application process itself as the major cause of unnecessary bureaucracy, identifying the amount of effort required versus the chances of success as a key point. It noted that there is a significant opportunity to reduce the amount of wasted effort, so that application processes are proportionate to the chances of success. It also recommended that funders should experiment with application processes to reduce burdens for applicants, including greater use of two-stage application processes, where the information required increases in line with the likelihood of being funded. It further recommended limiting the number of applications requiring full peer review. I see also that in the government response mention is made of double-blind stages. These are all urgently needed revisions in my view.
Others in the comments have mentioned success rates as low as 11.7 and 10 per cent. These figures are well below the economic viability of the bidding process, by which I mean the cost of generating bids is greater than the value of the funding on offer. This has been shown in various studies. The main issue I see is how to reduce the effort spent on unsuccessful grant applications. One commenter has suggested an approach in which applicants work with funders to develop project ideas. Sandpits have been used in this way, but again they tend to favour the same groups, as there is often an assessment stage involved in securing attendance. A multistage bidding process is another way to allow a strong bid to be worked up, with unsuitable bids weeded out early on, before the applicant has invested a lot of work. Adding some form of anonymity for applicants also seems fairer, as does allowing the resubmission of fundable bids.
I agree entirely with the author’s viewpoint - I would also add that the application process could be improved by making the content of the application commensurate with the amount of funds being requested - rather than a flat structure across the board.
I am an ECR and currently have the joy of trying to get a postdoc position. As part of this, I was employed for six months last year to write four bids on behalf of a PI. None were funded, despite one receiving top scores from two out of three reviewers and an overall score of 5.5. The AHRC recently released the panel data for that one, and it appears only four bids were funded. So now I’m unemployed and have three submitted bids I’m waiting on, which I wrote in my own time, unpaid, with one further in progress. I don’t hold out much hope for any of them and have decided these will be my last tries. I am now looking to work in industry instead, as the prospect of endless short fixed-term contracts dependent on getting more research funding to stay employed is demoralising.
I completely agree: the current research funding application system is very inefficient. Many of the suggestions the author presents would make for a much fairer and more transparent process, which would help to ensure that good research proposals don't go to waste.
It’s all subjective, I suppose, but it is strange how the AHRC hands out millions for what to many seem like lightweight arts projects that appear to offer little real long-term value, while so many arguably more worthy science projects get rejected. No wonder the Taxpayers’ Alliance is calling for new measures to assess the value for money of publicly funded research projects, including an independent mechanism to analyse funding outcomes.
How about an actual lottery if the proposal meets a threshold? In my last rejection, it felt like they were clutching at straws to explain the decision.