Research funding: big vs. little science

With the delicate economic situation that many developed countries have been experiencing in recent years, many questions and concerns have been raised about how to properly assign and distribute funding to scientific institutions and research group leaders. In particular, a relevant question for science funding is how to optimize the scientific “output” for a given funding level. Such scientific output can, of course, be defined in many ways. Examples include, but are not limited to: the number of patents, peer-reviewed articles, citations, Masters and Ph.D. theses defended, start-up companies based on scientific advances, spin-off technologies developed, or even groundbreaking discoveries which eventually lead to a U-turn in the mainstream view within a scientific field, and the corresponding Nobel Prize.

To obtain funding, scientific proposals have to be evaluated thoughtfully. Funding agencies, which are responsible for this task, have to deal with thousands of applications every year, from individuals as well as from large groups of many scientists. An important assumption to keep in mind is that generous funding for a research group typically leads to better scientific results (thanks to the capacity to hire more junior researchers, buy cutting-edge equipment, attend conferences to interact with peers, etc.). Therefore, the better the scientific project proposed, the more funding it should get and, hopefully, the better the scientific output obtained from it.

Usual effect of a rejected grant. Source: www.phdcomics.com.

This method does not specify, however, how we should distribute the resources to optimize the scientific output: is it better to give small grants to many good projects, or instead to give a few large grants to a small set of exceptionally high-quality projects? Different governments and agencies have their own preference. National agencies in countries such as Canada or Spain have preferentially used the “many small” strategy, in which many research groups get funding, but each grant is typically small. The Natural Sciences and Engineering Research Council of Canada (NSERC), for instance, funds most scientists in Canadian universities (around 62% in 2012 [1]). On the other hand, large funding agencies in the USA, such as the National Science Foundation (NSF), prefer the second option (the “few big” strategy), funding only the most exceptional proposals (around 23% in 2010 [2]) with much larger grants. The traditional success of universities and research centers in the USA in terms of scientific impact naturally leads one to think that the “few big” strategy is more appropriate, since it promotes competition between researchers to surpass their colleagues, and the best researchers receive generous funding to develop their promising projects. However, this idea was not tested until recently.

In a work published this year in the journal PLoS ONE [3], Jean-Michel Fortin and David Currie, researchers from the University of Ottawa in Canada, examined the scientific impact of individual Canadian researchers over a period of four years and related their performance to the level of funding received (via NSERC grants). Several indicators (number of articles published, number of citations received, the most cited article, and the number of highly cited articles) were used to determine the impact of each researcher. The scientific impact determined with this method was therefore indicative of the influence of each researcher on the scientific community (and thus a proxy for his/her contribution to the collective effort of the scientific community).

The results showed that, as expected, scientific productivity increases with funding. But, surprisingly, the increase was notably weak, meaning that the productivity of a group does not double when its funding doubles. In other words, the global scientific output obtained when we give, for example, a grant of 500,000 to an excellent researcher is, on average, smaller than the output we obtain by giving grants of 100,000 each to five good researchers.
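This decelerating relationship can be illustrated with a toy model: suppose impact grows as a concave power law of funding, impact ∝ funding^β with β < 1. The exponent below (β = 0.5) is an arbitrary choice for illustration, not a value estimated by Fortin and Currie; the qualitative conclusion holds for any β < 1.

```python
# Toy illustration of sublinear (decelerating) impact scaling.
# The power-law form and the exponent beta = 0.5 are assumptions
# for illustration, not results from the paper.

def impact(funding, beta=0.5):
    """Hypothetical scientific impact as a concave power law of funding."""
    return funding ** beta

one_big = impact(500_000)          # one grant of 500,000
five_small = 5 * impact(100_000)   # five grants of 100,000 each

print(f"one big grant:     {one_big:.0f}")
print(f"five small grants: {five_small:.0f}")
# Whenever beta < 1, splitting the same budget yields more total impact.
assert five_small > one_big
```

Under this sketch, five small grants produce roughly twice the total impact of one large grant of the same combined size, which is the intuition behind the "many small" strategy.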

Scientific impact increases only weakly with funding, suggesting that a diversity-targeted funding strategy, rather than an excellence-targeted one, might be more convenient. Source: [ref. 3].

The researchers also considered groups that received extra funding from other agencies (such as the Canadian Institutes of Health Research, CIHR), and found that these groups were not, on average, more productive than the rest. Impact was, in all cases, a decelerating function of funding; in other words, the impact-per-dollar was lower for large-grant holders.

The conclusions of this study are of vital importance for the design of optimal funding strategies for scientific research. In particular, these results do not support the traditional hypothesis that “larger grants lead to larger discoveries”. On the contrary, funding strategies that target diversity, in the form of many small grants awarded to a large number of research groups, are likely to be more productive than the classical approach of “targeting the excellence” of a few groups.

As a personal note, I consider that this take-home message might be especially important for countries in difficult economic situations that plan to reform their science funding strategies. For instance, in a recent note to Nature, Carmen Vela, the Spanish science secretary, stated that they “will reduce the number of grants offered each year in the Ramón y Cajal tenure-track programme […] However, the quality of each grant will improve”. The results presented by Fortin and Currie suggest that such a strategy might not be convenient if Spain wants to improve, or even maintain, its level of scientific productivity. They also show that, especially when publishing in a scientific journal like Nature, even politicians should ground their claims on solid data.

References

  1. Natural Sciences and Engineering Research Council of Canada (2012) FAQ: Discovery Grants Competition. Available at: http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/Questions-Questions_eng.asp. Accessed 2013 May 2.
  2. National Science Foundation (2011) Report to the National Science Board on the National Science Foundation’s Merit Review Process Fiscal Year 2010. Available at: http://www.nsf.gov/nsb/publications/2011/nsb1141.pdf. Accessed 2013 May 2.
  3. Fortin J.M., Currie D.J. & Larivière V. (2013). Big Science vs. Little Science: How Scientific Impact Scales with Funding. PLoS ONE 8(6): e65263.

2 Comments


Daniel Manzano

It is always very complicated to decide how to distribute funding. I agree with the conclusions of the PLoS ONE paper; they are well supported by the data analyzed.

On the other hand, I would not propose to share the funding without rewarding excellence. I propose that funding should be spread among many different groups depending on their scientific quality. It is a matter of equilibrium: you should not concentrate the money only on the most productive groups, but there should also be no blank cheques. Researchers should know that their funding depends strongly on their output in the last years.

Jorge Mejias

I agree with you, Daniel. I think a possible lesson to learn from the study is that we should avoid letting funding accumulate massively in highly productive groups, and ensure that smaller or not-so-cutting-edge groups can also access a reasonable level of funding.

But of course, certain quality levels have to be reached by all groups. Blank cheques are not a good idea in science, I feel.
