
Are funding organisations financing the most promising research ideas?

In an interview, EUI Professor Arnout van de Rijt shares the findings from a field experiment on reassessing research grant proposals.

06 June 2024 | Research


Writing and assessing research proposals is time-consuming, but are these efforts effective? EUI Professor of Sociology Arnout van de Rijt co-authored the article ‘Do Grant Proposal Texts Matter for Funding Decisions? A Field Experiment’ with Müge Simsek from the University of Amsterdam and Mathijs de Vaan from UC Berkeley. The article was published in the journal Scientometrics on 19 May 2024.

Professor van de Rijt, could you please tell us about the field experiment that reassessed research grant proposals and your findings?

I worked together with Müge Simsek, Assistant Professor at the University of Amsterdam, and Mathijs de Vaan, Associate Professor at UC Berkeley. Together with the Dutch Research Council (NWO) – the main research funding organisation in the Netherlands and the body that commissioned this study – we wanted to understand whether grant proposal texts substantially influence panel decisions on whether or not to fund a research idea. The collaboration with NWO emerged from earlier joint work, also involving Thijs Bol, Sociology Professor at the University of Amsterdam, in which we studied the effect of receiving an early-career grant on researchers' chances of winning subsequent grants.

What was the aim of your field experiment? How did you conduct it?

Our experiment focused on the first round of an early-career competition run by NWO for individual awards of 800,000 euros in research funding. Many funding agencies organise such competitions in two stages to allocate a limited number of grants. To apply, researchers had to submit a full proposal text describing their future research endeavours, together with their CV.

NWO wanted to understand how much the full proposal text matters in the first stage of a two-stage selection. This first stage is decisive, as a high percentage of applications are rejected at that point. In our experiment, we organised a second, ‘shadow’ panel – working in parallel with the real selection panel – that re-evaluated all the submitted grant applications. Within this shadow panel, we divided the panellists into two groups: the first group received the applicants' CVs plus the full texts of the research proposals, while the second group received the CVs plus only a one-page summary of the proposed research.

We wanted to see whether the two groups reached similar decisions on whether or not to finance the proposals. We first evaluated panellists' agreement in rankings, and then compared panellists' disagreement in scores. Based on our research hypothesis, we expected more disagreement between two panellists when only one of the two had read the full proposal than when both had read it. However, this was not the case.
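To make the comparison concrete, here is a minimal Python sketch of the kind of check this hypothesis implies – comparing score disagreement between ‘mixed’ panellist pairs and pairs who read the same material. The scores, reviewer assignments, and data layout are invented for illustration and are not taken from the study.

```python
from itertools import combinations

# Hypothetical shadow-panel data: each application is scored by several
# panellists; 'full' panellists saw the CV plus the full proposal text,
# 'summary' panellists saw the CV plus a one-page summary. All numbers
# are invented for illustration.
reviews = {
    "A1": [("full", 7.5), ("summary", 6.0), ("full", 8.0)],
    "A2": [("full", 4.0), ("summary", 5.5), ("full", 4.5)],
    "A3": [("summary", 6.5), ("full", 6.0), ("summary", 7.0)],
}

def mean_abs_gap(pair_type):
    """Mean absolute score gap over panellist pairs of the given type.

    'same'  -> both panellists read the same material,
    'mixed' -> one read the full text, the other only the summary.
    """
    gaps = []
    for scored in reviews.values():
        for (c1, s1), (c2, s2) in combinations(scored, 2):
            if (pair_type == "same") == (c1 == c2):
                gaps.append(abs(s1 - s2))
    return sum(gaps) / len(gaps)

# The hypothesis predicted a larger gap for mixed pairs than for
# same-condition pairs; the experiment found no such difference.
print("same-condition pairs:", round(mean_abs_gap("same"), 2))
print("mixed pairs:         ", round(mean_abs_gap("mixed"), 2))
```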

What are the novelties and the main conclusions from your experiment?

Our first clear conclusion was a confirmation that chance plays a major role in these procedures. Selection results also depend on which panellists happen to evaluate the applications, because panellists usually disagree on the quality of applications. If a first panellist ranks application A above application B, there is a 40% probability that a second panellist would rank B above A. That is only moderately better than a coin flip, which would give 50%. We knew from the start that readers often have divergent opinions on the quality of proposals, so this is nothing new. The novelty was the field experiment itself and our decision to withhold the texts of the proposals from one group of reviewers.
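For illustration, here is a minimal sketch of how such a pairwise-reversal rate can be computed from two panellists' rankings. The rankings below are invented; only the roughly 40% figure comes from the study.

```python
from itertools import combinations

# Two hypothetical panellists' rankings of the same six applications
# (position 0 = best); invented for illustration.
rank_1 = ["C", "A", "F", "B", "D", "E"]
rank_2 = ["A", "C", "B", "F", "E", "D"]

def reversal_rate(r1, r2):
    """Share of application pairs on whose order the two panellists disagree."""
    pos1 = {app: i for i, app in enumerate(r1)}
    pos2 = {app: i for i, app in enumerate(r2)}
    pairs = list(combinations(r1, 2))
    reversed_count = sum(
        (pos1[a] < pos1[b]) != (pos2[a] < pos2[b]) for a, b in pairs
    )
    return reversed_count / len(pairs)

# The study reports a rate of roughly 0.4; two random rankings would
# give about 0.5 (a coin flip), perfect agreement would give 0.0.
print(reversal_rate(rank_1, rank_2))
```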

The most striking finding was that the proposal text did not significantly affect the ranking in the initial phase of this competition for individual research grants. When a panellist could not read the full proposal, because we had not made the text available to them, they agreed no less with the reviewer who had read the full proposal about the quality of the application. This suggests that, at least in the first stage of selection, panellists' agreement on an application's merit rests primarily on the applicant's CV, not on the full text of the research proposal.

Based on your findings, the CV of the applicant seems to count more than the content of the research. This leads to well-known scholars having an advantage.

Panellists are positively influenced when they see the CVs of well-known names, and they may assess proposals coming from those scholars more favourably. From the early stages of a career, some researchers get more opportunities than others – through a certain dose of luck, because they proposed the right idea at the right time, or for other reasons. When a researcher wins a grant, that grant is added to the CV, and thanks to the grant the researcher produces more research output. This reinforces the CV, which in turn helps in the next grant competition, when the researcher will have an even stronger record. The inequalities between people with stronger CVs and those with weaker ones therefore grow through this feedback loop, which is commonly referred to as ‘the Matthew effect’.
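The feedback loop can be illustrated with a toy simulation – not the study's model, and with an arbitrarily chosen feedback strength:

```python
import random

random.seed(3)

# Toy cumulative-advantage simulation (not the study's model): all
# researchers start out identical; each round, the chance of winning a
# grant grows with the number of grants already won, and every win
# strengthens the CV further. BOOST_PER_GRANT is an arbitrary assumption.
N_RESEARCHERS = 100
N_ROUNDS = 20
BASE_WIN_PROB = 0.05
BOOST_PER_GRANT = 0.10

grants = [0] * N_RESEARCHERS
for _ in range(N_ROUNDS):
    for i in range(N_RESEARCHERS):
        win_prob = min(BASE_WIN_PROB + BOOST_PER_GRANT * grants[i], 1.0)
        if random.random() < win_prob:
            grants[i] += 1

grants.sort(reverse=True)
# Identical researchers end up with very unequal records: early luck
# compounds through the feedback loop described above.
print("top 10 grant counts:   ", grants[:10])
print("bottom 10 grant counts:", grants[-10:])
```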

Do you believe that the findings from your experiment are applicable to selection procedures conducted by other organisations?

We must be very careful with generalisations, as selection procedures often have specific elements that differentiate them from one another. For example, I cannot say to what extent our findings from this selection organised by the Dutch Research Council apply to the grant competitions organised by the European Research Council (ERC). Still, the results of our experiment offer a few lessons, as two-stage selections for individual funding are common in organisations all over Europe. In competitions for collaborative projects, on the other hand, omitting the full proposal text could have a much larger impact than in our field experiment.

Based on your findings, what would you suggest funding organisations do?

Our work revealed that the considerable resources devoted to writing and evaluating research proposals may not have the intended effect of selecting the most promising research ideas. We also saw that randomness plays a big role in selections. Therefore, I believe we could consider using lotteries in the first stage of such selections. In fact, some funding organisations are already experimenting with lotteries – in Australia and New Zealand, but also in Europe, for example in Switzerland and Germany. The system is often hybrid: in a first stage, very low-quality or incomplete applications are discarded outright; for the remaining applications – where a line must somehow be drawn between the lowest-scoring funded proposal and the highest-scoring non-funded one – a lottery is used.
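A minimal sketch of such a hybrid procedure might look as follows; the quality floor, number of slots, and scores are invented for illustration:

```python
import random

random.seed(7)

# Hypothetical first-stage applications: (id, review score 0-10, complete?).
applications = [
    ("A", 8.9, True), ("B", 7.4, True), ("C", 2.1, True),
    ("D", 6.8, False), ("E", 7.9, True), ("F", 5.5, True),
    ("G", 8.1, True), ("H", 6.9, True),
]

QUALITY_FLOOR = 4.0   # assumed cut-off for 'very low quality'
N_TO_ADVANCE = 3      # assumed number of slots in the next stage

# Stage 1: discard incomplete or very low-quality applications outright.
eligible = [a for a in applications if a[2] and a[1] >= QUALITY_FLOOR]

# Stage 2: among the remaining applications, where no defensible line
# separates the last funded from the first non-funded proposal, the
# advancing set is drawn by lot rather than by score.
advancing = random.sample(eligible, k=min(N_TO_ADVANCE, len(eligible)))
print("advance to stage two:", sorted(a[0] for a in advancing))
```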

Lotteries could save us a lot of work and a lot of money. We should not forget that writing extensive research proposals is very time-consuming, both for the researchers writing them and for the assessors evaluating them. Scholars also spend time helping colleagues write proposals, time that could be used to do research instead. That said, the effort is rewarding in some cases, as the quality of proposals increases over time. Another positive aspect of writing grant proposals is that it forces researchers to sit down, reflect on their ideas for future work, and develop rigorous research plans.

Do you think researchers would feel treated unfairly, knowing their research ideas are selected or discarded through a lottery?

There is no solid evidence of a causal link between careful funding decisions and major research discoveries, so it is not clear that the time and effort saved would result in poorer science. Moreover, lotteries could allow some interesting out-of-the-box ideas to emerge. Finally, experimental evidence suggests that people would rather attribute a rejection to bad luck than to a negative judgement of their ideas. All these elements speak in favour of the lottery system.

That said, perhaps even more important than addressing the biases and burdens the grant review system imposes on researchers is distributing funds more equally. If we accept that it is difficult to decide how money is best spent, why not disperse it across more recipients? Not only is this fairer; often 100 scientists will be able to accomplish more with 25,000 euros each than one scientist with 2.5 million, even if we could all agree on who wrote the very best proposal. This is especially true in the social sciences, where we often do not have to purchase expensive equipment. I believe it would be better for science to give many more people a chance, whether through lotteries or through a wider distribution of funds. As the proverb says, let a thousand flowers bloom.

Read the article ‘Do Grant Proposal Texts Matter for Funding Decisions? A Field Experiment’, published in the journal Scientometrics.

