With the number of startup projects exploding, funding entities are increasingly struggling to find the best cases. Thousands of proposals are submitted to funding programmes that consequently award funding to only 2–3% of them. Are we choosing the best ones? In an environment of hyper-competition, traditional scoring systems might not be effective.
About ten years ago I had lunch with the CEO of a biotech startup who had just given a presentation at a conference. Her startup was on a promising path and they had just raised a significant seed round. While talking about the intricacies of raising funds for tech companies, she told me:
“You know, I am a bit concerned about a startup bubble. I have been working as a researcher for years, and this company is the result of that work. We aim to exploit part of my research. However, I am worried. I have received quite a huge amount of money from governments and investors, and I have the feeling that none of them has really looked into my technology. They just believed what I told them to believe in my documents. I believe there will be many scams in this ‘startup bubble’.”
We all know how this has developed. We all know about stories such as Theranos and others.
The question is omnipresent: how do we effectively and efficiently evaluate startups? While the number of new startup companies continues to explode, governments, funding bodies, VCs and investors strive to find a way to do it. Most VCs do not even respond to the thousands of requests they receive by e-mail or through their websites, while governments have established mammoth evaluation structures. Startup scouts looking for the best projects flourish, and startup events take place almost everywhere, every day. Given the intrinsically high mortality rate of startups, it is now more difficult than ever to find the needle in the haystack.
The European Innovation Council (EIC) is also facing this problem.
Imagine you were to run the NYC Marathon, a highly popular and competitive race, and there were no participation limits and no corral system at the start. Would the best runners win, or would the slower ones block the whole race? Would the best runners achieve their best times? I believe the answer is obviously no. The result would be total chaos: slower runners would prevent faster runners from performing well, the whole race could become unfeasible, and there could even be accidents and deaths. That is why the NYC Marathon has deployed, over the years, a complex corral system at the start, participation caps and a lottery system.
In Europe, the biggest startup funding programme, the EIC Accelerator (formerly the SME Instrument), with a budget of €1.9 billion for 2019 and 2020 alone, receives thousands of applications every three months. The average success rate has historically been around 5%. However, the EIC reported an important growth in applications in the last call: an increase of more than 110%, with almost 4,000 proposals received for the March 2020 cut-off. That means the success rate will likely fall to around 2%, an extraordinary problem for the EIC given that around 30% of proposals usually pass the quality threshold. The evaluation process is as follows: proposals are scored, and the top-ranked ones are selected for funding after a pitching session. However, the difference in scores between the best-ranked proposals is so slim (e.g. 13.79 against 13.82) that the process is almost becoming “pure luck”, that is, random.
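A small simulation makes this concrete. The numbers below are illustrative assumptions, not EIC data: suppose each proposal has some true quality, evaluators add a bit of scoring noise, and only the top ~2% are funded. Even modest noise means the funded set overlaps only partially with the genuinely best proposals:

```python
import random

random.seed(42)

# Illustrative assumptions (not EIC data): true quality ~ N(10, 1),
# evaluator noise ~ N(0, 0.5), 4,000 proposals, ~2% funded.
N_PROPOSALS = 4000
N_FUNDED = 80
NOISE = 0.5

true_quality = [random.gauss(10, 1) for _ in range(N_PROPOSALS)]
observed = [q + random.gauss(0, NOISE) for q in true_quality]

# Top proposals by true quality vs. top proposals by noisy observed score.
best = set(sorted(range(N_PROPOSALS), key=lambda i: true_quality[i], reverse=True)[:N_FUNDED])
funded = set(sorted(range(N_PROPOSALS), key=lambda i: observed[i], reverse=True)[:N_FUNDED])

overlap = len(best & funded) / N_FUNDED
print(f"Share of funded proposals that are truly among the best: {overlap:.0%}")
```

When score gaps between finalists are hundredths of a point, the noise term, not the quality term, decides who gets funded.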
Consultants and applicant companies reportedly spend about 200 hours preparing each of these proposals, which require very extensive and dense information. That means 200 hours × 4,000 proposals = 800,000 hours, that is, more than 90 years of continuous “human labour” may have been wasted in the last EIC call!
On top of these hours, the EIC relies on a pool of thousands of evaluators who devote 2–3 hours per proposal. Beyond the additional investment of time, this reveals a total misalignment of effort: applicants spend around 200 hours on a document that will probably be scored in less than 2 hours.
There are several suggestions on how to ease the burden of such an evaluation and granting process, for instance by limiting the number of times a proposal can be re-submitted. However, this would not solve other equally striking issues, such as the inherent bias and subjectivity of the evaluation, or the power of influence concentrated in a very small number of reviewers.
More radical views propose a lottery-based evaluation process, and, surprisingly, some governments and organisations (e.g. the Volkswagen Foundation) are already using this approach. Although the author of the article linked above studies the evaluation of research projects (not startups), the process is indeed quite similar to the one the EIC uses for startups. These views propose that funding bodies “test” the lottery approach; a test and analysis whose results I would love to see. The article also lists many of the advantages of such an approach: project variety, impartiality and efficiency, among others.
What can we do?
The whole research and startup community agrees that this is indeed a complex problem with no straightforward solution. Some questions, however, should be assessed by entities such as the EIC – and by other funding bodies – on how to better evaluate the increasing number of proposals:
Could the EIC deploy a pre-validation system to limit the number of proposals? More stringent requirements could be defined in terms of technology and commercial readiness, or track record.
Should the EIC deploy a phased programme, in which projects would be validated against different milestones before receiving millions of euros? The previous Phase 1 of the SME Instrument performed that function and, despite important drawbacks, was useful as an effective way to funnel projects.
Should the EIC establish a more sophisticated tech-based system? Here the question is even more profound: could we use technologies such as AI or crowdsourcing to filter and select the best proposals and startups?
Should the EIC include a lottery system? Although a totally random selection does not seem adequate to me, a lottery applied to the best-scored projects would increase the diversity of funded projects and would allow some of the most disruptive – and riskiest – projects to receive funding, something I rarely see today.
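The "lottery above threshold" idea can be sketched in a few lines. Everything here is a hypothetical illustration – the threshold, scores and funded count are invented, not EIC parameters: scores are used only to apply the quality bar, and the funded set is then drawn at random among the proposals that pass it.

```python
import random

def lottery_above_threshold(scores, threshold, n_funded, rng):
    """Fund a random sample of the proposals whose score meets the threshold."""
    eligible = [i for i, s in enumerate(scores) if s >= threshold]
    if len(eligible) <= n_funded:
        return eligible  # fund everyone if few pass the quality bar
    return rng.sample(eligible, n_funded)

rng = random.Random(1)
scores = [rng.uniform(10, 15) for _ in range(1000)]  # hypothetical proposal scores
funded = lottery_above_threshold(scores, threshold=13.0, n_funded=40, rng=rng)

print(f"{len(funded)} funded out of {sum(s >= 13.0 for s in scores)} above threshold")
```

The design choice is that ranking only matters up to the threshold; beyond it, a 13.79 and a 13.82 have exactly the same chance, which is arguably more honest than pretending the 0.03-point gap is meaningful.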
Conclusion
Startup evaluation is a complex process that relies on inherently biased human decisions. The extremely high number of proposals that funding entities such as the EIC receive is creating an environment of hyper-competition in which the best might not succeed. I believe it is now urgent to deploy more efficient and effective evaluation solutions, incorporating heterogeneous approaches: phased processes, pre-qualification and human evaluation, but also technology- and lottery-based systems.