Introduction
COVID-19 tested every aspect of the scientific apparatus, but it put unique strains on the systems that allocate funding. Some funding mechanisms managed to allocate resources with dispatch. Operation Warp Speed (OWS), the most obvious example, showed that smart federal funding could generate incredible breakthroughs for society at a remarkable pace. At the same time, many features of the federal response proved too slow, failing to prevent avoidable sickness and death.
It was incredibly important to maximize the speed at which pandemic research was supported and rolled out, and multiple federal agencies tried to quickly deploy resources to pandemic-related science. But despite a near-universal sense of motivation to move quickly, the country’s largest scientific funding agencies delivered disparate outcomes. The National Science Foundation (NSF) managed to ramp up COVID-19-related granting four months faster than the National Institutes of Health (NIH), a remarkable pace during an emergency when every day mattered. The results suggest that policymakers should grant additional flexibility for our science funding institutions to expedite review during crises.
Comparing the NIH and NSF
The basic story of scientific grant funding during the pandemic is that while the NIH spent more money, the NSF was able to act much faster in getting grants out the door. The following graph compares the monthly granting rates of the NIH (purple) and NSF (orange). Grants are marked as pandemic-related if they contain words like “pandemic,” “Covid,” “SARS-CoV-2,” or other related terms in their title or description.
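The keyword-matching approach described above can be sketched in a few lines of Python. This is a minimal illustration, not the analysis's actual code: the exact term list and the grant-record fields (`title`, `description`) are assumptions.

```python
import re

# Hypothetical keyword pattern. The essay names "pandemic", "Covid", and
# "SARS-CoV-2" among the terms used; the full list is assumed here.
PANDEMIC_TERMS = re.compile(
    r"\b(pandemic|covid|sars[- ]cov[- ]?2|coronavirus)\b",
    re.IGNORECASE,
)

def is_pandemic_related(title: str, description: str) -> bool:
    """Flag a grant as pandemic-related if its title or description
    mentions any of the keywords."""
    return bool(
        PANDEMIC_TERMS.search(title) or PANDEMIC_TERMS.search(description)
    )

# Example with made-up grant records:
grants = [
    {"title": "RAPID: Modeling SARS-CoV-2 transmission", "description": ""},
    {"title": "Coral reef ecology", "description": "Long-term reef monitoring."},
]
flagged = [g for g in grants if is_pandemic_related(g["title"], g["description"])]
```

Grants flagged this way can then be grouped by award month to produce the monthly granting-rate comparison.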
The difference in speed for issuing pandemic-related grants immediately jumps out in March of 2020. The NSF’s pandemic granting overtook the NIH’s, and remained ahead for the next five months. The NSF only put out its call for pandemic-related grants in April 2020, yet it hit the peak of its granting the following month. The NIH’s granting didn’t peak until September 2020, even though it had been accepting proposals since February 2020. As we’ll see, its most successful rapid grants program mirrored some of the strengths of the NSF.
To be clear, in dollar-weighted terms, the NIH would spend about 10 times more than the NSF on pandemic-related projects over 2020. But this shouldn’t be surprising, given that the NIH has five times the budget of the NSF and received 13 times more pandemic research stimulus money in April of 2020. What is surprising is that, when the world needed it most, the NSF was able to bootstrap a funding program in months, in a field outside its usual purview, rivaling an organization five times its size.
The NSF’s pandemic spending responded to the increased importance of timing. While in normal years, both organizations have disbursed only about half the year’s money by May, the NSF spiked its pandemic spending in May 2020. While the NIH did amplify its funding of pandemic-related research, it stayed roughly within the normal schedule of the organization: a slow ramp to a peak in September, and a complete pause in October. The NIH didn’t schedule its funding of pandemic research in 2020 all that differently than in 2019. In normal years, the returns to science funding don’t vary much from month-to-month. But during a crisis like COVID-19, a one-month difference in the vaccine research timeline could have saved tens of thousands of lives.
Despite its encompassing name, the National Science Foundation was seen as the “puny partner” of federal science funding when it was created, and it still wields a smaller budget and less influence than similar agencies today. But the NSF went from funding essentially zero pandemic-related research in the past decade to sending over 800 pandemic-related grants out the door, with $140 million attached, in a single year. How did the historically puny partner make such a big turnaround in a field where it had comparatively little funding expertise?
What strategies did the NSF use
The NSF was born from Vannevar Bush’s philosophy of scientific freedom and self-governance laid out in The Endless Frontier. His philosophy focused on novelty and experimentation in science more than accountability. While the NSF has since converged on a more process-driven management style, important attributes of that founding culture remain, especially in comparison to the NIH. In particular, the NSF has Congressional authority to avoid peer review and fund projects with internal-only approval for particular types of grants.
The NSF relied on its special congressional authority to skip peer review to bootstrap its pandemic-related granting. Two pre-existing programs which use this authority enabled the NSF’s speedy response. The RAPID (Rapid Response Research) and EAGER (EArly-concept Grants for Exploratory Research) programs focus on “proposals having a severe urgency,” and “exploratory work in its early stages on untested, but potentially transformative, research ideas,” respectively. Both turn applications around quickly: while typical federal science grants take 9-12 months of review, RAPID and EAGER grants usually provide funding to researchers in less than a month.
Although the NSF’s overall spending stayed roughly even during COVID-19, we see a clear bump in the amount of money it doled out using these fast grants. More importantly, almost all the pandemic-related grants that the NSF gave out in 2020 came from these internal-only review programs.
These programs were essential to the NSF’s pandemic response, but they do have some limitations. The RAPID and EAGER programs can only give grants of up to $200,000 and $300,000 respectively, and the awards last for only one or two years. Still, the funding periods and amounts are not insignificant. At a time when many researchers could not find the resources to fund their pandemic research, a quick infusion from these programs was a lifeline.
The NSF moved faster than the NIH to fund pandemic research in the early months of 2020, and it disbursed its funds mostly through internal-only review programs. But although the NSF moved more quickly, the NIH still acted faster than it usually does and, overall, disbursed significantly more funding. During 2020, the organization managed to distribute hundreds of millions in extra funds from Congress on top of its usual budget. How did the NIH respond to the increased value of speed?
What strategies did the NIH use
The NIH descends from 18th-century naval hospitals which tested merchants and sailors for infectious diseases. For a century and a half, the NIH was a small organization. But a series of bills in the 1960s and 70s (especially Nixon’s National Cancer Act in 1971) greatly expanded its budget and purview. That legislative expansion built features of the NIH that remain to this day; most notably, Congress required the NIH to seek peer review for most of its funding decisions.
Unable to increase speed by keeping funding decisions internal like the NSF, the NIH had to adopt a different strategy to speed up its pandemic spending. The fastest part of the NIH’s COVID-19 response did not come from the largest part of its budget and focus: grants to research scientists. All of those grants are graphed above. Rather, it came from funding businesses to promote rapid scale-up of test production through the Rapid Acceleration of Diagnostic Technologies (RADx-Tech) program.
In the NIH’s database, the RADx-Tech grants which went to businesses don’t have date information more specific than the financial year, so they can’t be included in the graphs comparing the NIH and NSF at a monthly level. The other two arms of RADx, RADx-rad and RADx-UP, were focused on research scientists, so they are included in the month-by-month comparison.
Even considering the NIH’s speedy funding of businesses, the NSF’s internal review was still faster. Only 47 projects had received Phase 1 funding from RADx-Tech by January 2021, and only 30 of those had moved on to Phase 2, so including them wouldn’t substantially change the graph on grant timelines.
But compared to the NIH’s previous speed moving diagnostics from the laboratory to the mass market, the pace of RADx-Tech was blistering. The program began just a few days after Congress appropriated additional funds to the NIH in April, with an aim of adding millions of tests per week to the country’s capacity by the fall of 2020.
RADx applications went through a “Shark Tank” review process, which involved external peer review but differed from the traditional steps employed by the NIH. Since the grants were given mostly to businesses, the review panels had a large contingent of industry professionals, rather than university academics. As a pandemic-era program, a culture of speed was also built into RADx-Tech. Reviewers and applicants bypassed formalities to improve applications quickly, working together to fix formatting problems and fill in missing information. Finally, the applicant pool was small and focused. Both the applicants and reviewers were looking to expand test manufacturing capacity, streamlining both the application and the review process. The average time to funding for successful RADx-Tech applications was 35 days.
RADx-Tech helped to double the number of tests produced between the end of 2020 and the start of 2021. Its success shows that dramatically compressing funding timelines is possible even with external peer review requirements. However, some parts of the RADx-Tech program may be hard to replicate. The “Shark Tank” review process relied on experienced industry professionals donating 40-hour weeks to reviewing applications for little to no direct compensation. The RADx-Tech program was also laser-focused on expanding manufacturing. The RADx programs which focused on more traditional and wide-ranging scientific research were not able to act as quickly.
Relaxing external review requirements for certain grant categories, mirroring the NSF fast grants program design, would help the NIH replicate the success of RADx-Tech moving forward. The NIH has hundreds of scientists and professionals who are experienced in scaling up technologies and evaluating research. The NIH should be able to form effective “Shark Tank” committees from within its own staff, so that it can move fast without having to rely on donated effort and expertise from external reviewers.
The mechanics of peer review
The increase in speed when moving from external to internal peer review is largely a mechanical effect. Think about the research funding process like an assembly line. Internal review is a vertically integrated system. NSF administrators take in materials from researchers and pass them to the relevant inspectors on the factory floor, who then make a recommendation to the director, who decides whether to ship them out. Research funding isn’t a simple linear process, but with internal review, communication happens under the same roof. Colleagues who know each other and share schedules can pop down the hall to ask questions and get immediate answers. The system’s linkages are tight and results come fast.
Although we can imagine external review as an assembly line making the same product, it requires outsourcing inspections to academic subcontractors from all over the world. These academic subcontractors may be highly qualified in their fields, but it is simply harder to organize this assembly line for speed. Communication moves via email across different work schedules and time zones. These academic subcontractors aren’t compensated well (if at all), so it is harder for the NIH to compel quick results. In systems that move at the speed of the slowest piece, recalcitrant reviewers can cause serious delays.
The NIH’s ultimate adherence to its periodic funding schedule during the pandemic reveals a broader lack of flexibility and adaptability in responding to a crisis. We know that the urgency of COVID-19 research did not peak in September, when the NIH’s granting did. Even if the NIH had struck the “correct” tradeoff between peer review and speed before the pandemic, a crisis pulls the optimal tradeoff toward speed, and the NIH was unable to adjust. The congressional requirement of external peer review is likely not the sole cause of this inflexibility. It is clear, however, that the NSF’s faster response was enabled by the statutory flexibility to work around external peer review that the NIH lacks.
Of course, the speed of funding decisions is not the only thing that matters in scientific funding, even during a crisis like COVID-19. Supporting research quickly matters less if the research is low-quality. But it’s not clear that the NSF supported more low-quality research by avoiding external peer review. While external peer review is intended to provide quality control by weeding out unpromising research ideas, that approach may be a less appropriate tool to use in a crisis.
The NIH’s year-long funding process did not obviously produce higher-quality research than the quick reviews from its RADx “Shark Tank,” or the NSF’s fast grant programs. The NSF funded valuable research through its RAPID grants program, including the development of the first COVID-19 test to get FDA approval, the Johns Hopkins COVID-19 data dashboard, and both inhaled and micro-needle patch vaccines, the latter of which is currently being scaled up for use in HPV vaccines. These examples don’t conclusively show that the NSF avoided sacrificing quality control for speed, but they suggest that the NSF’s internal team of reviewers funded multiple effective projects that benefited from faster turnarounds. The benefits of speeding up these big successes when they were urgently needed outweighed the hypothetical costs of approving some below-average projects.
In crises generally, the success of a science funder is determined by its biggest wins, not by the average quality of the projects it approves. Science’s impact on the pandemic was dominated by a single technology: the mRNA vaccine. The next most important contributions, likely testing or pharmaceutical treatments, were less important than the vaccine, and the average COVID-19 research project may have had minimal impact. External peer review slows down the funding of all projects to make sure that low-quality research is not funded. This kind of bottom-end quality control is less important in a crisis environment. At crisis-response margins, it’s probably better for science funding agencies to anchor less on quality control and instead take more shots on goal.
How should policymakers respond
We use peer review by default in the scientific enterprise, both to judge the merits of scientific work for publication and to distribute grants. Although a final verdict on the use of peer review in scientific funding is beyond this essay’s scope, the comparison between NSF and NIH suggests that, at the very least, it would be beneficial for our science agencies to experiment with other grant allocation mechanisms.
The RADx-Tech program shows that in times of crisis even external review can be sped up by a relentless focus on speed. However, it may be difficult for the NIH to carry over the benefits of the “Shark Tank” review process to normal times if it must rely on external reviewers who are less motivated. To rapidly fund important projects, the NIH should be able to form “Shark Tank” review panels from its own scientists and administrators.
The NIH’s existing “exploratory research” grant mechanism, the R21, could also benefit from adopting an internal review mechanism. Carrying over procedural requirements from R01 grants to the R21 slows down funding and makes it difficult for the NIH to take advantage of high expected value projects when they are needed most. In times of crisis, the NIH should follow the NSF by tying its exploratory research grants to an expedited internal-only review.
Congress should consider giving the NIH the same kind of flexibility that the NSF has to use EAGER/RAPID grants during an emergency. The two agencies’ differing peer review requirements are the result of contingent events in their histories, not careful analysis or experimentation. The NIH and NSF both use peer review in normal times to fund high-quality research. But when the importance of speed was magnified by COVID-19, only the NSF was able to fully relax its peer review practices to respond. Correcting this imbalance would make the NIH more effective during crises.
Most debates about science funding policy focus on how much is spent, not on the merits of different spending strategies. But for interventions in education, global health, and climate, the median and the best funding strategies result in massively different outcomes. If science funding is like these fields, the gains from spending more effectively would far outweigh increases in spending. It seems likely that measuring the effectiveness of various allocation methods could yield as much of an improvement in the scientific enterprise as simply increasing the amount of money allocated to science. For instance, more rigorous experimentation could measure how program officers rank proposals compared to a peer review consensus and see if, over time, the share of low-quality scientific projects varied. We should also encourage more research on the structural features of science which enable researchers to quickly pivot to urgent work during crises.
If it’s the case that peer review stymied a more effective government response to the pandemic, other default practices in our federal science apparatus may merit closer scrutiny. Peer review is the standard process for the scientific enterprise, but that doesn’t mean it’s a good fit for federal agencies in every context. We can generate many empirical questions about the most effective funding systems, questions with testable results, and use those results to improve our federal scientific apparatus.