Metascience

Indirect Cost Recovery and American Innovation: Context and Ideas for Reform

How the government covers indirect costs directly affects scientific innovation
July 24, 2025

This policy brief is adapted from an NBER working paper by the authors, titled “Indirect Cost Recovery in U.S. Innovation Policy: History, Evidence, and Avenues for Reform.”

On April 4, 2025, a federal court issued a permanent injunction in Commonwealth of Massachusetts v. National Institutes of Health, halting the agency’s proposed change to indirect cost recovery. The case is currently under appeal.

Executive summary

A proposed NIH policy change could cut more than $6 billion in funding for American biomedical research, including more than $100 million in cuts at America’s most innovative research institutions, whose discoveries most often lead to new FDA-approved drugs and commercial breakthroughs. The NIH policy would cap the amount of funding universities can claim for the “indirect costs” of research — which include the costs of research facilities and administrative expenses, such as research security and compliance functions — at 15% of each project’s direct research costs. This would effectively shrink most institutions’ NIH funding by 15-20%. Without a large countervailing investment in scientific facilities and support functions, such losses would erode the infrastructure that enables breakthrough science and medicine.

The indirect cost recovery (ICR) system has recognizable weaknesses: it is complex, opaque, and administratively burdensome, with limited incentives for cost efficiency. Even so, it remains America’s primary mechanism for supporting the shared research infrastructure driving US innovation. Although the proposed 15% cap would simplify the administration of science funding, it could significantly reduce incentives for the highest-value research in science and medicine, which requires substantial fixed costs.

We assembled data for 354 NIH-funded institutions over the past 20 years, representing 85% of all NIH funding annually, to examine empirical patterns in ICR funding. This dataset reveals several insights that should inform how policymakers approach ICR reform. Arguably, the most important of these insights is that the current ICR system already delivers less support than it may appear: despite negotiated ICR rates often exceeding 55-60%, effective rates have for decades averaged roughly 40% due to limits on both (i) which indirect costs are eligible for reimbursement, and (ii) which direct costs they can be applied to.

We present several potential reforms to ICR policy that weigh tradeoffs between competing goals: administrative simplicity, cost control, transparency, and adequate support for research infrastructure. While no single approach to indirect cost funding is perfect, each of these alternatives strikes a different balance among the competing objectives of research policy.


What is indirect cost recovery?

“Indirect costs,” also called facilities and administration (F&A) costs, are the expenses that support a research institution’s infrastructure but that aren’t tied to any single project. These expenses include research facilities, shared lab supplies and data resources, research safety and security, utilities, and regulatory compliance functions. 

By contrast, “direct costs” are expenses that can be assigned to specific projects, such as researcher salaries, materials, or project-specific lab equipment.

Because the US primarily funds individual research projects rather than institutions, indirect cost recovery (ICR) has become a key mechanism for helping research institutions that perform federally funded research cover the full cost of that research. For example, when NIH funds a cancer research project, the direct costs might cover the principal investigator’s salary and specialized reagents, but ICR helps finance the cost of running the building, the shared imaging facility, and compliance officers who ensure that research meets safety standards. 

Institutions negotiate a fixed ICR rate with the federal government, based on audited overhead costs from previous research activity. This rate is then applied as a percentage on top of the direct costs of each new grant.

For example, if a university with a 50% ICR rate receives a grant with $100,000 in direct costs, it would also receive $50,000 in indirect cost funding, bringing the total grant to $150,000. In this case, one-third of the total grant goes to covering indirect costs:

ICR Share of Total Grant = ICR / (ICR + 100)
Example: 50 / (50 + 100) ≈ 33%
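The same arithmetic can be expressed as a short function (a minimal sketch; the function and field names are ours, not NIH terminology):

```python
def grant_totals(direct_costs: float, icr_rate: float) -> dict:
    """Split a grant into indirect and total funding.

    icr_rate is the negotiated rate as a fraction (0.50 for 50%).
    """
    indirect = direct_costs * icr_rate
    total = direct_costs + indirect
    return {
        "indirect": indirect,
        "total": total,
        # Equivalent to rate / (rate + 100) with the rate in percent.
        "icr_share_of_total": indirect / total,
    }

# A 50% rate on $100,000 of direct costs adds $50,000 of indirect
# funding; one-third of the $150,000 total covers indirect costs.
result = grant_totals(100_000, 0.50)
```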

We will discuss complications and nuances in the next section.

Evolution of indirect cost policy

The federal government began covering indirect research costs during World War II. To support wartime innovation, the US created the Office of Scientific Research and Development (OSRD). OSRD contracted with leading firms and universities across the country to conduct weapons and medical research that helped win the war, producing a wide range of developments that included radar, penicillin, and the atomic bomb.

OSRD aimed to incentivize participation in wartime research among institutions with relevant capabilities. At a time when there was no established federal policy for supporting research, it adopted a “no profit, no loss” principle as its operating model, which meant that institutions would be reimbursed for their full costs of participation without earning additional profit from the research. To ensure institutions could break even, OSRD committed to funding not only investigator salaries, but also indirect costs.

In one of its first policy decisions, OSRD set university ICR rates equal to 50% of the salaries OSRD paid on its contracts. Firms received an ICR rate of 100% because they were expected to pay taxes. 

Even during the war, federal officials and university administrators viewed the emerging ICR system as a pragmatic but imperfect solution. While it provided a simple and expedient way to support contractors’ overhead costs of research, it led to overpayment at some institutions and underpayment at others. 

The successes of wartime research made it clear that the federal government would stay heavily involved in funding research in peacetime. However, instead of a unified, clearly regulated system, each agency developed its own policies on handling indirect costs.

In 1947, the Office of Naval Research attempted a more systematic method for ICR. It used financial reports to calculate average rates for each university, with the same intended goal of “no profit, no loss.” Other agencies, including the newly formed National Institutes of Health (NIH), took a different approach. Instead of aspiring to cover the full costs of research, they aimed to subsidize research proposed by investigators, focusing on enabling specific research projects rather than simply ensuring institutions could break even on their overall federal research participation. In the years following the war, as its funding program was growing, NIH capped indirect costs at 8% of total costs. In 1954, an NIH memo said the existing 8% ICR cap was limiting the growth of research. NIH raised the cap to 15% in 1958, and again to 20% in 1963. Even then, universities said it wasn’t enough to cover their costs.

By 1966, Congress conceded that the NIH ICR caps were too limiting. NIH and other agencies moved to a system of negotiated, institution-specific rates. These rates had no set cap and were based on the amount each institution actually spent on indirect costs. 

The move to negotiated rates did not resolve the debates over the design of the indirect cost recovery system. Over time, new rules have added complexity: In 1991, a 26% cap on the administration component of indirect costs (expenses like compliance with biosecurity or research integrity regulations) was introduced specifically for universities. ICR caps on specific grant categories and expenditures were also added over time.

Critiques of the current system

Critics of the current ICR system argue that it has become complicated, inefficient, and a source of disagreement and confusion between researchers, funders, and universities. Specifically, it:

  • Reduces funding for direct research costs;
  • Encourages universities to inflate both direct and indirect costs;
  • Is difficult to audit and understand;
  • Rewards institutions that spend more on buildings and staff, even if it’s not always efficient;
  • Pays for superfluous activities, as in high-profile cases of universities charging the government for expenses unrelated to research, which have led to stricter ICR standards.

By contrast, others argue that even with today’s high negotiated rates, universities still don’t fully recover their costs, especially since many grants don’t allow indirect cost recovery at the full negotiated rate.

Despite its flaws, ICR remains an important part of how the US funds science today. It is, by a large margin, the predominant source of federal funding for infrastructure and other fixed costs that are necessary to conduct modern scientific research.

What the new data tell us

To better understand indirect cost funding, we gathered data for 354 universities and other research institutions that together account for about 85% of all NIH research funding over the past 20 years. We looked at both their negotiated ICR rates and what we call their “effective” ICR rates, which we define as the indirect cost funding an institution actually receives for every dollar of direct cost funding. 

We determine nominal rates from institutional indirect cost agreements with the government. We use NIH grant data to calculate effective rates as total NIH funding for indirect costs of research divided by total funding for direct costs of research for an institution in a given year. Using these data, we establish several findings about indirect cost recovery:

Institutions’ effective ICR rates are much lower than their negotiated ICR rates

Even though many institutions have negotiated rates between 50% and 70%, what they actually receive is much lower, typically between 30% and 50%. This difference appears to be due to limits and exceptions built into NIH grant rules, which exclude some types of grants from full indirect cost funding, and some types of direct costs from the denominator, or cost base, used to calculate ICR payments. In essence, institutions receive ICR payments based on only a portion of their direct costs.

To see how this works in practice, consider a fictional research institution, ABC University, that has a negotiated ICR rate of 60% and receives an NIH grant with $1 million in direct costs for genomic research over several years.

Imagine that the total direct cost budget breaks down as follows:

  • Salaries, materials, and other standard expenses: $725,000
  • Major equipment purchases: $200,000
  • Subaward amounts above the first $25,000 of value: $75,000

NIH excludes some expense categories, such as major equipment purchases and subaward amounts above the first $25,000 of value, from the cost base used to calculate ICR payments. Removing those two categories, ABC University’s modified total direct cost (MTDC) base is:

$1,000,000 – $200,000 – $75,000 = $725,000

With a negotiated ICR rate of 60% and MTDC of $725,000, NIH reimburses:

60% × $725,000 = $435,000

Although 60% is the negotiated rate on MTDC, the effective rate (against total direct costs) is:

$435,000 ÷ $1,000,000 = 43.5%
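The ABC University example can be reproduced in a few lines of Python (a sketch; it models only the two exclusion categories used above):

```python
def effective_icr_rate(total_direct: float, excluded: float,
                       negotiated_rate: float) -> tuple[float, float]:
    """Return (ICR payment, effective rate on total direct costs).

    'excluded' is the portion of direct costs removed from the
    modified total direct cost (MTDC) base, e.g. major equipment
    and subaward amounts above the first $25,000.
    """
    mtdc = total_direct - excluded
    icr_payment = negotiated_rate * mtdc
    return icr_payment, icr_payment / total_direct

# ABC University: $1M direct costs, $275,000 excluded from the
# base, 60% negotiated rate -> $435,000 paid, 43.5% effective.
payment, effective = effective_icr_rate(1_000_000, 200_000 + 75_000, 0.60)
```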

NIH often pays grants at a lower effective rate than the institution’s negotiated rate. What explains this gap?

Effective ICR rates, the rates institutions are actually paid, equal an institution’s indirect cost funding as a percentage of its total direct cost funding. Negotiated ICR rates are instead calculated as the ratio of an institution’s indirect costs associated with federally sponsored research to the “modified total direct costs” (MTDC) of that research. Because some categories of direct costs are excluded from this modified cost base, the denominator of the negotiated rate shrinks and the resulting rate mechanically increases, even if the institution’s indirect costs have not changed. In effect, universities must charge higher rates on a smaller permitted cost base to maintain the same level of ICR support. A growing share of direct costs flowing to spending categories excluded from MTDC may explain why negotiated ICR rates have risen steadily over the last 40 years while effective rates changed little over the same period.
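A small hypothetical illustrates this mechanical effect: holding indirect costs fixed, shrinking the MTDC base forces the negotiated rate upward (the dollar figures are ours, for illustration only):

```python
def negotiated_rate(indirect_costs: float, mtdc_base: float) -> float:
    """Negotiated rate = audited indirect costs / MTDC base."""
    return indirect_costs / mtdc_base

# $40 of indirect costs per $100 of total direct costs.
# With the full $100 in the base, the rate is 40%.
full_base_rate = negotiated_rate(40, 100)
# If exclusions shrink the base to $80, the rate must rise to 50%
# for the institution to recover the same $40.
shrunk_base_rate = negotiated_rate(40, 80)
```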

Universities have lower rates than other institutions

Among NIH grantees, universities actually tend to have the lowest ICR rates. Hospitals and independent research institutes tend to have higher negotiated and effective rates. This difference may reflect the 26% cap on the “A” in F&A (i.e., cost recovery for administrative costs), which applies only to universities, leading them to under-recover the true overhead costs associated with federally-sponsored research.

Negotiated rates have risen consistently for decades, but effective rates have been flat for 40 years

While negotiated ICR rates have risen over time, effective rates have been stable since the 1980s, hovering around 40% on average. This pattern is confirmed by NIH budget data, which show that indirect costs consistently represented 27-28% of total NIH funding from 2009 to 2021 (implying a roughly 40% effective ICR rate), despite significant changes in NIH’s total budget. The growing gap between negotiated and effective rates appears to be due to grant rules that shrink the cost base.

Effective rates are broadly similar across universities of all kinds

Whether a university is large or small, public or private, wealthy or poor, high-ranked or low-ranked, its effective ICR rates tend to fall in the same 30-50% range.

A flat 15% ICR rate would reduce most institutions’ NIH funding budgets by 15–20%

If NIH adopted a 15% flat ICR rate, most NIH grantee institutions would immediately lose around 15–20% of their NIH funding. The institutions with the most NIH funding would see the largest declines in dollar terms, with several facing reductions of over $100 million per year from this ICR policy alone.
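The 15–20% range follows from simple arithmetic: if the effective rate falls from roughly 40% to 15%, each dollar of direct costs brings in $1.15 of total funding instead of $1.40, a decline of about 17.9% (a sketch under those assumptions):

```python
def funding_change(old_effective_rate: float, new_rate: float) -> float:
    """Fractional change in total funding per dollar of direct
    costs when the effective ICR rate changes."""
    old_total = 1 + old_effective_rate  # e.g., $1.40 per $1 direct
    new_total = 1 + new_rate            # e.g., $1.15 per $1 direct
    return (new_total - old_total) / old_total

# Moving from a ~40% effective rate to a flat 15% rate cuts total
# funding by roughly 17.9%, within the 15-20% range cited above.
decline = funding_change(0.40, 0.15)
```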

These findings highlight how the reality of ICR funding may differ from some common perceptions. Any reform efforts ought to take into account effective ICR rates in order to correctly model the true impact of policy changes.

How ICR supports innovation in medicine

We collected additional data on NIH-funded science, private-sector patents that cite it, and new drugs introduced over the last 20 years to explore how NIH-supported research connects to US biomedical innovation. Institutions likely to lose the most funding under a 15% ICR cap are also those whose research is most often cited by companies creating commercial products. For example:

  • Institutions that would face a 10% larger decline in NIH funding on average under the new cap are currently cited by 30% more private-sector patents.
  • These same institutions are currently tied to nearly 50% more value from those patents.

Every university that held patents on at least two FDA-approved drugs since 2005 has an ICR rate substantially higher than the proposed 15% cap and would lose significant funding under it.

The institutions with the greatest exposure are thus also those that make the largest contributions to commercial innovation, building off NIH-funded research. Reducing their funding might create short-term savings, but could also result in slower progress in health and biomedical technology in the long term.

Weighing different policy options 

The original conception of ICR was a pragmatic but imperfect solution. It’s worth exploring the potential alternatives, including their advantages and disadvantages. 

Keep the current system of negotiated rates

The current system has several advantages: it encourages universities and other institutions to participate in federal research, and it gives them the flexibility and incentives to invest in the facilities that research requires. On the other hand, it is complicated and costly to manage, and it provides limited incentives for cost-efficiency. The current system subsidizes productive and unproductive investments in infrastructure indiscriminately, and could encourage growth in expenditures simply for the purpose of collecting more ICR funding.

Set a flat rate of 15%, as proposed by the current administration

A flat 15% ICR rate would be simple and easy to enforce, but it would significantly reduce incentives for research, especially high-potential but high-fixed-cost research that requires gene sequencing centers, imaging facilities, biocontainment labs, or clinical trial infrastructure. This could, in turn, threaten the large economic and social benefits the US has historically reaped from NIH-funded research. More generally, federal research funding is how the government has long incentivized universities to participate in public science. A 15% ICR rate could significantly alter the types of science research institutions undertake, with universities shifting away from research requiring cutting-edge facilities, equipment, and other shared infrastructure, toward smaller-scale, laboratory-based studies. The major risk is that, absent federal indirect cost recovery, US research institutions would lack the means to pay for the overhead costs of modern science. The likely impact is slower progress and a diminished competitive advantage in areas such as population genomics, advanced medical imaging, and large-scale clinical trials, which have historically driven major biomedical breakthroughs despite their large fixed costs.

We also note that if the recent NIH proposal were implemented as a 15% rate cap (as opposed to a flat rate), the reform would not realize savings in complexity and burden, as research institutions would continue to have to prepare rate proposals and complete rate negotiations to justify their rates, even for a capped rate.

Set a higher flat rate (40–50%)

A flat rate of 40-50% applied to total direct costs would bring the rate close to the median effective rate we calculated above. This would still dramatically simplify the system while providing institutions with enough funding to continue investing in research and infrastructure. It would also make universities more cost-sensitive by introducing greater cost-sharing above the fixed rate. Its main disadvantages are that it would likely overcompensate some institutions while undercompensating others, and it may reduce funding agencies’ visibility into university cost structures. Mechanisms may also be needed to decide whether, when, and by how much to revise the flat rate as research costs and opportunities evolve over time.

Benchmark to institutional peers

A different idea that sits between the aforementioned proposals is to negotiate ICR rates for groups of similar universities. Doing so would reduce the number of rate proposals needed and decrease universities’ incentives for cost inflation, while maintaining the current rate system’s incentives for research and flexibility to accommodate institutions undertaking different activities with different cost structures. Under this proposal, universities would maintain more limited financial records which could be used to set rates for other institutions in a common peer group. Universities might (for example) be grouped by research intensity (R1 vs. R2 universities), by the type of research they conduct (wet lab vs. dry lab vs. clinical research), by size, or by whether they have specialized facilities. 

Alternatively, institutions could be grouped by type (e.g., universities, medical schools, hospitals, research institutes, with one rate determined for each). An obvious challenge is determining peer groups: Overly broad group definitions risk the drawbacks of flat rates, whereas narrow group definitions would approach the current system of institution-specific rates.

Make all costs direct (“above-the-line” accounting)

A fifth alternative is to eliminate ICR and require grantees to budget a share of university overhead into the direct costs of each grant. Instead of receiving a fixed percentage increment over the direct costs of research to cover indirect costs, researchers would need to explicitly request funding for their share of facilities maintenance, administrative support, library services, and other overhead expenses as line items in their grant budgets. Above-the-line accounting would increase transparency by requiring that these costs be directly budgeted, but many costs are nearly impossible to attribute to specific research projects: how much of the university’s Internet service, library acquisitions, or building maintenance should be allocated to any single grant? 

The burden of administering this system would thus rise significantly relative to the status quo, as both researchers and administrators would need to estimate and justify these allocations for every grant proposal. It may also make institutions less willing to invest in shared infrastructure, since reimbursement would depend on researchers successfully securing grants that include adequate overhead allocations rather than being assured through negotiated rates.

Provide institutional or infrastructure grants

A final alternative is to reduce or end the use of project grants to fund overhead expenses, and to replace them with grants that specifically fund infrastructure or fund institutions through block grants. Similar to “above-the-line” accounting, this would make the object of funding explicit. However, this would challenge peer review systems and would involve transfers of large sums that may be difficult to monitor, and for this reason, may be less politically sustainable than project grants. Moreover, infrastructure grants would sacrifice some strengths of the ICR system, which permits universities to make decentralized choices to pursue emerging opportunities rather than requiring NIH to determine on its own what infrastructure is worth funding.

The NIH’s proposed 15% cap could cause significant damage to US research infrastructure, and ultimately to its scientific and technological leadership, especially at universities that contribute the most to science and medicine. Even if funding freed up by the cap were reinvested into additional research grants, increasing the total number of projects funded, the policy is likely to distort the types of research pursued. The 15% cap would steer institutions away from high-infrastructure projects that have historically driven high-impact scientific breakthroughs and supported American biomedical innovation. If this funding were simply cut rather than reinvested, the damage would be even more significant. Although each of these alternatives has drawbacks, each also has clear advantages over the proposed 15% cap.

Summary of ICR policy options

The following briefly summarizes the advantages and disadvantages of the policy reform options described above and introduced in our recent writing on ICR policy (Azoulay et al. 2025):

  • Keep negotiated rates: flexible and supportive of infrastructure investment, but complex, costly to administer, and weak on cost-efficiency incentives.
  • Flat 15% rate: simple to administer, but sharply reduces support for high-fixed-cost research.
  • Higher flat rate (40-50%): simple and close to current effective rates, but overcompensates some institutions while undercompensating others.
  • Benchmark to institutional peers: fewer rate negotiations and weaker cost-inflation incentives, but peer groups are hard to define.
  • Above-the-line accounting: more transparent, but many costs cannot be attributed to single projects, and administrative burden rises.
  • Institutional or infrastructure grants: make the object of funding explicit, but strain peer review and centralize infrastructure decisions.

Greater transparency can aid ICR reform 

The greatest risk of the proposed 15% ICR cap is that under this policy, the US university system could become a far less productive engine of American innovation, sapping the US of science that the private sector may not produce on its own, and which powers drug discovery and long-term health improvements. None of the alternatives we explored is perfect, but each provides a different balance of simplification and flexibility. Each alternative offers the potential to reduce unnecessary spending without limiting discovery, while maintaining support for the long-term strength of America’s research system.

Beyond these specific reform options, one additional opportunity is to increase the transparency of the ICR system. More broadly, transparency across all aspects of federal science funding is crucial for ensuring taxpayer dollars are used effectively and for maintaining public support for research investments. It is currently difficult to determine institutions’ true overhead costs, how ICR rates are negotiated, what explains differences in these rates across institutions or over time, or what kinds of spending ICR actually supports.

Government offices like the HHS Cost Allocation Services and the Office of Naval Research’s Indirect Cost Branch manage the negotiations with research institutions and may have comprehensive data that could help answer these questions, or else be in a position to collect it. More openness would help policymakers and the public assess what is working well or poorly in indirect cost recovery, which we believe is vital to American innovation and to improving the efficiency and effectiveness of American science policy.