This is the third and final piece in Compute in America: Building the Next Generation of AI Infrastructure at Home, examining the American computing infrastructure buildout needed to support next-generation artificial intelligence. The series analyzes the technical challenges of constructing massive AI data centers, projects future computing needs based on AI development trends, and proposes policy solutions to ensure this critical infrastructure is built securely in the United States. An overview of the series is available here.
Executive summary
The artificial intelligence boom has arrived, and with it, mass industrialization. Leading tech companies are planning massive new data centers to train new models and serve them to users. These require unprecedented amounts of power — as much as 5 gigawatts (GW) for a single training cluster, equivalent to the output of several nuclear power plants.1 While American companies currently lead in AI development, there is no guarantee that America will lead the buildout of the next generation of AI computing infrastructure. Countries in the Middle East and elsewhere are aggressively courting AI companies with promises of cheap, abundant power and trillion-dollar sovereign wealth fund investments. And China has now begun its own ambitious AI infrastructure program, with the state-owned Bank of China committing $140 billion to build AI data centers for top Chinese firms such as DeepSeek, Baidu, ByteDance, and Alibaba.2
Ensuring the most advanced AI data centers are built in America will yield two large advantages. Economically, it means American firms can capture the immense value created by leading the frontier of AI development. Developing the most advanced AI systems here also means that, where necessary, we can leverage superior American intelligence and security capabilities to protect these valuable assets from theft by our adversaries.
However, we must overcome a major obstacle to rapidly build multi-gigawatt AI clusters in America: our existing energy system is simply not ready.
- The amortized cost of AI computing hardware vastly exceeds the cost of electricity for AI data centers, and this hardware is in short supply. This means that AI data centers must run 24/7, and that the coming buildout will require large-scale firm (non-intermittent) power generation.
- 74 US coal plants totaling 35 GW of generation capacity are due to be retired by the end of 2029. These plants could be kept online to power AI data centers, but they are a highly polluting source of energy, and they will not be sufficient to power the entirety of the AI data center buildout.3
- While natural gas plants can be built relatively quickly, supply chains have little capacity. GE Vernova, the world’s largest manufacturer of gas turbines, is reportedly sold out beyond 2030.4 New gas plants also face uncertain long-term economics due to rapidly improving alternatives, regulatory risk, and corporate decarbonization commitments.
- Next-generation technologies like advanced geothermal and small modular reactors show immense promise, but face financing and scaling challenges due to cost and timeline uncertainty during early stages of development.
- For all these generation technologies, permitting for building new power plants and transmission infrastructure can take years, leading to uncertain timelines.
To secure American leadership in AI, US policymakers must solve these problems. Success will require an “all of the above” energy strategy that reduces the cost of uncertainty for industry.
What is the cost of uncertainty?
Energy project planning often involves hard-to-quantify uncertainty. This can include technological uncertainty — can the project solve the necessary technical problems to deploy within the necessary time frame and below the acceptable cost? It can also come from regulatory uncertainty — will environmental review and permitting processes allow the project to be deployed in time, or will it be delayed for years?5 These and other uncertainties mean that costs and timelines can’t reliably be predicted for next-generation energy projects. Accordingly, investors often demand a higher rate of return to protect themselves.6 This makes it harder for new projects to attract funding, slowing the deployment of new power generation, and raising costs for everyone involved.
The private sector typically assumes the costs of regulatory and technological uncertainty. When federal policy goals require accelerated risk-taking, it becomes the government’s responsibility to help lower these costs. This can be achieved through the careful re-design and implementation of existing financing programs, and through permitting fast-tracks for AI-related infrastructure. Both are achievable using authorities available to the federal government.
Alongside moves to accelerate permitting processes and unlock financing for next-generation energy technologies, the federal government should enter into a strategic partnership with industry to radically improve AI security. Advanced AI systems are becoming more important from a national security perspective, but AI developers and data center operators are ill-equipped to keep them secure from sophisticated adversaries. This represents a clear market failure: it is in the American public’s interest to ensure that powerful models are not stolen and used against us by our adversaries, but American AI developers and computing firms are locked in a race with each other to build ever more powerful models. If they invest in sufficient security to protect their systems from top Chinese state-backed hacking groups, they risk falling behind. The federal government should solve this market failure by linking energy and permitting assistance to security requirements, making private investments in strong security a sensible commercial decision, rather than one that puts a firm at a disadvantage relative to its competitors.
We propose that the federal government establish “Special Compute Zones” — regions of the United States where AI clusters at least 5 gigawatts in size can be rapidly built through coordinated federal action, eventually totaling tens to hundreds of gigawatts across the country. Special Compute Zones would rely on strategic partnerships with top AI labs and computing firms. The government should help finance next-generation power plants and expedite permitting processes. In return, the government should require commitments from top AI and computing firms to invest in security and protect strategically important AI technology from the United States’ adversaries.
Because training clusters can be flexibly located based on power availability, Special Compute Zones can be planned around areas where it is possible to build quickly: including but not limited to federal lands where local control is limited, areas with existing nuclear capacity or soon-to-be-retired coal sites (where large-scale energy support infrastructure already exists), and areas with high potential for next-generation geothermal production.
This is an ambitious goal. A 5 GW cluster is around 50 times larger than the biggest AI clusters today, requiring the equivalent of multiple large nuclear power plants to support. But energy projects at this scale and speed are possible. Between 2014 and 2019, China built 25 GW of new nuclear capacity.7 And, with concerted effort from the federal government, the United States has proven it can radically speed up the development and deployment of new technologies. Since 2000, the average development time of new drugs has been 10 years.8 But in 2020, thanks to Operation Warp Speed, the first COVID vaccine was developed in just 9 months, more than 10x the normal speed, and far faster than any vaccine ever developed.9
We propose a set of ambitious executive actions:
- Appoint an AI infrastructure czar to coordinate the AI data center buildout across agencies
- Conduct a comprehensive review of land and assets available for Special Compute Zones
- Use Defense Production Act authorities to streamline permitting for new energy and data center infrastructure, and solve AI data center supply chain issues
- Establish categorical exclusions to NEPA that accelerate federal financing
- Launch new initiatives to radically improve the security of American AI infrastructure, protecting US AI technology against key adversaries.
This agenda would help America maintain its lead in artificial intelligence while protecting sensitive technology from adversaries. Parts of this agenda have already begun. In January, the Biden administration issued an executive order taking many of the basic steps required to enable the rapid deployment of multi-gigawatt AI clusters in the United States.10 However, the executive order has three important gaps:
- While it makes federal lands available for AI data centers, it doesn’t contain a comprehensive plan to speed up permitting in a manner that reduces uncertainty for industry. Nor does it contain tools or recommendations to resolve the looming supply chain shortages, particularly in gas turbines, that energy infrastructure for AI data centers will experience. Both these problems can be addressed through use of the Defense Production Act.
- It requires that AI data centers be powered exclusively with clean energy. This neglects the reality that while next-generation clean energy technologies such as enhanced geothermal and small modular reactors offer a longer-term energy solution, natural gas turbines must form a large part of the near-term solution.
- While it introduces an initial set of security requirements for AI and computing firms, these requirements are not yet sufficient to adequately protect strategically critical American AI technologies from being stolen by our adversaries.
The Trump administration should issue a new Executive Order to address each of these issues, ensuring that increasingly powerful AI systems are both built in the United States and good for the United States.
Introduction
In just the past year, Microsoft leaked plans to build a 5 GW supercomputer by 2028, announced a $100 billion investment partnership with BlackRock and UAE-based MGX to fund global AI power and data center infrastructure, signed a power purchase agreement to restart Three Mile Island’s nuclear plant, and announced it is on track to directly invest $80 billion in AI data centers in 2025.11 More recently, OpenAI and SoftBank announced “Stargate”, a new company to invest $500 billion in AI infrastructure within 4 years.12 Google, Meta, Amazon, CoreWeave, Oracle, and others are all announcing huge investments of their own.13
At the core of the AI boom is the data center. In the last piece in this series, we forecasted that the global size of the AI data center ecosystem could grow by more than 130 GW over the next five years — essentially adding another Japan to the world’s power consumption.14 While data centers that serve models to users are likely to be built closer to customers, the data centers used to train AI models will be built wherever sufficient energy is readily available. Whichever country is best at building the infrastructure used to train the most powerful AI models will have outsized influence over AI more broadly, as training is upstream of AI deployment. The location of training infrastructure is therefore central to determining who shapes the future of AI.
Despite announced investments in AI data centers by US tech companies, it’s unclear how much of this infrastructure will be built domestically versus abroad.15 AI companies are balancing commitments they’ve made to decarbonize with imperatives to deploy models quickly — imperatives that require large amounts of firm (non-intermittent) energy, typically from fossil fuels.16
Furthermore, while AI data centers built in the United States will be more secure than their foreign counterparts, American AI developers and data center operators are still ill-equipped to keep breakthrough models secure against sophisticated adversaries. Training powerful AI models is a hugely capital- and energy-intensive process, but if the technology is stolen, a model can be deployed and misused by our adversaries with relatively modest investment. Despite these vulnerabilities, American AI and computing firms are underinvesting in the level of security required to defend against nation-state-grade attackers.17 This is because firms are locked in a race with each other to build ever more powerful models. If they invest in sufficient security to protect their systems from top Chinese state-backed hacking groups, they risk falling behind. This underinvestment in AI security is a significant market failure. The theft of AI technology represents a significant risk to US national security and public safety, beyond the direct commercial consequences to firms. The key problems to address are cost and expertise. Private companies generally lack sufficient expertise to defend against the most sophisticated attackers, requiring assistance from the US intelligence community.
Energy policy provides a solution. The most valuable benefit the US can offer top AI and computing firms is easier and faster access to energy. This benefit should be used as an incentive to solve market failures in security, making investments in nation-state-grade security a sensible commercial decision across the industry, rather than putting firms at a competitive disadvantage.
The Biden Administration’s executive order, “Advancing United States Leadership in Artificial Intelligence Infrastructure,” was ostensibly designed on this premise. But it falls short in three important respects:
- While it opens up new federal lands and energy resources for AI data centers, it fails to directly address the permitting issues this creates,18 or the looming supply chain shortages, especially in gas turbines, that will hamper the data center buildout.
- It requires the AI data center buildout be powered by clean energy generation. This puts firms looking to build in the US at a disadvantage.19 Policymakers must acknowledge that natural gas turbines need to form a large part of the near-term solution, as cleaner alternatives need longer timeframes to scale.20
- While it introduces an initial set of security requirements for AI and computing firms, these requirements are not sufficient to adequately protect strategically critical American AI technologies from being stolen by our adversaries.
A better strategic partnership on AI and energy can be reached, a partnership that ensures that increasingly powerful AI is both built in the United States and good for the United States. The government should expedite permitting processes and make financing easier for energy projects. In return, it should require security commitments and collaborations from and with private companies to ensure that the most strategically important technology of our time is not ceded to our adversaries. This partnership would allow AI companies to build data centers in America using next-generation energy technologies, bootstrapping a new era of energy creation in the United States.
The last piece in this series explored the technical challenges to building the future of AI infrastructure in America. In this final piece, we propose an ambitious executive branch policy agenda to solve those challenges.
Building in America vs. building abroad
Since 2020, American companies have developed around 70% of the world’s most compute-intensive AI models, largely due to their unparalleled ability to scale computational resources for training.21 This is a crucial geopolitical advantage — developing breakthrough systems here means we can leverage superior American intelligence and cybersecurity capabilities to protect these valuable assets from theft or attacks by those who would misuse them. It also ensures that the US can meaningfully oversee the development of powerful AI models and prevent them from being misused by bad actors. Economically, it means American firms can capture the immense value created by cutting-edge AI development.
The US possesses favorable conditions for continuing to build the most advanced AI data centers at home: world-class AI, chip design, and cloud computing firms, and abundant, low-cost energy sources. Natural gas, responsible for more than 40% of US electricity generation, provides a cleaner alternative to the coal that nations like China depend on.22 The western US has promising geothermal resources with even greater potential than natural gas, and those resources overlap substantially with areas that can be readily leased by the Bureau of Land Management.
Despite the advantages to building in America, AI firms are looking abroad to fulfill their energy needs, especially in the Middle East.23 The Gulf nations offer cheap fossil fuels and trillion-dollar sovereign wealth funds that can make upfront capital expenditures, saving computing companies from having to make those investments. The United Arab Emirates and Saudi Arabia are rapidly expanding their energy supply, with the UAE expected to triple the amount of power used for data centers between 2022 and 2026.24 This shift presents a challenge to American dominance in AI infrastructure and underscores the need for strategic planning to maintain a competitive edge. However, while the Gulf nations are offering fossil fuels, most of the top firms building AI data centers have made pledges to decarbonize by 2030. If America can make it easier for them to use emissions-free, non-intermittent (“firm”) power, building data centers abroad will become less attractive. There are also other advantages for firms that build their data centers in the US:
- Designing data centers at the gigawatt scale is hard, and access to the top technical talent is key. In areas such as data center design, network architecture, and site reliability, the US has the world’s best talent.
- Companies that build outside the US risk being subjected to continuously expanding American export controls on high-end chips and other critical AI infrastructure. While the initial set of AI chip export controls in 2022 focused on China, a 2023 update to the controls added an additional 43 countries, many of them in the Middle East.25 More recently, the Department of Commerce’s “AI Diffusion Rule” created strict requirements for data centers abroad, capping the number of chips many data centers can receive.26
- Attempted theft of commercial IP from data centers (cyber or otherwise) is less likely to happen in the US, and less likely to succeed if it does. The American intelligence community has substantially more resources, expertise, and deterrence capability than comparable organizations in other countries, and is empowered to respond to attacks on assets located in the US.
To ensure that firms can realize these advantages, policymakers must address the key energy bottlenecks to building the next generation of AI data centers in the United States.
Lack of firm energy is the bottleneck
The greatest bottleneck to building multi-gigawatt clusters is building new sources of firm (also known as “non-intermittent”) power generation. The sheer scale of power consumption required by modern AI systems is staggering. OpenAI’s GPT-4, released in 2023, was reportedly trained with 25,000 GPUs, consuming around 30 MW of power, about as much as 25,000 American households.27 By contrast, “Phase 5” of Microsoft and OpenAI’s plan, slated for deployment as soon as 2028, would reportedly require a single supercomputer with as much as 5 GW of power, or 167x the size.28
High-intensity computational tasks, like training AI models, demand an enormous amount of energy because of the specialized computer hardware involved. For instance, NVIDIA’s latest flagship AI GPU, the B200, consumes 1,000 watts at peak power.29 This is almost the average power consumption of an entire American household, from a device that fits within a shoebox.30
Despite this huge power consumption, electricity is only a fraction of the total cost of ownership of an AI data center. Last year, the capital cost required to build a data center using NVIDIA’s previous generation GPU, the H100, was about $45,000 per GPU.31 Amortized over four years, this comes to around $11,250 a year.32 The power required to run the data center, on the other hand, costs just 5% of that on a per-GPU basis.33 To maximize returns on the substantial capital expenditure for these accelerators, AI data centers aim to run close to full capacity around the clock. To do that, they need around-the-clock power, either through firm power generation or with sufficient battery storage.
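These figures are straightforward to sanity-check. The back-of-envelope sketch below reproduces the arithmetic, assuming an illustrative industrial electricity price of about $0.06/kWh — an assumption of ours, not a figure from the sources above:

```python
# Back-of-envelope check of the cost and power figures above. The
# electricity price is an illustrative assumption, not reported data.
CLUSTER_POWER_MW = 30        # reported power draw of GPT-4's training cluster
GPUS = 25_000
CAPEX_PER_GPU = 45_000       # all-in data center capital cost per H100 ($)
AMORTIZATION_YEARS = 4
ELECTRICITY_PRICE = 0.06     # $/kWh, assumed industrial rate
HOURS_PER_YEAR = 8_760

# Power per GPU, including cooling and networking overhead (~1.2 kW)
kw_per_gpu = CLUSTER_POWER_MW * 1_000 / GPUS

# Annual amortized hardware cost per GPU: $45,000 / 4 = $11,250
capex_per_year = CAPEX_PER_GPU / AMORTIZATION_YEARS

# Annual electricity cost per GPU, running around the clock
power_cost_per_year = kw_per_gpu * HOURS_PER_YEAR * ELECTRICITY_PRICE

print(f"amortized capex: ${capex_per_year:,.0f}/GPU-year")
print(f"electricity:     ${power_cost_per_year:,.0f}/GPU-year "
      f"(~{power_cost_per_year / capex_per_year:.0%} of capex)")

# Scale ratio behind the '167x' figure: 5 GW vs 30 MW
print(f"5 GW / 30 MW = {5_000 / CLUSTER_POWER_MW:.0f}x")
```

At these assumptions, electricity comes to roughly $630 per GPU-year, about 5–6% of the amortized hardware cost — consistent with the figure cited above, and a reminder of why idle hardware, not expensive power, is the dominant cost risk.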
However, the US faces significant barriers to securing this consistent energy supply due to limitations in how fast new transmission lines and power generation can be deployed. Existing power grids are not equipped to handle the sudden surge in demand from AI data centers, causing bottlenecks in power delivery. For example, developers in Oregon recently found that converting a shuttered tile factory into a data center would require an extensive “line and load study” by the local power company.34 They had previously assumed power would not be a problem for their plans, but decided to build their own power source “off the grid” to avoid the delay.35
Connecting new power generation and data centers to the grid often requires new transmission lines. However, the building of new transmission lines has slowed dramatically, with annual installations dropping from 4,000 miles in 2013 to closer to 500 miles today. It takes an average of 10 years to build a new transmission line, with some taking longer than 20 years.36 Every stage of building transmission infrastructure runs into difficulty, from constructing the generation source, to constructing the lines, all the way through interconnection. Regulatory and political factors often further complicate matters, as local communities often resist new energy infrastructure projects due to concerns over property values or environmental impact.37
To avoid long lead times in connecting to the grid, US companies are increasingly considering on-site power generation solutions. By generating power “behind the meter,” they can bypass strained grid infrastructure and the complexities of securing large-scale power delivery from external sources. However, building behind the meter is still a challenging endeavor. Although AI and computing firms face an imperative to deploy data centers quickly, they have also made commitments to decarbonize and are considering increasingly cheap alternatives to gas, such as geothermal, small modular reactors, and grid-scale solar.
We need an “all of the above” energy strategy
Computing firms have balance sheets healthy enough to make significant investments in energy infrastructure here in the US — but these investment decisions face an uncertain technological, regulatory, and geopolitical environment. That uncertain environment leads to delays in an infrastructure buildout, and those delays threaten both technological leadership and the investment value of energy projects, which typically have payback periods longer than the useful life of an AI data center.
Every energy option — whether well-established or a next-generation breakthrough — has its own set of risks and opportunities. Natural gas turbines are proven but face stretched supply chains, regulatory uncertainty, and price volatility. Nuclear is emissions-free, but it’s costly, and its supply chain has long lead times. Advanced geothermal has tremendous potential but has yet to demonstrate gigawatt-level energy generation.
Given this reality, the appropriate federal policy is an “all of the above” energy abundance strategy that lowers the costs associated with investments in all energy types and enables individual companies to take risks on the technologies that align with their preferences.
Natural gas is viable for short-term deployment, but not without challenges
To meet immediate power needs, many companies are considering on-site natural gas plants. Investing in natural gas plants to power AI data centers is a straightforward business decision, especially given the rapid deployment potential of combined-cycle gas turbines, which can be built in as little as three years. However, while natural gas is likely to remain a key part of the overall energy mix, it faces uncertainties that could encourage investment in other technologies. Supply chain shortages, existing decarbonization commitments, regulatory uncertainty, and stranded asset risk all make the decision less straightforward than it might initially appear:
- Supply chain shortages: Demand for natural gas turbines has surged over the past few years, driven by demand for data centers. It was recently reported that power company GE Vernova could “sell out” gas turbines for the next ten years on data center demand alone.38 The surge in demand could create supply chain issues — late last year the Siemens CEO warned that shortages in turbine blades could threaten their capacity to meet demand.39
- Decarbonization commitments: Many American hyperscalers have also made commitments to decarbonize by 2030. Abandoning these commitments can lead to reputational risks and erode stakeholder trust, affecting relationships with customers, investors, and regulators who value sustainability.
- Uncertain regulatory environment: Long delays in the interconnection queue limit firms’ confidence that they can hedge a behind-the-meter investment by eventually connecting the plant to the grid and selling into the electricity market. While the Trump administration is unlikely to impose new environmental regulations on natural gas, that could change in a few years due to changes in government. And there are existing regulatory costs to consider, including a rule the EPA finalized last year requiring new gas plants to achieve 90% carbon capture by 2035.40 The incoming administration is almost certain to roll back this requirement. However, rescinding rules under the Administrative Procedure Act takes time, and rushing the process caused the first Trump administration to lose many lawsuits that administrations typically win.41 Compliance with such regulations could entail substantial additional costs and operational challenges, decreasing the value of the investment.
The natural gas generation sector as a whole is likely to grow and remain profitable. But these risks, along with gas price volatility and major reductions in the cost curve for other technologies, come together to present a significant “stranded asset” risk for natural gas plants, where the economic viability of the plant diminishes before the investment has been fully recovered. Investors could face substantial financial losses if the plant becomes obsolete or underutilized due to technological advancements or shifts in energy demand. According to some projections, the majority of new gas plants will face stranded cost risks by the early 2030s.
Stranded asset risks are especially salient in the case of AI data centers, where the life of the computing hardware used (the largest piece of the total cost of ownership) is around four years, and where better chips are now released every year.42 Contrast this to natural gas turbines, where the payoff period is closer to 20 years.43 In an environment of high uncertainty, compute firms will constantly reevaluate investment decisions — and the short useful lifetime of AI hardware presents natural decision points to reevaluate their energy mix. Across the lifetime of a behind-the-meter gas plant, the data center owner will have multiple opportunities to either refresh their hardware at their existing site, continuing to use power from the gas plant, or to set up the new hardware in a different location and use a more competitive source of power. At that point, the capital costs to set up the gas plant are “sunk” and shouldn’t play a role in future investment decisions. The tradeoff that compute firms will face is between the continued operating costs of the natural gas plant (primarily fuel) and the capital expenditures to build or use a new source of energy (plus associated operating costs). As the costs for grid-scale solar and battery storage have been decreasing exponentially, the choice may be less obvious in the near future.
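A minimal sketch of that refresh-point decision, with purely hypothetical cost figures: once the gas plant exists, only its marginal cost per MWh matters, and it is compared against the full levelized cost of any replacement.

```python
# Refresh-point decision sketch. The gas plant's capex is sunk, so the
# comparison is marginal cost (fuel + O&M) vs. the full levelized cost
# of a competing new source. All $/MWh figures are hypothetical.

def keep_gas_plant(gas_marginal_cost: float, alternative_lcoe: float) -> bool:
    """True if the existing plant's marginal cost beats the alternative's
    all-in (capital + operating) levelized cost, both in $/MWh."""
    return gas_marginal_cost < alternative_lcoe

# Today: gas fuel + O&M at $40/MWh vs. firm solar-plus-storage at $70/MWh
print(keep_gas_plant(40, 70))   # True  -> refresh hardware on-site

# At a later refresh, if storage costs fall and firm solar reaches $35/MWh,
# relocating wins even though the gas plant was already paid for.
print(keep_gas_plant(40, 35))   # False -> move to the cheaper power
```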
Besides solar and battery storage, on-site natural gas plants will also have to compete with other sources of next-generation firm on-site power, such as enhanced geothermal and small modular nuclear reactors. Both technologies are seeing a wave of investment and could become cost-competitive with natural gas generation well before the cost recovery period of a new gas plant has concluded.44 Enhanced geothermal is especially promising for near-term deployment, thanks to advanced drilling and fracking technologies adapted from the oil and gas industry.45
Given the importance of speedy deployment, natural gas is still likely to be the chief near-term solution for computing firms. But this threatens the efforts from major AI companies to achieve net-zero carbon emissions. Computing firms are attempting to overcome this challenge with a trade, adding emissions-free energy generation to the grid in the future in exchange for the carbon sources they are adding to fuel their data centers now. This is a positive development, but newer emissions-free technologies face considerable barriers to development.
Nuclear and carbon capture face scaling challenges
In its Pathways to Commercial Liftoff report on advanced nuclear, the Department of Energy (DOE) underscored the challenges of building out a nuclear workforce and supply chain.46 The scale-up of advanced nuclear through 2030 faces significant workforce and supply chain constraints, even for relatively modest deployment targets. By 2030, the industry will need approximately 50,000 workers just for construction and manufacturing, including about 10,000 skilled craft workers (welders, electricians, etc.). This represents a significant challenge, given that the current nuclear workforce of around 100,000 primarily focuses on operating existing plants.
Supply chain constraints are particularly acute in the near term since building capacity takes significant time and investment. To support SMR deployment by 2030:
- The US currently has only about 2,000 MT per year of uranium mining/milling capacity, which would need to begin scaling up significantly by 2030 to support future deployments.
- Conversion capacity is limited to 7,000 MT per year at a single facility (ConverDyn).
- Current fuel fabrication capacity of 4,200 MT per year would need to begin expanding.
- The US has no domestic large forging capacity, and even if all vendors obtained N-stamp certification, could only support about 3 GW per year of production.
- There is a lack of commercially available domestic High-Assay Low-Enriched Uranium (HALEU) fuel production capability, which is needed for many advanced reactor designs hoping to deploy by 2030.47
Supply chain constraints are especially challenging because developers must secure a robust order book to maintain expertise, increase construction productivity, and enable bulk ordering, which can lower first-unit costs by about 15% compared to a single, standalone build.48 Without this demand signal, suppliers are reluctant to invest in expanded capacity. However, potential customers are hesitant to commit without confidence in supply chain readiness, creating a challenging stalemate that could delay 2030 deployment goals.
For carbon capture, the challenges are similar.49 The workforce and supply chain challenges in both technologies are compounded by the “chicken and egg” problem — suppliers are reluctant to invest in expanded capacity without firm orders, while potential customers are hesitant to commit without a robust supply chain in place.
Geothermal has built-in advantages for scaling, but still needs to be demonstrated at scale
Advanced geothermal energy systems have built-in advantages that other next-generation technologies don’t: high-quality, shovel-ready resource areas, an established workforce, and deep supply chains.50 But geothermal systems have yet to demonstrate utility-scale generation. Once that demonstration occurs, geothermal should scale up quickly because it can rely on the deep workforce and supply chains of the oil and gas industry. It is also a source of near-limitless energy — vastly more than that available in fossil fuels.
Approximately 61% of the current oil and gas workforce has skills directly transferable to geothermal development and operations, with only minimal retraining needed.51 The workforce skills that readily transfer include drilling engineers and crews, reservoir engineers, well completion specialists, geologists and geophysicists, and project managers with subsurface experience. Transferable materials and technology for production from oil & gas include drilling rigs and equipment, well casing and cementing materials, downhole tools, reservoir modeling software, and seismic monitoring equipment.
A simplified hypothetical demonstrates a clear path to achieving 5 GW of geothermal energy production in a single region by 2030. Each Enhanced Geothermal System (EGS) triplet comprises one injection well that sends cold water into the fractured subsurface and two production wells which send hot water to the surface to produce electricity. A triplet is capable of generating 10 MW of power. To reach a production level of 5 GW, we would need to deploy 500 EGS triplets, which means drilling 1,500 wells total (assuming that the energy production capability of a triplet doesn’t increase).
Assuming that identified geothermal resources can produce energy, and that the necessary capital is available to develop them, there’s little reason to think the US workforce and shale supply chain can’t support an additional 1,500 wells in the next five years. Across the last five years, a far greater number of wells were drilled in each of the major shale regions in America.
If we convert historical rates of oil/gas well drilling to equivalent geothermal power potential, the number of wells drilled across these five regions over the last five years could support over 130 GW of geothermal power production.
Even assuming frictions associated with transferring workforce and supply chains from oil and gas to geothermal, an increase of 1,500 wells over five years should be absorbed quite easily by the industry, given the vast geothermal resources available in suitable regions.
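The arithmetic behind this hypothetical fits in a few lines. A minimal sketch, using the 10 MW-per-triplet assumption above; the 130 GW figure applies the same wells-to-megawatts conversion to historical shale drilling:

```python
# EGS capacity arithmetic, using the assumptions above: one triplet =
# 3 wells (1 injection + 2 production) = 10 MW of generation.
MW_PER_TRIPLET = 10
WELLS_PER_TRIPLET = 3

# Wells needed for a 5 GW Special Compute Zone
target_mw = 5 * 1_000
triplets = target_mw // MW_PER_TRIPLET          # 500 triplets
wells = triplets * WELLS_PER_TRIPLET            # 1,500 wells
print(f"5 GW -> {triplets} triplets, {wells:,} wells")

# Inverting the conversion: 130 GW of equivalent geothermal capacity
# implies roughly 39,000 wells drilled over five years
implied_wells = (130 * 1_000 // MW_PER_TRIPLET) * WELLS_PER_TRIPLET
print(f"130 GW equivalent -> ~{implied_wells:,} wells over five years")
```

On those numbers, the 1,500 wells needed for a single 5 GW zone amount to under 4% of the implied recent five-year drilling activity.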
Next-generation energy projects need better financing mechanisms
The companies at the forefront of artificial intelligence want to build data centers as quickly as possible, and they want clean, non-intermittent power to do it. However, next-generation technologies like small modular reactors (SMRs) and enhanced geothermal are being deployed too slowly, and investors are hesitant to support these projects due to the cost of uncertainty: the inherent risks involved in supporting new technologies. To move from small demonstration projects to deployment at the scale necessary for multi-gigawatt clusters, policymakers must find ways to lower the cost of uncertainty and make it straightforward for investors to support next-generation technologies.
The cost of uncertainty hinders power generation deployment
Financing energy projects traditionally requires three instruments: equity investments, debt, and offtake agreements (agreements from purchasers that guarantee future revenue). Equity investors are last to be paid and thus bear the most risk, but their participation often helps attract debt investors who provide the bulk of a project’s funding. Both debt and equity investments boost a project’s credibility, prompting purchasers to enter into offtake agreements. Those agreements then make it easier to bring in more equity and debt investors.
Securing these instruments is typically easier for established technologies with known costs, quantifiable risks, and performance characteristics than it is for innovative projects that face genuine uncertainty. Permitting and regulatory timelines, physical feasibility, and material bottlenecks cannot easily be projected for new power generation projects. This uncertainty creates cascading problems — investors are hesitant to commit large amounts of capital to projects that may take years to generate returns, or may never generate returns. This hesitancy creates a financing gap, slowing the deployment of power generation.
None of these participants — lenders, equity investors, and purchasers — wants to own the cost of uncertainty: the possibility that they’re on the hook for any number of scenarios where costs increase beyond what was previously expected. For any given project, investors will have a minimum acceptable rate of return — the “hurdle rate.” But when some costs have high uncertainty and are impossible to quantify in advance, investors will require extremely high hurdle rates, even if broader financial markets or models suggest a lower hurdle rate.52 This makes the financing gap particularly difficult to fill, as each potential source of capital has strong incentives to avoid bearing the burden of these uncertain outcomes. When cost overruns or delays occur, the question of who ultimately bears that cost becomes contentious, leading many financiers to simply avoid the sector entirely.
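To see how the hurdle rate drives viability, consider a minimal sketch with hypothetical cash flows: the same project clears an 8% hurdle comfortably but is deeply underwater at the 14% that uncertainty-averse investors might demand.

```python
# How an uncertainty premium on the hurdle rate flips a project's NPV.
# All cash flows and rates are hypothetical.

def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value; cashflows[0] is the upfront outlay at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical plant: $1B upfront, then $120M/year net revenue for 25 years
cashflows = [-1_000.0] + [120.0] * 25   # in $ millions

print(f"NPV at  8% hurdle: {npv(0.08, cashflows):+,.0f} $M")  # ~ +280M
print(f"NPV at 14% hurdle: {npv(0.14, cashflows):+,.0f} $M")  # ~ -175M
```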
Debt financing proves even more challenging. Banks and other lenders typically require predictable cash flows and proven technology before extending loans. In a high interest rate environment, they become even more risk-averse, preferring to lend to established technologies like wind and solar that have extensive operational track records. Without this historical data to assess performance and reliability, lenders either decline to participate or demand prohibitively high interest rates.
Tax equity, which has been crucial for renewable energy deployment, is also difficult to secure. Tax equity investors — typically large banks and insurance companies — seek stable, low-risk investments that generate predictable tax benefits. They’re generally unwilling to take technology risks, preferring proven technologies that can reliably generate the tax credits they’re seeking to monetize. If an investor can get clean energy from a solar project rather than a small modular reactor (SMR), there is little additional value to be gained by taking on new technological and development risk.
Offtake agreements, such as power purchase agreements (PPAs), can also be challenging to obtain.53 Utilities, which are often the natural buyers of power, face strict regulatory oversight and must justify costs to their ratepayers. Both utilities and their regulators want to minimize technological and development risks, so they push onerous terms onto developers. These terms can make it more difficult for projects to secure equity or debt investments, which ultimately limits the speed of development.
Public utilities commissions, which regulate utility rates, further complicate matters. They must balance supporting innovation against protecting ratepayers from potential cost increases. When faced with uncertain technology that could lead to rate increases, they often opt for more conservative, proven alternatives. The result is a bleak environment for financing next-generation energy projects. Breaking the logjam likely requires either reducing the uncertainty or shared ownership of the uncertainty.
So far, computing firms are mostly not owning the cost of uncertainty
Next-generation energy technology firms are responsible for finding investors and purchasers to provide equity, debt capital, and offtake agreements for projects. All three financial instruments are typically present in development projects. On its own, filling out the puzzle of project finance with different forms of financing can be time-consuming. Firms are in a constant cycle of deciding to invest in incremental advances that reduce uncertainty. For example, Exxon uses a “stage-gate” methodology that breaks the development process into steps with milestones along the way to reduce risk:54
While the methodology varies depending on the product type, some key points are generally applicable:
- Early stages might include academic research studying potential methods or applications, sometimes with government funding.55
- Once research has been tested or demonstrated some practical viability, developers might seek investment for a small demonstration project, either from the government in the form of a grant or from private investors — typically venture capital purchasing equity.
- Once a technological demonstration at a small scale is proven, the developer will try to deploy a “first-of-a-kind” (FOAK) demonstration project to show that the technology is viable at commercial scale. This will require more private investment and, depending on the size of the project, some debt financing — typically either from government sources or from private investors willing to take on risk for a higher upside.
- If the developer succeeds in demonstrating technological viability at commercial scale, they can turn to developing second-of-a-kind (SOAK) and third-of-a-kind (THOAK) projects, in which they work on demonstrating cost reductions that can lead to eventual profitability.56 The viability demonstrated by the FOAK project may encourage a wider array of investors and debt financiers to participate.
- The final stage is wide commercial deployment for the “Nth-of-a-kind” projects, which are typically financed primarily through widely available debt from banks.
Each stage represents an incremental advance and a corresponding reduction in uncertainty. That reduction can bring in new investors. For example, advanced geothermal companies like Eavor and Fervo demonstrated the viability of their designs with small-scale pilot projects, with Fervo’s Project Red producing 3.5 MW.57 Following these demonstrations, firms raise money from more investors for further, larger-scale demonstrations. Because each incremental stage is limited in scope, the cost of uncertainty is inherently limited — there’s only so much you can lose in a narrowly defined project. But it also makes reaching commercial-scale energy projects for a new technology a laborious and time-consuming process.
Accelerating this process will require investors to take on more uncertainty, sooner. Given the huge investments American hyperscalers are making in data centers, they are the most attractive candidates for sharing ownership of uncertainty with energy developers. So far, computing companies have shared that ownership almost exclusively through PPAs. Microsoft has signed a PPA with Helion Energy to purchase fusion energy by 2028, marking the first commercial commitment to buy fusion power.58 It has also signed a deal to restart the Three Mile Island nuclear plant, committing to buy 100% of the plant’s production for 20 years above market prices.59 Google has entered agreements to procure geothermal energy.60 Meta has signed PPAs with solar and storage projects totaling over 1.2 GW of contracted power as of December.61
Premium PPAs make sense for these firms — they enable hyperscalers to pay only for the power they use. But PPAs are most useful where technological risk is sufficiently low. Microsoft’s recent deal to pay a premium for Constellation Energy’s power at the Three Mile Island nuclear plant illustrates this.62 While restarting a plant involves uncertainty, its proven technology reduces risk compared to newer options. Microsoft agreed to buy 100% of the plant’s output over 20 years at a rate higher than market price. The premium justifies Constellation’s investment in restarting the plant for its shareholders, but it is a small cost for Microsoft, relative to other costs involved in building a data center. Because the specialized chips used in AI data centers are so expensive, the cost of electricity is only around 5% of the total cost of ownership.63
Securing a PPA for a next-generation energy project does not guarantee that the project will attract enough upfront capital to be built. That’s because investors and developers cannot easily determine whether even a premium PPA can cover the cost of uncertainty. This was recently demonstrated by the failure of the deal between NuScale, the first SMR company to have its design approved in the US, and a consortium of utilities in Utah. The initial PPA was for 26% of production at $59/MWh. However, unexpected volatility in materials prices and interest rates dramatically raised project costs, leading NuScale to propose a new agreement for 80% of production at $89/MWh to make the project financially viable. The consortium declined, and the deal fell through, leaving a strategically important power generation technology strangled in its cradle.64 To build a future of abundant energy in America, the federal government must find ways to lower this type of uncertainty for investors, either through regulatory reform or with financing mechanisms that allow it to take some ownership of the uncertainty.
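A rough illustration of how sharply the deal’s terms shifted, assuming a hypothetical 462 MW plant running at a 95% capacity factor (both figures are our illustrative assumptions, not terms from the agreement):

```python
# Contracted-revenue comparison for the NuScale deal's original vs.
# proposed terms. Plant size and capacity factor are assumptions.
PLANT_MW = 462
CAPACITY_FACTOR = 0.95
HOURS_PER_YEAR = 8_760

annual_mwh = PLANT_MW * CAPACITY_FACTOR * HOURS_PER_YEAR

original = 0.26 * annual_mwh * 59   # 26% of output at $59/MWh
proposed = 0.80 * annual_mwh * 89   # 80% of output at $89/MWh

print(f"original terms: ${original / 1e6:,.0f}M/year")
print(f"proposed terms: ${proposed / 1e6:,.0f}M/year "
      f"({proposed / original:.1f}x the original commitment)")
```

On these assumptions, the consortium was being asked to more than quadruple its annual commitment — the kind of repricing that a PPA alone cannot absorb.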
Upfront investments and debt financing are the path forward
A promising new structure for data center energy investment has been deployed by Amazon in a recent round of projects. The projects pair PPAs with a range of upfront investments: a $500 million direct investment in X-energy (an SMR firm), funding for a 320 MW SMR feasibility study with Energy Northwest (a utility consortium in the Pacific Northwest), and an initiative with X-energy to standardize deployment and financing models to more rapidly scale SMR deployment nationwide.65
This use of multiple financing mechanisms must become more common to deploy next-generation power at scale. But to date, the investments mentioned above are the only upfront investments by a hyperscaler to deploy a next-generation energy technology. Part of the cost of uncertainty can be shared by big companies by investing directly in the project capital stack or by utilizing their strong credit ratings and balance sheets to raise debt in the capital markets for projects. But where the cost of uncertainty comes from structural policy barriers to building quickly, the federal government should act to lower them. Well-resourced companies in other industries have historically put capital upfront to support power generation for their energy-intensive main operations, especially in the metals and mining industries, which often have a willing partner in national governments. Anglo American, one of the largest mining companies in the world, has directly financed energy projects for its mining and processing operations in Africa, as has B2Gold, a smaller Canadian mining company.66 American technology companies, with plentiful cash on hand, have a mostly untapped opportunity to deploy capital upfront to ensure sufficient energy supply for their operations.
With upfront investments and PPAs in place, the main remaining barriers to large-scale deployment of next-generation energy technologies are debt financing, licensing, and permitting. These are areas where decisive policymaker action is needed to ensure the future of AI is built in America.
The policy playbook for Special Compute Zones
We propose establishing “Special Compute Zones”: regions in the US where the federal government makes it easier to rapidly build AI training clusters at least 5 gigawatts in size, eventually totaling tens to hundreds of gigawatts across the country. Because training clusters can be flexibly located based on power availability, Special Compute Zones can be planned around areas where it is possible to build quickly: including but not limited to federal lands where local control is limited, areas with existing nuclear capacity or soon-to-be-retired coal sites (where large-scale energy support infrastructure already exists), and areas with high potential for next-generation geothermal production.67
This is an ambitious goal. Each 5 GW cluster would be around 50 times larger than the biggest AI clusters today, requiring the equivalent of multiple large nuclear power plants to support. But energy projects at this scale and speed are possible. In 1953, President Eisenhower gave his “Atoms for Peace” speech, laying out a plan for commercial nuclear power generation. Just ten months later, ground was broken on America’s first commercial nuclear power plant, using a reactor supplied by the US Navy. Three years later, the plant was complete.68 In the five years following Eisenhower’s speech, nine new nuclear power plants began construction in the United States, with an average build time of just four years.69 In contrast, prior to Plant Vogtle, which first started operating in 2023, no US civilian nuclear plant had started construction in 30 years, and the Vogtle plant was seven years late and $17 billion over budget.70
We must work to make the feats of the 1950s possible today. Our competitors are already at work. Between 2014 and 2019, China built 25 GW of new nuclear capacity. Many of these plants are the US-designed AP1000, or Chinese variants of the design.71
With concerted effort from the federal government, the United States has proven it can radically speed up the development and deployment of new technologies. Since 2000, the average development time of new drugs has been 10 years.72 But in 2020, thanks to Operation Warp Speed — a US public–private partnership to accelerate the development of COVID-19 vaccines — the first COVID vaccine was developed in just 9 months, more than 10x the normal speed, and far faster than any vaccine ever developed.73
To deploy new energy infrastructure that can support massive AI clusters, the US government must reduce the cost of uncertainty for computing firms: by sharing costs and by radically accelerating timelines for permitting and regulatory approval.
Alongside this support, the US government should also ensure that powerful AI clusters being built in America are built securely enough to protect breakthrough systems from being stolen, misused by bad actors, and used to support China’s AI development and military modernization. The AI technology currently being developed by American firms has the potential to re-shape the global balance of power. This technology is quickly becoming a top target for state-backed cyber attacks. Accelerating the deployment of multi-gigawatt clusters in America will count for little if the products of huge investments in chips and energy can easily be stolen by our adversaries. The security of breakthrough AI systems is heavily dependent on the security of the data centers used to develop and host them, but the infrastructure used to train and deploy today’s most powerful models is far from secure enough to defend against the most sophisticated attackers.
Alongside moves to help finance next-generation energy technologies and to drastically reduce permitting and build times, the federal government should require the builders, operators, and users of the next generation of AI data centers to invest in sufficient supply chain, personnel, hardware, and network security to defend against the most sophisticated attackers, minimizing the chance that breakthrough AI systems built in America can be stolen and misused, and protecting national security. The government should also commit to assistance in this endeavor: in design and red-teaming of secure data centers, clearing of personnel, and helping to secure critical supply chains.
Below are a series of actions the Trump administration can take to ensure the future of AI is both built in America, and good for America.
Executive branch actions
Here, we outline a coordinated federal strategy to expedite the development and deployment of AI infrastructure in the United States:
- Appoint a czar to coordinate the AI data center buildout.
- Conduct a comprehensive review of land and assets available for Special Compute Zones.
- Harness the Defense Production Act to solve AI infrastructure permitting and supply chain issues.
- Establish categorical exclusions to NEPA that accelerate federal financing.
- Launch new initiatives to radically improve the security of American AI infrastructure, protecting US AI technology against our adversaries.
Together, these actions form a comprehensive, unified approach to rapidly expand America’s advanced computing capacity, while balancing national security, economic interests, and environmental considerations.
1. Appoint an AI data center czar
This project would be one of the most ambitious infrastructure projects in American history, equaling or exceeding the scale of the Apollo Program or the Interstate Highway System. Coordinating federal policy, supply chains, and a vast array of private stakeholders to deliver gigawatt-scale clusters in Special Compute Zones will be a complex undertaking, requiring a dedicated point person with strong leadership and vision.
We propose establishing a new role — an “AI data center czar”. The position would require an individual with three key qualifications: executive branch experience to navigate federal bureaucracy, deep understanding of energy infrastructure to address power requirements, and private sector experience to facilitate public-private partnerships. Their mandate would include coordinating the identification of federal lands for development, streamlining permitting processes through categorical exclusions, leveraging existing energy infrastructure through programs like the DOE’s Loan Programs Office’s Energy Infrastructure Reinvestment program,74 and working with industry to radically improve the security of American AI infrastructure.
The czar would serve as the primary liaison among agencies including NIST and the Departments of Energy, Defense, and the Interior, ensuring consistent policy implementation while protecting national security interests. Their office would oversee the comprehensive review of federal assets suitable for AI infrastructure, similar to recent supply chain assessments, while establishing clear processes for private sector engagement. By consolidating oversight of these initiatives under a single office, the federal government can accelerate deployment of advanced computing infrastructure while maintaining environmental and security standards.
2. Conduct a comprehensive review of land and assets available for Special Compute Zones
The federal government should undertake a comprehensive interagency review of federal lands and assets suitable for AI data center projects. Such a review would identify both available locations and existing energy infrastructure, drawing on successful precedents to guide policy decisions and catalyze private-sector investments.75 The output of such a review would be an inventory of assets available for Special Compute Zones. While federal lands are an important federal resource, the federal government should also work with the private sector to identify potential assets, public or private, that could be utilized for energy infrastructure through existing federal programs, like the Title XVII programs at the Loan Programs Office.
A recent example might offer a model of public-private collaboration to leverage federal assets with private sector investment. Amazon’s collaboration with X-energy and Energy Northwest, reportedly near the Hanford site in Washington, illustrates how a location steeped in nuclear history can pivot to support cutting-edge technology.76 Hanford, once the world’s first large-scale plutonium production facility, was instrumental in the development of nuclear weapons during World War II. While it later became the most contaminated nuclear site in the country, cleanup efforts led by the Department of Energy (DOE) and partnering agencies have made substantial progress, turning Hanford into a unique resource for nuclear infrastructure.
Today, part of Hanford remains under DOE ownership and has been leased to Energy Northwest, home to the Columbia Generating Station — the only operating nuclear power plant in the Pacific Northwest. Siting a small modular reactor (SMR) project there is likely a strategic decision to use a built-in infrastructure for all of the challenges associated with nuclear energy. Though much work remains to remediate sections of Hanford and manage high-level radioactive waste, the site’s transformation indicates how previously restricted or heavily impacted federal lands can be repurposed for new energy-intensive ventures. This same principle can be applied to large AI data centers: federal properties that possess built-in energy infrastructure or that lie near transmission corridors can offer developers substantial savings in time, cost, and risk.
The executive branch can take the following actions to identify land and assets for Special Compute Zones.
Identify federal lands available for development
The first goal of the proposed interagency review is to compile an inventory of federal lands suitable for AI data center deployment. Like the 100-day supply chain review that assembled a comprehensive view of vulnerabilities in critical industries, this process would bring together agencies such as the DOE, Department of the Interior, Department of Defense, and the General Services Administration. Each would detail which parcels of land are currently underutilized or have the potential to support new data center facilities. This unified effort would highlight previously overlooked sites and help inform private-sector decisions, much as the supply chain review helped identify manufacturing bottlenecks.
Identify existing energy infrastructure for Section 1706 projects
An equally critical objective is to catalog existing energy assets that could be upgraded or repurposed under the DOE’s Loan Programs Office (LPO) authority in Section 1706 of Title XVII of the Energy Policy Act of 2005. Section 1706 targets reinvestment in underutilized or carbon-intensive facilities, from idled coal plants to legacy transmission networks, to make them cleaner and more efficient. Many of these facilities reside on or near federal land, or within regions that have historically depended on federal power marketing administrations. By modernizing these sites — through advanced generation technologies, energy storage, or other systems — developers can meet the scale required for AI training clusters while tapping into existing rights-of-way, reducing both permitting hurdles and project timelines.
Identify previously disturbed lands for categorical exclusions
A third priority is identifying previously disturbed lands that qualify for expedited development. Under the 2005 Energy Policy Act, certain projects on disturbed or developed sites can receive categorical exclusions from lengthy environmental reviews. Recent reforms have expanded categorical exclusions for federal agencies, especially the DOE. Changes to key categorical exclusions now allow for powerline relocations, energy storage facilities, and solar PV installations within previously disturbed areas, removing onerous acreage limitations and focusing on practical mitigation.77 Furthermore, following the 2023 Fiscal Responsibility Act, agencies can now adopt the categorical exclusions of other agencies78 — for example, the DOD could adopt categorical exclusions issued by the DOE. By clearly designating such lands for AI data center deployment, the government would lower the cost of uncertainty by simplifying the regulatory process — an approach that directly benefits large-scale, capital-intensive AI projects.
Identify land available for acquisition
Policymakers should also study options to acquire land or property through the contractual authority under Section 161g of the Atomic Energy Act, codified at 42 U.S.C. § 2201(g), which authorizes the acquisition of property through various means including purchase, lease, condemnation, or otherwise. However, the scope of this authority has important limitations: the acquisition must be “for the purpose of the Act,” meaning it must serve the purposes outlined in the Atomic Energy Act, which primarily concern nuclear materials and facilities. While the authority includes condemnation power, this must be exercised within constitutional bounds of “public use” under the Fifth Amendment’s Takings Clause. The “or otherwise” language has been interpreted to include methods like easements and rights-of-way, but not unlimited acquisition methods. The authority requires property acquisition be “necessary or desirable” to accomplish statutory purposes, a standard that provides some flexibility but isn’t unlimited. The power is subject to general appropriations limits and other fiscal constraints.
Courts have generally deferred to DOE’s determination of necessity under this provision, but have required clear connection to statutory purposes. The provision’s scope primarily covers physical property and related rights, not broader regulatory authorities or commercial activities beyond those specifically authorized elsewhere in the Act.
3. Use the Defense Production Act to streamline permitting and resolve supply chain issues
The Defense Production Act (DPA) grants the president broad authority to intervene in the economy to ensure the supply of materials deemed essential to national defense, which has been interpreted to include critical infrastructure and emerging technologies.
Although the development of advanced AI computing infrastructure is a national security priority, companies seeking to build large-scale data centers face significant regulatory hurdles, including environmental review processes under the National Environmental Policy Act (NEPA).
Why does NEPA matter for the AI data center buildout?
The National Environmental Policy Act (NEPA) requires that all “major federal actions” undergo an environmental analysis prior to approval. This includes any data center and associated energy infrastructure projects that intersect federal lands or receive federal funding. Over time, the scope of NEPA has evolved to a point where it imposes massive costs on the federal government, drags new energy projects out for years, and generates uncertainty that stops many projects from ever getting off the ground.79
The DPA offers a pathway for the executive branch to expedite the buildout of critical AI infrastructure, by offering creative financing structures that share the cost of uncertainty and by streamlining the procedural permitting laws that inflate it.
AI’s relevance for national defense
AI has a long history of importance for national defense. In 1991, DARPA released the “Dynamic Analysis and Replanning Tool” (DART) for optimizing defense logistics. Within four years, cost reductions from DART optimizations had, by themselves, paid for all DARPA AI research from the previous 30 years.80 More recently, machine learning-based systems have been deployed by the DOD for intelligence, surveillance, sensing, and navigation for autonomous vehicles.81
Next-generation AI promises to transform national defense by swiftly processing data from the multitude of sensor, drone, and communication streams generated on a modern battlefield, and using it to identify threats, predict enemy movements, and coordinate operations in real time.
Because the most powerful AI systems are developed by private firms, many of DOD’s future capabilities will likely come from models trained in data centers operated by private firms. A few examples: model developer OpenAI recently announced a partnership with Anduril to bring its advanced models directly to the battlefield;82 data annotation company Scale AI has built a version of Meta’s Llama 3, dubbed “Defense Llama,” to assist with military planning and decision-making;83 and intelligence software company Palantir is building platforms that enable military organizations to integrate a variety of advanced commercial models into their operations.84
Given the clear importance of AI to national defense, there are many ways the DPA can be used to ensure a secure supply of AI capabilities to support national security.
Title I – Prioritization
The Defense Production Act’s Title I authority gives the federal government tools to resolve AI data center supply chain issues expediently. The President can require suppliers to prioritize contracts deemed essential to national defense. If gas turbines, Rankine-cycle turbines, high-voltage transformers, or switchgear for data centers were designated as “rated orders,” vendors would have to fulfill those contracts before other commercial obligations. This prioritization authority could significantly compress timelines for data center power infrastructure projects, which currently face multi-year backlogs for essential equipment.
The scope of Title I extends throughout the supply chain, allowing prioritization not just of end products but also of component parts and raw materials. This means that beyond expediting delivery of completed transformers or turbines, DPA authority could accelerate production of silicon steel for transformer cores, specialized cooling systems, and other critical subcomponents.
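To illustrate the mechanics, the toy Python sketch below models a vendor backlog in which rated orders are fulfilled ahead of all commercial orders. The item names, fields, and queue logic are invented for illustration; they are not drawn from the DPA or from any vendor’s actual order systems.

```python
from dataclasses import dataclass

@dataclass
class Order:
    item: str
    rated: bool   # True if designated a DPA Title I "rated order"
    placed: int   # arrival sequence in the vendor's backlog

def fulfillment_queue(backlog: list[Order]) -> list[Order]:
    # Rated orders sort ahead of all commercial orders;
    # within each class, first-come, first-served.
    return sorted(backlog, key=lambda o: (not o.rated, o.placed))

backlog = [
    Order("utility gas turbine", rated=False, placed=1),
    Order("HV transformer for data center", rated=True, placed=2),
    Order("commercial switchgear", rated=False, placed=3),
]
for order in fulfillment_queue(backlog):
    print(order.item)
# HV transformer for data center
# utility gas turbine
# commercial switchgear
```

The point is simply that a rated order jumps ahead of every unrated commercial order regardless of when it was placed, and the same logic would propagate down the supply chain to components and raw materials.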
Title III – Lending with streamlined permitting and security requirements
To encourage private sector investment in defense capabilities, Title III of the DPA empowers the president to provide financial incentives and support for the expansion of productive capacity in strategically important industries.85 That authority has not yet been utilized to support AI data center development and security. Since the DPA defines “national defense” as “programs for military and energy production or construction,” under Title III, the President may use a suite of tools including lending, purchase agreements, and flexible public-private partnerships. With the DPA, the federal government could design the appropriate lending arrangements and permitting assistance necessary to support energy deployment for AI data centers. This assistance should be offered alongside requirements for firms to invest in security measures that protect the AI technologies they are developing from sophisticated attackers and coordinate with the US government on their implementation.
Federal infrastructure and technology investments routinely include robust safety and security conditions, reflecting the government’s commitment to protecting critical assets and public interests. The template for DPA Technology Investment Agreements (TIAs) includes an entire section of standard requirements that recipients of Title III funding must comply with.86 Furthermore, the principle of attaching security conditions to funding for infrastructure can be seen in a host of other federal programs.
The Grid Resilience Formula Grants under the Infrastructure Investment and Jobs Act exemplify this approach by requiring that “the resilience investments under the Program should be in alignment with cybersecurity standards and best practices, including the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP) standards and National Institute of Standards and Technology (NIST) Cybersecurity Framework.”87 This represents a direct link between federal support and enhanced grid protection. Attaching cybersecurity requirements to grid modernization funding is hardly new. Under the 2009 Smart Grid Investment Grants program, all grantees of the program were required to “implement comprehensive cybersecurity plans and build cybersecurity into their policies.”88
The CHIPS and Science Act provides a recent example in the semiconductor sector, where security conditions extend beyond cybersecurity to encompass national security interests. Applicants for CHIPS funding must certify that they will not expand certain advanced manufacturing or research activities in “countries of concern” (e.g., China and Russia) and that they will abide by American export control laws.89 They must also demonstrate how facilities constructed or upgraded with CHIPS funds will protect proprietary manufacturing processes and sensitive research data.90 The Commerce Department can require regular reporting on security measures and has authority to claw back funds or impose penalties if recipients fail to adhere to the conditions.91
These examples demonstrate how federal funding can serve as a lever to enhance security across critical sectors. By making security requirements a precondition for investment, the government ensures that modernization and expansion efforts automatically incorporate protective measures. This approach has evolved from basic safety standards to encompass sophisticated cybersecurity protocols and national security considerations, reflecting the changing nature of threats to critical infrastructure and technology. For an overview of security measures relevant to the AI data center buildout, see “Launch new initiatives to radically improve the security of American AI infrastructure” below.
The DPA’s Title III lending authority provides pathways to streamline or modify environmental permitting requirements that typically govern infrastructure development.92 Given the necessity of protecting the nation’s security, activities undertaken under the Act would likely receive a strong degree of deference from the federal judiciary. Weinberger v. Catholic Action of Hawaii established an important precedent regarding NEPA compliance in national security contexts: the ruling indicates that courts may limit remedies for NEPA violations when national security concerns are present. This precedent provides critical flexibility for urgent national priority projects to proceed even when NEPA compliance may be incomplete. The limitation of remedies means that such activities would likely avoid the “litigation doom loop” that delays and adds costs to projects.93
National Environmental Policy Act (NEPA) emergency provisions
Agencies exercising DPA authorities could utilize NEPA’s emergency circumstances provision (40 C.F.R. § 1506.11) to implement alternative environmental review arrangements for urgent projects. This process requires consultation with the Council on Environmental Quality (CEQ) but allows agencies to proceed with actions that may have significant environmental impacts without following the conventional, time-intensive NEPA review process. This authority has historical precedent, particularly in national security contexts, as evidenced by Department of Defense operations during the 1990 Gulf War. The provision enables agencies to respond rapidly to urgent national priorities while maintaining a basic level of environmental oversight through CEQ consultation.
National security considerations and classified information
Leverage NEPA’s robust protections for classified and sensitive actions under 40 C.F.R. § 1507.3(d) to restrict public access to environmental assessments and impact statements addressing classified proposals. This authority is further strengthened by Freedom of Information Act (FOIA) Exemption 1 and the State Secrets Privilege, as established in Kasza v. Browner. These legal protections enable agencies to shield sensitive project details from public disclosure, effectively limiting the ability of potential opponents to challenge projects through litigation, as the details of the NEPA review remain confidential.
Endangered Species Act national security exemption
Invoke Section 7(j) of the Endangered Species Act by securing a determination from the Secretary of Defense that the project is necessary for national security. This provision is particularly powerful as it requires no discretionary review and allows projects to proceed regardless of species impacts when deemed essential for defense purposes.
DPA’s “Without Regard Clause”
Title III lending can be exercised “without regard to the limitations of existing law,” potentially allowing agencies to bypass or modify regulatory requirements that could delay critical projects.94 Courts have generally interpreted similar clauses in other statutes to grant broad discretion to override conflicting legal requirements. While the scope of this override power is not unlimited, it likely extends to procedural requirements like those in NEPA when exercising core DPA authorities. The DPA may also provide the authority to streamline or waive environmental review — agencies might use DPA authority to create a streamlined environmental assessment process for qualifying data center projects, or to waive certain NEPA requirements altogether for high-priority facilities. This would be a break from current practice — DPA projects have required NEPA reviews to date, and federal regulations require it. It could also allow, if necessary, the federal government to preempt state and local restrictions that could impede data center development, such as zoning restrictions or state-level environmental regulations.
These various legal authorities provide powerful tools for expediting essential projects while maintaining appropriate environmental oversight. The key lies in thoughtful application that recognizes both the urgency of national security needs and the importance of environmental stewardship.
4. Establish categorical exclusions to NEPA that accelerate federal financing
Because Congress has not significantly reformed permitting, the Biden Administration used categorical exclusions from NEPA to reduce review timelines for projects with minimal environmental impacts from months or years to weeks. In fiscal year 2022, over 99.5% of federal highway projects used categorical exclusions.95 Importantly, a categorical exclusion from NEPA does not mean a project escapes environmental regulation — NEPA is a procedural law with no substantive standards.96 Any actions categorically excluded are still subject to the substantive environmental laws of the United States, like the Clean Water Act, and to the oversight of regulators like the Nuclear Regulatory Commission. Categorical exclusions are also significantly less costly and time-consuming than other forms of review under NEPA (environmental assessments and environmental impact statements).97
Absent reform in Congress, the executive branch can target categorical exclusions to minimize the regulatory burden on AI data centers and their energy sources.
Aside from delays created by the litigation doom loop, the NEPA process can stall investment for activities that have negligible or nonsignificant environmental impacts, such as material acquisition or site characterization. Because project financing timelines rarely align with lengthy NEPA reviews, developers face a difficult choice: either push forward and incur significant uncertainty, or delay project development at great cost.
One example is how the NEPA review process delays the disbursement of loan proceeds for eligible project costs. To keep development on schedule, developers must incur some eligible project costs well before an agency can disburse funds under NEPA. As a result, the uncertainty associated with those cost activities falls wholly on the developer. Consider a hypothetical scenario where a company seeks a Title XVII innovative clean energy loan for a small modular nuclear reactor project: it would need to invest hundreds of millions of dollars in preliminary work — including design, siting, permitting, and Nuclear Regulatory Commission approvals — before receiving any LPO funding. Some of these costs are eligible project costs for an LPO loan, but the funds cannot be disbursed before full NEPA approval is completed, which is functionally tied to NRC approval. Therefore, while some of these costs (and the uncertainty associated with them) are intended to be shared with LPO, this procedural impediment prevents it.
Under the current structure, developers must front a range of expenses that technically qualify as eligible LPO costs, yet cannot be reimbursed until after NEPA approval. If the regulatory process takes longer than expected or fails to proceed, these costs — including, for example, early materials acquisition — remain solely on the developer. In nuclear projects, critical components may need to be purchased years in advance to ensure they are ready by the time construction can begin, compounding the financial risk. Even though these activities often involve no physical environmental impacts (like design) or relatively minor intrusions (such as drilling boreholes), the equity investor still shoulders full responsibility if the project does not ultimately materialize. This heightened exposure to uncertainty undermines private sector enthusiasm and impedes project development.
A more balanced approach could involve DOE issuing categorical exclusions for activities that don’t have material environmental impacts. In essence, the loan would be structured like a milestone-based contract, where funding is released in tranches as specific project benchmarks are met. This would distribute risk more equitably between the private sector and DOE, particularly during the crucial design phase. Such a restructuring would make the program more attractive to potential investors and developers while maintaining appropriate oversight and environmental protection standards.
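To make the tranche mechanics concrete, here is a minimal Python sketch of milestone-based disbursement. The milestones, fractions, and loan size are hypothetical, invented for illustration; this is not a description of any actual LPO loan structure.

```python
from dataclasses import dataclass

@dataclass
class Milestone:
    name: str
    tranche: float    # fraction of total loan released when complete
    complete: bool

def disbursable(loan_total: float, schedule: list[Milestone]) -> float:
    """Funds releasable so far: each completed benchmark unlocks its tranche."""
    assert abs(sum(m.tranche for m in schedule) - 1.0) < 1e-9
    return loan_total * sum(m.tranche for m in schedule if m.complete)

# Hypothetical SMR loan schedule; milestone names and fractions are invented.
schedule = [
    Milestone("design complete", 0.15, complete=True),
    Milestone("site characterization", 0.10, complete=True),
    Milestone("NRC license issued", 0.25, complete=False),
    Milestone("construction start", 0.50, complete=False),
]
print(disbursable(1_000_000_000, schedule))  # 250000000.0
```

The design point is that risk transfers incrementally: the developer is reimbursed for early-stage, low-impact work as it is completed, rather than fronting every eligible cost until the final NEPA approval.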
Such an arrangement is not without precedent: it is analogous to existing “routine financing” categorical exclusions where no physical activity is conducted. For example, the Rural Utilities Service (RUS) at USDA has a CE for routine financial actions, including financial assistance for the purchase, transfer, lease, or other acquisition of real or personal property when no or minimal change in use is reasonably foreseeable; financial assistance for operating (working) capital for an existing operation to support day-to-day expenses; sale or lease of Agency-owned real property; and additional financial assistance for cost overruns.98 DOI has a CE that excludes routine financial transactions, including financial assistance.99 In each case, the agency has recognized that categorical exclusions are appropriate when there are no physical activities associated with the support.
5. Launch new initiatives to radically improve the security of American AI infrastructure
An AI data center czar should also work with federal agencies to level up security for AI data centers. Alongside moves to help finance next-generation energy technologies and to drastically reduce permitting and build times, the federal government should require the builders, operators, and users of the next generation of AI data centers to invest in sufficient supply chain, personnel, hardware, and network security to defend against the most sophisticated attackers, minimizing the chance that breakthrough AI systems built in America can be stolen and misused, and protecting national security. This should include complying with relevant technical standards, best practices, and certification processes such as:
- DOD’s Cybersecurity Maturity Model Certification (CMMC) Program (Level 3), a certification process for ensuring defense contractors have implemented sufficient security measures to safeguard controlled unclassified information. Level 3 outlines measures needed to protect the most sensitive information against advanced persistent threats, or APTs (typically highly sophisticated attackers such as state-sponsored hacking groups).100
- The “Optimal” level of CISA’s Zero Trust Maturity Model, which defines best practices for controlling access to sensitive data. Elements that are key to the optimal level include automated monitoring and reporting, and a principle of “least privilege access” across the organization: a user or entity should have access only to the specific data, resources, and applications needed to complete a required task (see the sketch after this list).101
- The “FedRAMP High Impact Level” security baseline, which is used to assess the compliance of cloud computing services with a set of security assessment, authorization, and monitoring standards intended to protect highly sensitive unclassified data.102 The specific standards are derived from NIST’s SP 800-53 standards for security and privacy controls for information systems and organizations.103
- NIST’s Secure Software Development Framework (SSDF), which provides best practices for minimizing software-level vulnerabilities across the software development lifecycle.104
- NIST’s AI-focused addendum to the SSDF, which adds best practices relevant to dual-use foundation models.105
- NIST’s SP 800-171 standard for protecting controlled unclassified information.106
- NIST’s FIPS 140-3 standard for designing and operating computer hardware that processes and protects sensitive data.107
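As a concrete illustration of the “least privilege access” principle referenced in the Zero Trust item above, here is a minimal Python sketch of a default-deny allowlist. The principals, resources, and actions are invented for illustration and do not correspond to CISA’s model or any real system.

```python
# Permissions are an explicit allowlist of (resource, action) pairs;
# anything not granted is denied by default. All names are invented.
ACCESS_POLICY: dict[str, set[tuple[str, str]]] = {
    "training-job-runner": {("model-weights", "read"), ("gpu-cluster", "submit")},
    "facilities-contractor": {("hvac-telemetry", "read")},
}

def is_allowed(principal: str, resource: str, action: str) -> bool:
    return (resource, action) in ACCESS_POLICY.get(principal, set())

assert is_allowed("training-job-runner", "model-weights", "read")
assert not is_allowed("facilities-contractor", "model-weights", "read")  # default deny
```

The key property is that access must be granted explicitly and narrowly; an unknown principal, resource, or action falls through to denial rather than to a permissive default.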
While these standards could provide a useful baseline, they do not constitute a complete and well-targeted set of security measures for protecting AI data centers against the most sophisticated attackers. Therefore, the US government should flexibly assist the builders, operators, and users of advanced AI data centers directly, in areas such as:
- Creating points of contact in the intelligence community to advise the builders and operators of advanced AI data centers on vulnerabilities to exfiltration, sabotage, and denial operations.
- Creating a public-private working group focused on AI infrastructure security, including relevant consortia such as the Frontier Model Forum.108
- Developing a counterintelligence playbook and monitoring capability to actively defend against and adapt to nation-state-level attacks on advanced AI data centers.
- Assisting with the design of secure data centers and inter-data center network infrastructure. DOE and DOD have experience designing secure clusters. NSA can help define threat models and red-team designs.
- Instructing NIST to identify gaps between the level of security required to adequately defend today’s advanced AI hardware devices and networks against sophisticated attackers, and the measures within current security standards for computer hardware, such as FIPS 140-3.109
- Establishing a red team to conduct penetration testing of advanced AI data centers. DOD, NSA, CIA, FBI, CISA, and USCYBERCOM could help regularly and rigorously test AI infrastructure for vulnerabilities, using the most sophisticated known attacks.
- Developing a timely and effective background screening process for roles that involve access to sensitive hardware or data at advanced AI data centers. FBI, NSA, CIA, DHS, and DOD could assist with rigorous background screening of all personnel who will have access to sensitive information or hardware produced by or hosted in a benefiting project, and monitoring for insider threats.
- Improving supply chain security by having intelligence agencies (DOD, NSA, FBI, CIA) help track shipments, provide physical security for shipments, and screen devices entering data centers to protect against supply chain attacks.
- Launching new research programs at the DOE National Laboratories, DARPA, and IARPA, to radically accelerate the state-of-the-art in security for AI hardware devices, including cluster-scale confidential computing and protection from invasive and non-invasive physical attacks.
For more discussion of security considerations and measures, see the security section of the previous piece in this series.
-
A cluster is a group of interconnected computers that can process the same workload, e.g., training an AI model. Clusters at this scale will likely span multiple data centers. For an overview of data center buildout plans within industry, see Tim Fist and Arnab Datta, “How to Build the Future of AI in the United States,” Institute for Progress, October 23, 2024. See also the recent “Stargate” project announcement: OpenAI and SoftBank, “Announcing The Stargate Project,” OpenAI, January 21, 2025.
-
Sharveya Parasnis, “Bank of China Announces Investments Worth 1 Trillion Yuan to Develop AI Industry,” MediaNama, January 28, 2025
-
This has already begun to happen: Will Wade, “AI Needs So Much Power That Old Coal Plants Are Sticking Around,” Bloomberg, January 25, 2024. For data on retiring coal plants, see: U.S. Energy Information Administration, “Preliminary Monthly Electric Generator Inventory (based on Form EIA-860M as a supplement to Form EIA-860),” January 24, 2025.
-
Jenny Martos, “Leading three manufacturers providing two-thirds of turbines for gas-fired power plants under construction,” Global Energy Monitor, August 2024; DER Task Force (@DER_Task_Force), “GE Vernova reportedly doesn't need a single gas turbine order from a utility for the next 10 years and they'd still be sold out selling to data centers,” X, April 5, 2023; Interviews with two gas turbine industry experts suggest similar supply chain constraints throughout the industry.
-
Chirag Lala & Yakov Feygin, “Reacting to Uncertainty.” Center for Public Enterprise, August 16, 2024.
-
Arnab Datta & Skanda Amarnath, “How Public Policy Accelerated the Shale Revolution.” IFP & Employ America, November 8, 2023.
-
U.S. Energy Information Administration. “China continues rapid growth of nuclear power capacity.” Today in Energy, n.d. Accessed January 26, 2025.
-
Measured from start of clinical testing to approval.
-
Gaurav Agrawal et al., “Fast-forward: Will the Speed of COVID-19 Vaccine Development Reset Industry Norms?,” McKinsey & Company, May 13, 2021.
-
Exec. Order No. 14141, “Advancing United States Leadership in Artificial Intelligence Infrastructure,” 90 Fed. Reg. 5469 (Jan. 17, 2025).
-
Sebastian Moss, “Microsoft & OpenAI Consider $100bn, 5GW ‘Stargate’ AI Data Center: Report.” DataCenterDynamics, n.d. Accessed January 26, 2025; “BlackRock, Global Infrastructure Partners, Microsoft, and MGX Launch New AI Partnership to Invest in Data Centers and Supporting Power Infrastructure.” Microsoft News, September 17, 2024; William Wade, “Microsoft to Pay Hefty Price for Three Mile Island Clean Power.” Bloomberg, September 25, 2024; “The Golden Opportunity for American AI.” Microsoft On the Issues,” Microsoft (blog), January 3, 2025.
-
OpenAI and SoftBank, “Announcing The Stargate Project”
-
Tim Fist and Arnab Datta, “How to Build the Future of AI in the United States.”
-
Japan’s power generation averaged 115 GW in 2023. In the previous piece in this series, we estimated global AI data center power consumption growth from 2024 to 2030 as 130 GW, based on AI chip production forecasts. Id.
-
Though some projects have made public commitments to building in the US: OpenAI and SoftBank, “Announcing The Stargate Project”
-
“2024 Sustainability Report,” Meta; “2024 Environmental Sustainability Report,” Microsoft; “2024 Environmental Report,” Google.
-
Last year, the think tank RAND released a report comparing the security practices of AI developers and the measures that would be required to defend against a variety of attackers. The report concludes that protecting AI models against sophisticated attackers will require investments in security “well beyond what the default trajectory appears to be.” Sella Nevo et al., “Securing AI Model Weights,” RAND, May 30, 2024. For a list of related security incidents and case studies, see the previous piece in this series.
-
Energy project development on federal land automatically triggers review under the National Environmental Policy Act (NEPA), which has historically slowed the pace of new energy projects substantially. Aidan Mackenzie, “How NEPA Will Tax Clean Energy,” Institute for Progress, July 25, 2024.
-
Per the EO, “the construction of AI infrastructure must be matched with new, clean electricity generation resources.” Exec. Order No. 14141, “Advancing United States Leadership in Artificial Intelligence Infrastructure,” 90 Fed. Reg. 5469 (Jan. 17, 2025).
-
Tim Fist and Arnab Datta, “How to Build the Future of AI in the United States.”
-
Specifically, of all models released since 2020 that were the most compute-intensive model at the time of their release, around 70% were developed by US-based firms, according to publicly available data from “Data on Notable AI Models,” Epoch AI. Accessed October 13, 2024.
-
U.S. Energy Information Administration, "Frequently Asked Questions (FAQs): What is U.S. electricity generation by energy source?" EIA, n.d. Accessed January 26, 2025.
-
Brooke Masters, Antoine Gara, James Fontanella-Khan, and Stephen Morris, “BlackRock and Microsoft plan $30bn fund to invest in AI infrastructure,” Financial Times, September 17, 2024; Sebastian Moss, “Scala AI City: Scala pitches $50bn Brazil data center campus of up to 4.7GW.”
-
Dylan Patel, Daniel Nishball and Jeremie Eliahou Ontiveros, “AI Datacenter Energy Dilemma – Race for AI Datacenter Space.” SemiAnalysis (blog), March 13, 2024.
-
The list included historical allies such as Saudi Arabia and Jordan. Hanna Dohmen and Jacob Feldgoise. “A Bigger Yard, A Higher Fence: Understanding BIS’s Expanded Controls on Advanced Computing Exports.” Center for Security and Emerging Technology (CSET). December 4, 2023; See also: Bureau of Industry and Security (BIS), U.S. Department of Commerce. “Commerce Updates Validated End-User (VEU) Program Eligible Data Centers to Bolster U.S.” Press release, n.d. Accessed January 26, 2025.
-
Bureau of Industry and Security, Department of Commerce, "Framework for Artificial Intelligence Diffusion," Federal Register 90, no. 10 (January 15, 2025): 4544–4584.
-
Dylan Patel and Gerald Wong, “GPT-4 Architecture, Infrastructure, Training Dataset, Costs, Vision, MoE,” SemiAnalysis, July 10, 2023.
-
Anissa Gardizy and Amir Efrati, “Microsoft and OpenAI Plot $100 Billion Stargate AI Supercomputer,” The Information, March 29, 2024.
-
Ryan Smith, “Nvidia Blackwell Architecture and B200/B100 Accelerators Announced: Going Bigger with Smaller Data.” AnandTech, n.d. Accessed January 26, 2025.
-
The average American household consumes around 11,000 kWh of electricity per year, requiring an average of 1,200 W of power.
-
Dylan Patel and Daniel Nishball, “GPU Cloud Economics Explained.” SemiAnalysis, December 4, 2023. Accessed January 26, 2025.
-
We use an amortization period of 4 years based on reports of the average lifetime of data center GPUs: “Typical Lifespan of an NVIDIA Data Center GPU,” Massed Compute, Accessed October 2, 2024; George Ostrouchov et al., “GPU Lifetimes on Titan Supercomputer: Survival Analysis and Reliability,” SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, February 22, 2021.
-
Assuming an industrial electricity price of 9 cents per kWh, a power utilization rate of 80%, and power overheads of 1.25x server power consumption, an H100 GPU with 700 W of peak power consumption costs 700 * 0.8 * 1.25 * 24 * 365 * (0.09 / 1000) = $552 for a year of operation; “NVIDIA DGX H100 | Datasheet,” NVIDIA, Accessed October 12, 2024; Dylan Patel, Daniel Nishball, and Jeremie Eliahou Ontiveros, “AI Datacenter Energy Dilemma - Race for AI Datacenter Space.”
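Spelled out with units, that calculation is:

$$
\underbrace{700~\mathrm{W}}_{\text{peak power}}
\times \underbrace{0.8}_{\text{utilization}}
\times \underbrace{1.25}_{\text{overhead}}
\times \underbrace{8{,}760~\mathrm{h/yr}}_{24 \times 365}
\times \frac{\$0.09}{1{,}000~\mathrm{Wh}}
\approx \$552 \text{ per year}
$$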
-
Evan Halper, “The AI Data Centers Power Dilemma.” The Washington Post, March 7, 2024.
-
“Is America’s Power Shortage the Fault of Data Centers?” LowEndBox (blog), n.d. Accessed January 26, 2025.
-
“Queued Up… But in Need of Transmission,” US Department of Energy, April 2022.
-
Josh Kurtz, “Maryland Data Center Conference Gets Caught Up in Power Line Controversy.” WTOP, August 2024. Accessed January 26, 2025.
-
Julian Wettengel, “Turbine Production Bottleneck Threatens Plans for New German Gas Power Plants – Industry.” Clean Energy Wire, n.d. Accessed January 26, 2025.
-
“Biden-Harris Administration Finalizes Suite of Standards to Reduce Pollution from Fossil Fuel.” U.S. Environmental Protection Agency (EPA), n.d. Accessed January 26, 2025.
-
Fred Barbash and Deanna Paul, “The Real Reason President Trump Is Constantly Losing in Court.” The Washington Post, March 19, 2019.
-
Tsing Mui, “NVIDIA Switches to 1-Year Cadence for New GPUs, New CPUs, and More,” The FPS Review, May 27, 2024; “Typical Lifespan of an NVIDIA Data Center GPU,” Massed Compute; George Ostrouchov et al., “GPU Lifetimes on Titan Supercomputer: Survival Analysis and Reliability.”
-
High efficiency combined-cycle gas turbines have higher capital costs but save on operating expenses and can pay back quickly if natural gas prices are stable and low. Recent studies estimate a payback period for natural gas turbines of 9-17 years. However, while the US benefits from low natural gas prices at present, export policy could raise domestic prices closer to global prices, lengthening the payback period. Bin Miao and Siew Hwa Chan, “The economic feasibility study of a 100-MW Power-to-Gas plant,” International Journal of Hydrogen Energy, Volume 45, Issue 18, April 1, 2020, Page 10977.
-
Tim Fist and Arnab Datta, “How to Build the Future of AI in the United States.”
-
Arnab Datta, “Hot Rocks: Commercializing Next-Generation Geothermal Energy,” IFP, October 30, 2023.
-
U.S. Department of Energy, “Advanced Nuclear – Pathways to Commercial Liftoff,” Liftoff Reports, October 2024.
-
Brian Martucci, “Congress passes Russian uranium import ban, unlocking $2.7B to expand US nuclear fuel production,” Utility Dive, May 6, 2024.
-
Bolisetti, C., Abou Jaoude, A., Hanna Bishara Hanna, B. N., Larsen, L. M., Zhou, J., & Shirvan, K. (2024), “Quantifying Capital Cost Reduction Pathways for Advanced Nuclear Reactors,” (No. INL/RPT-24-77667-Rev000), Idaho National Laboratory (INL).
-
U.S. Department of Energy, “Carbon Management – Pathways to Commercial Liftoff,” Liftoff Reports, October 2024.
-
Wilson Ricks and Jesse D. Jenkins, “Pathways to National-Scale Adoption of Enhanced Geothermal Power Through Experience-Driven Cost Reductions,” Princeton University ZERO Lab, September 20, 2024.
-
U.S. Department of Energy, “Pathways to Commercial Liftoff: Next-Generation Geothermal Power,” March 2024.
-
Arnab Datta and Skanda Amarnath, “How Public Policy Accelerated the Shale Revolution,” Institute for Progress, November 8, 2023.
-
A power purchase agreement or PPA is a contract between the generator of power and the buyer, setting a fixed price for the buyer to purchase energy from the generator across a certain time period.
-
ExxonMobil, “ExxonMobil invests $1 billion per year in energy research, emerging technologies,” ExxonMobil, September 18, 2018.
-
Brian Potter, “How We Got the Lithium-Ion Battery,” Construction Physics, November 29, 2024; Brian Potter, “What Learning by Doing Looks Like,” Construction Physics, December 12, 2024.
-
David Yeh, “From FOAK to NOAK,” CTVC, April 19, 2024.
-
Fervo Energy, “Fervo Energy Announces Technology Breakthrough in Next-Generation Geothermal,” Fervo Energy, July 18, 2023.
-
“Helion Announces World’s First Fusion PPA with Microsoft.” Helion Energy, n.d. Accessed January 26, 2025.
-
William Wade, “Microsoft to Pay Hefty Price for Three Mile Island Clean Power.” Bloomberg, September 25, 2024.
-
Lisa Martine Jenkins, “Can Google’s clean transition tariff remake utility incentives?” Latitude Media, n.d. Accessed January 26, 2025.
-
Zachary Skidmore, “Meta, QTS Sign Solar PPA with Avangrid, PGE in Oregon.” DataCenterDynamics, n.d. Accessed January 26, 2025.
-
William Wade, “Microsoft to Pay Hefty Price for Three Mile Island Clean Power.”
-
See footnote 32.
-
Mitch Green, “SMRs and the Public Sector.” Center for Public Enterprise, n.d. Accessed January 26, 2025.
-
“Amazon Nuclear Small Modular Reactor Net Carbon Zero.” About Amazon, n.d. Accessed January 26, 2025; “Amazon Signs Deals to Invest in Nuclear SMRs to Power Data Centers.” DataCenterDynamics, n.d. Accessed January 26, 2025.
-
"Envusa Energy completes project finance for 520MW of wind and solar projects in South Africa," Anglo American, February 29, 2024; Intergovernmental Forum on Mining, Minerals, Metals, and Sustainable Development (IGF). "Innovation in Mining: ISSD Report 2018," Accessed January 26, 2025.
-
Energy Technologies Area, Lawrence Berkeley National Laboratory. "Repurposing Coal Assets," n.d. Accessed January 26, 2025.
-
“Shippingport Atomic Power Station.” Wikipedia, n.d. Accessed January 26, 2025.
-
“Nuclear Power in the United States.” Wikipedia, n.d. Accessed January 26, 2025.
-
"First new U.S. nuclear reactor in almost two decades set to begin operating,” U.S. Energy Information Administration, June 14, 2016; Akela Lacy, "South Carolina Spent $9 Billion to Dig a Hole in the Ground and Then Fill It Back In," The Intercept, February 6, 2019; Jeff Amy, "Georgia nuclear rebirth arrives 7 years late, $17B over cost," Associated Press, May 25, 2023.
-
U.S. Energy Information Administration. “China continues rapid growth of nuclear power capacity.” Today in Energy, n.d. Accessed January 26, 2025.
-
Measured from start of clinical testing to approval.
-
Gaurav Agrawal et al., “Fast-forward: Will the Speed of COVID-19 Vaccine Development Reset Industry Norms?,” McKinsey & Company, May 13, 2021.
-
The 1706 program at LPO is a $250 billion loan guarantee program to retool, repower, repurpose, or replace energy infrastructure that has ceased operations. The program is subject to a requirement that fossil fuel electricity generation projects must avoid, reduce, utilize, or sequester air pollutants and anthropogenic greenhouse gas emissions, or enable operating energy infrastructure to avoid, reduce, utilize, or sequester air pollutants or anthropogenic emissions of greenhouse gases. U.S. Department of Energy, “Energy Infrastructure Reinvestment,” Loan Programs Office, accessed January 30, 2025.
-
"The Hanford Site." Department of Energy, n.d. Accessed January 26, 2025.
-
Ethan Howland, “DOE proposes easing environmental reviews for certain storage, solar, transmission projects.” Utility Dive, n.d. Accessed January 26, 2025.
-
The Fiscal Responsibility Act. Pub. L. No. 118-5, 137 Stat. 10 (2023).
-
Aidan Mackenzie, “How NEPA Will Tax Clean Energy,” Institute for Progress, July 25, 2024.
-
Antonio M. Lopez, Jerome J. Comello, and William H. Cleckner, “Machines, the Military, and Strategic Thought,” Military Review, Sep/Oct 2004 (Fort Leavenworth: US Department of Defense), 71–74; Sarah Reese Hedberg, “DART: Revolutionizing Logistics Planning,” IEEE Intelligent Systems 17, no. 3 (May 2002): 81–83.
-
Cheryl Pellerin, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” U.S. Department of Defense, July 21, 2017; Patrick Ferraris, "Aided Detection on the Future Battlefield." US Army; VADM Robert Sharp, “GEOINT: The Foundation of Intelligence.” 25 April 2022.
-
“Anduril Partners with OpenAI to Advance U.S. Artificial Intelligence Leadership and Protect U.S.” Anduril, n.d. Accessed January 26, 2025.
-
“Defense Llama: The LLM Purpose-Built for American National Security.” Scale (blog), n.d. Accessed January 26, 2025.
-
“Palantir AIP for Defense.” Palantir, n.d. Accessed January 26, 2025.
-
50 U.S.C. § 4532.
-
“Appendix II - Sample TIA, Defense Production Act Title III Expansion of Domestic Production Capability and Capacity - Funding Opportunity Announcement (FOA).” SAM.gov, n.d. Accessed January 26, 2025.
-
U.S. Department of Energy, “BIL – Preventing Outages and Enhancing the Resilience of the Electric Grid Formula Grants to States and Indian Tribes,” DE-FOA-0002736, April 17, 2024.
-
U.S. Department of Energy, “Smart Grid Investment Grant Program Final Report: Executive Summary,” December 2016.
-
U.S. Department of Commerce, “Frequently Asked Questions: Preventing Improper Use of CHIPS Act Funding,” National Institute of Standards and Technology, April 24, 2024.
-
U.S. Department of Commerce, “CHIPS Technology Protection Guidebook,” National Institute of Standards and Technology, March 11, 2024; U.S. Department of Commerce, “CHIPS Incentives Program: Approach to National Security,” National Institute of Standards and Technology, September 19, 2023.
-
U.S. Department of Commerce, “Frequently Asked Questions: Preventing Improper Use of CHIPS Act Funding,” National Institute of Standards and Technology, April 24, 2024.
-
For useful discussion on exemptions from procedural environmental reviews, see Thomas Hochman, “So You Want to Ignore an Environmental Law,” Green Tape Blog. Substack. Accessed January 25, 2025.
-
Arnab Datta and James Coleman, “We Must End the Litigation Doom Loop,” Institute for Progress, May 29, 2024.
-
Dodge, Joel, Joel Michaels, Lenore Palladino, and Todd N. Tucker. "Progressive Preemption: How the Defense Production Act Can Override Corporate Extraction, Boost Worker Power, and Expedite the Clean Energy Transition." Roosevelt Institute, December 2022.
-
Brian Potter, Arnab Datta, and Alec Stapp, “How to Stop Environmental Review from Harming the Environment,” Institute for Progress, September 13, 2022.
-
While data is limited, a 2014 GAO report shows that while EISs took an average of 4.6 years to complete in 2012, categorical exclusions typically take just 1-2 days at DOE and the Office of Surface Mining. The Forest Service averaged 177 days for categorical exclusions versus 565 days for Environmental Assessments (EAs). Cost-wise, while EISs cost between $250,000-$2,000,000 and EAs between $5,000-$200,000, categorical exclusions are described as costing “much lower” than EAs, though specific cost data for categorical exclusions was not tracked by the agencies studied. DOE data showed EAs had median contractor costs of $65,000 versus $1,400,000 for EISs. U.S. Government Accountability Office, “National Environmental Policy Act: Little Information Exists on NEPA Analyses,” GAO-14-369, 2014.
-
7 CFR 1970.53(a).
-
43 CFR § 46.210(c).
-
Cybersecurity Maturity Model Certification (CMMC) Program, 89 Fed. Reg. 83,092 (Oct. 15, 2024) (to be codified at 32 C.F.R. pt. 170).
-
U.S. Department of Homeland Security, “Zero Trust Maturity Model Version 2.0,” Cybersecurity and Infrastructure Security Agency, April 2023.
-
FedRAMP, “Understanding Baselines and Impact Levels,” Federal Risk and Authorization Management Program, November 16, 2017.
-
National Institute of Standards and Technology, “Security and Privacy Controls for Information Systems and Organizations,” Special Publication 800-53 Revision 5, December 10, 2020.
-
Murugiah Souppaya, Karen Scarfone, and Donna Dodson, “Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities,” National Institute of Standards and Technology, February 2022.
-
Harold Booth et al., “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models: An SSDF Community Profile,” National Institute of Standards and Technology, July 2024.
-
Ron Ross and Victoria Pillitteri, “Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations,” National Institute of Standards and Technology, May 2024.
-
National Institute of Standards and Technology, “Security Requirements for Cryptographic Modules,” Federal Information Processing Standards Publication 140-3, March 22, 2019.
-
Frontier Model Forum, “Introducing the Frontier Model Forum,” July 26, 2023.
-
National Institute of Standards and Technology, “Security Requirements for Cryptographic Modules,” Federal Information Processing Standards Publication 140-3, March 22, 2019.