Executive summary
The United States leads the world in AI, but that leadership is hitting a hard constraint: the availability of electricity. We face a growing power shortfall, but even where power is technically available, some AI data centers face years of delay before they can be powered on.
One part of the problem is “interconnection”: the regulatory process governing how both generators of electricity and users of electricity (also known as “load”) connect to the grid. Unlike generator interconnection, which was the subject of important reforms in 2023, load interconnection remains governed by a patchwork of inconsistent utility practices that are buckling under unprecedented demand.
We recently responded to a request for comments from the Federal Energy Regulatory Commission (“FERC” or “Commission”) on an Advance Notice of Proposed Rulemaking (henceforth “ANOPR” or “Proposed Rulemaking”) on large load interconnection. This report provides context on the problem of large load interconnection and summarizes the comments we provided to FERC. We cover five main topics:
- Electricity policy matters for American AI leadership. The United States has the majority of the world’s supercomputers and the most advanced AI companies, contributing to hundreds of billions of dollars in domestic capital investment. These assets are enabling scientific breakthroughs and are important to national security. But without sufficient electricity to keep the next generation of AI R&D in America, we risk ceding this leadership to nations with the electricity to power it.
- Slow interconnection is a barrier to bringing new AI infrastructure online. The electrical grid is a precisely calibrated system where supply and demand must balance instantaneously. Before any new facility — whether a power plant or a large electricity user like an AI data center — can connect, engineers must study how that connection will affect grid stability under various stress scenarios. These studies determine what infrastructure upgrades (transformers, substations, transmission lines) are needed and who pays for them. This process is called “interconnection.” When it works well, it ensures reliability. When it does not, it becomes a years-long bottleneck that can delay or kill projects regardless of their merit.
- The interconnection problem is familiar. The same dysfunction that paralyzed electricity generator interconnection queues — speculative “phantom” projects, serial study processes, withdrawal cascades, and cost allocation disputes — is now afflicting load interconnection. FERC’s Order 2023 reforms began clearing the generator backlog; large loads need similar treatment.
- The solutions proposed by FERC are proven. Many of the principles described in the Proposed Rulemaking seek to adapt the Order 2023 framework to large loads, with adjustments: mandatory deposits to filter speculative projects, unified study processes for co-located generation and load, cost certainty through participant funding, and the option for developers to build their own interconnection facilities. These reforms would help provide the regulatory certainty developers need to finance the next generation of AI infrastructure.
- Demand flexibility is particularly powerful. The Proposed Rulemaking’s seventh Principle — accelerated pathway for curtailable loads — could transform interconnection timelines. A recent study shows that flexible connections can allow projects to come online three to five years faster than traditional interconnection, and with considerably less investment in generation and transmission. This “speed-to-power” pathway offers a three-way win: preserving US AI leadership, reducing costs for existing customers, and enhancing grid reliability by adding flexible rather than inflexible demand.
- The proposal falls short on security. AI data centers are not merely large electricity consumers; they are increasingly central to the economy, scientific discovery, and national defense — and this makes them high-value targets. Moreover, unlike traditional industrial loads, they are fully networked digital assets controlling gigawatts of power through software. If compromised, they could destabilize the grid. Principle 14’s mere “review” of security is insufficient. We urge FERC to establish a “Large Load Data Center Operator” registration category with robust cyber and physical security standards — and to condition expedited interconnection on adherence to these standards. This bargain would condition speed on security, benefiting developers, the broader US electricity grid, and those who rely on it.
FERC’s Proposed Rulemaking would help fix the broken interconnection process and unlock dormant grid capacity through demand flexibility. By also conditioning accelerated access on hardened security, FERC can help secure America’s AI leadership, reduce costs for ratepayers, and strengthen critical infrastructure against emerging threats.
Introduction
America’s ability to remain at the forefront of AI R&D is at risk due to a lack of available electricity. Part of the solution lies in reforming “interconnection” — the rules governing how new facilities are safely connected to the electrical grid. We recently responded to a request for comments by the Federal Energy Regulatory Commission (“FERC” or “Commission”) on a proposal to reform interconnection of “large loads” — major electricity users such as AI data centers. This article provides an introduction to the problem of large load interconnection and summarizes our comments on FERC’s Advance Notice of Proposed Rulemaking (“ANOPR” or “Proposed Rulemaking”).
AI leadership & electricity
The United States is the world leader in AI. The majority of the world’s supercomputers and the companies with the most advanced models are in the US. American AI firms’ domestic capital investments have reached hundreds of billions of dollars. AI has already resulted in Nobel Prize-winning research and is key for future scientific progress and innovation. AI is also increasingly critical to national security. Maintaining AI leadership allows the US to better align this powerful technology with US interests and values.
But as discussed in our Compute In America series, our ability to remain at the forefront of AI R&D is hitting the limits of our country’s electricity system. Projections indicate that to maintain the majority of AI data centers domestically, US power demand may need to grow by over 100 gigawatts by 2030, and that new capacity has to be efficiently connected to the facilities that need it. However, today, the interconnection process can take years. Without reform, we risk ceding AI leadership — and the economic, scientific, and security advantages that come with it — to nations with more coordinated industrial policies.
Problems with interconnection policy
The electricity system — sometimes called “the largest machine in the world” — is extremely complex, consisting of thousands of power plants, half a million miles of high-voltage transmission lines, and a further five million miles of low-voltage distribution lines.
The grid relies on alternating current (AC) whose frequency must be held at precisely 60 hertz system-wide. Even a slight deviation in one area can ripple outward, causing equipment damage or cascading outages. And transmission lines, substations, and transformers can carry only so much electricity due to physical limits; pushing too much power through a line causes it to heat up, sag, and eventually fail.
To make sure new projects — both generation and load — do not overrun these limits, regulators have required both users and generators of electricity to participate in complex, time-consuming studies before connecting to the grid. In these studies, engineers model the grid under various stress scenarios to determine if the new connection requires additional infrastructure investment, such as new transmission lines or substations.
Historically, interconnection policy was the domain of state and local utility commissions. However, FERC determined that transmission owners were using inconsistent, case-by-case interconnection practices to discriminate against independent power producers — using their control over the grid to impose arbitrary delays and costs on competitors, effectively favoring their own generation assets. This prompted FERC to issue Order No. 2003, preempting state control of generator interconnection, which it found was a “critical component of open access transmission service” and thus under federal jurisdiction.1 The Order established a single, standardized set of procedures for large generators (greater than 20 MW) to connect to the grid and established a standard contract and timeline for interconnection studies.
FERC Order No. 2023
Between 2003 and 2023, the interconnection system’s design became unworkable. Order No. 2003 established a “first-come, first-served” serial process, where engineers studied projects one by one. This created a perverse incentive: because study results were the only way to determine interconnection costs, developers flooded the queue with speculative “phantom” projects to fish for the most advantageous location. When one of these speculative projects inevitably withdrew, it triggered a “withdrawal cascade,” forcing utilities to restudy all subsequent projects and creating endless loops of delay. By the early 2020s, this inefficiency had caused the queue to swell to over 2,000 gigawatts — more than the installed generating capacity of the entire country — paralyzing new generation deployment.
Recognizing that the problems described above threatened the reliability of the bulk power system, FERC issued Order No. 2023, followed by a clarifying order, Order No. 2023-A, in early 2024. These orders represented the most significant overhaul of interconnection rules in decades, changing the interconnection rules for generators of electricity while leaving the rules for users of electricity unchanged.
The orders introduced three primary mechanisms to address the interconnection problems discussed above:
- Cluster Studies: Instead of studying projects one by one (serial processing), providers must now study them in annual groups (clusters). This allows engineers to identify network upgrades that benefit multiple projects simultaneously and split the costs among them.
- Stricter Financial Readiness: To enter a cluster, developers must pay study deposits, demonstrate they own or rent the property at issue in the proposed development, and pay withdrawal penalties.
- Firm Deadlines for Utilities: Historically, utilities were only required to use “reasonable efforts” to complete studies on time — a standard so vague it was effectively unenforceable. Order 2023 replaces this with firm deadlines and financial penalties for transmission providers that fail to complete studies on schedule.
Since the orders were issued, there are signs they are working, despite some adaptation costs. As developers anticipated the stricter barriers to entry, they flooded queues in late 2023 to be grandfathered in under the old, looser rules. Consequently, Lawrence Berkeley National Laboratory (LBNL) reported that the total capacity in US interconnection queues actually grew by nearly 30% in 2023, reaching almost 2,600 gigawatts.
However, since 2023 there has been a contraction of the interconnection queue for the first time in decades. According to LBNL data, total active queue capacity dropped from a peak of ~2,600 GW in 2023 to ~2,300 GW in 2024. This is unlikely to be because of less demand for new power generation. Rather, the new rules forced speculative projects out of the queue, evidence of the policy working.
Large loads face familiar interconnection problems
Today, the interconnection process for large loads is facing the same problems that the process for generators did in the last decade. Because there is no standardized federal framework for these loads, utilities are managing them inconsistently, often with the same outdated serial processes that failed for generators. As a result, problems that once affected the generation queue now threaten the future of American AI infrastructure:
- Inefficient Study Processes: While many utilities are moving toward “cluster studies” (assessing groups of projects together), others still study projects serially. If one project withdraws from the queue, the utility has to revisit its assumptions and potentially re-study all subsequent projects, leading to significant delay.
- Cost Allocation Disputes: Complex frameworks set out who pays for upgrades. In many jurisdictions, the “participant funding” model dictates that the project that first triggers an upgrade must pay for the full cost of the upgrade (even if other grid users benefit).
- Queue Clogging: Because the costs of interconnection are uncertain until the study is complete, there is an incentive for speculative projects to enter the queue. Developers flood the system with proposals to “fish” for a site with low upgrade costs, intending to withdraw the others. This volume overwhelms utility planning departments and slows down viable projects.
- Co-location inflexibility: Many jurisdictions do not provide for unified studies of co-located generation and load. Despite the efficiency of placing an electricity user and generator directly next to each other — reducing the need for transmission — regulations often treat them as separate entities requiring separate, disjointed studies.
- Utility monopolies on construction of upgrades: Local regulations often prevent large loads from constructing their own network upgrades, even when local utilities have years-long construction queues.
Proposed reforms for large load interconnection
To address these challenges with large load interconnection, the Proposed Rulemaking’s Principles outline a suite of reforms modeled directly on Order No. 2023 (which applied only to generators). The core philosophy is to shift the grid from a “first-come, first-served” system to a model that prioritizes viable projects. By increasing financial barriers to entry and standardizing study procedures, the Commission aims to clear the queue of “phantom” requests and create a transparent fast track for developers who are actually ready to build. Key proposals include:
- Deposits (Principle 4): requiring deposits to weed out speculative projects and to ensure that study resources are focused on viable, large-scale projects.
- Co-location (Principles 3 & 5): allowing hybrid facilities (load paired with generation) to be studied based on net impact, substantially reducing the need for grid upgrades and leading to more efficient studies.
- Cost Certainty (Principle 8): having large loads fund 100% of their assigned network upgrades. While this adds a burden to developers, it provides the predictable cost structure necessary to finance multi-billion-dollar projects, replacing the opaque and often socialized costs of the current system.
- Option to Build (Principle 9): Allowing developers to build their own interconnection facilities (like substations) rather than waiting in utility construction queues.
We strongly endorse these structural reforms as a necessary baseline for fixing the queue and helping the US remain a leader in AI. By adapting the framework that began clearing the generator backlog in Order No. 2023, FERC can filter out the speculative requests that currently paralyze utility planning, providing the regulatory certainty developers need to finance billion-dollar AI infrastructure.
Flexible electricity demand
While the reforms above seek to optimize the interconnection process in a manner akin to what FERC did with generators, Principle 7 proposes a more fundamental change that is unique to users of electricity, with significant implications for the grid.
Electric grids are designed around peak demand — for instance, the single hottest day of the year when everyone runs their air conditioning, or the coldest evening when heating demand surges. These peaks might occur for just a few hours annually, yet we build and maintain generation capacity to serve them since to do otherwise risks blackouts or other system failures if demand exceeds supply. But as the graph below illustrates, on average days there is considerable capacity “in reserve.”
Historically, grid operators and regulators have assumed that every new load requires “firm” service — meaning the grid must be built large enough to support that facility’s full demand at all times, even during the most extreme peak conditions. Principle 7 rethinks this default assumption, with substantial implications if implemented.
Principle 7 proposes that large loads willing to be flexible or “curtailable” — meaning they agree to ramp down power usage during peak grid stress — should receive expedited interconnection studies, potentially compressing timelines to as little as 60 days. This is the most powerful tool in the Proposed Rulemaking for addressing the mismatch between surging AI demand and grid constraints discussed above. By prioritizing projects that can be flexible, FERC creates a “speed-to-power” pathway that allows data centers to come online years faster than the traditional queue would allow, provided they agree to use less power when the grid is most vulnerable.
A recent Camus Energy study provides the first publicly available analysis combining real utility transmission data, system-level capacity modeling, and site-level optimization to evaluate flexible interconnection. The researchers modeled six real candidate sites within a PJM utility’s territory and found notable results:
- Speed gains: A 500 MW data center using flexible grid connections plus “bring-your-own capacity” (BYOC) arrangements can reach full operation in roughly two years, three to five years faster than traditional interconnection.
- Minimal curtailment: Grid power was available for more than 99% of all hours. Only 40-70 hours per year of on-site energy would be needed.
- Internalized costs: Each gigawatt of new data center demand adds $764 million in system supply costs under traditional firm interconnection. Flexible data centers that bring their own capacity, however, can cover nearly 100% of their incremental system costs, thus protecting existing ratepayers.
These findings align with earlier research from Duke University’s Nicholas Institute, which estimated that with average annual curtailment of just 0.25% (less than 22 hours per year), the US grid could integrate 76 GW of new load with minimal expansion. This new study now provides granular, site-specific validation of that potential that could be unlocked with the proposed rulemaking.
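As a quick sanity check of the figures above, the arithmetic is straightforward. This minimal sketch uses only the numbers quoted in the text (the 0.25% curtailment rate from the Nicholas Institute estimate and the 40–70 on-site hours from the Camus study); everything else is simple division over the 8,760 hours in a year.

```python
# Sanity-check the curtailment and availability figures cited above.
HOURS_PER_YEAR = 8760  # hours in a non-leap year

# Nicholas Institute: average annual curtailment of 0.25% of hours.
nicholas_curtailment_hours = 0.0025 * HOURS_PER_YEAR
print(f"0.25% of a year = {nicholas_curtailment_hours:.0f} hours")  # ≈ 22 hours

# Camus: 40-70 hours/year of on-site energy implies grid power
# is available for well over 99% of all hours.
for onsite_hours in (40, 70):
    availability = 1 - onsite_hours / HOURS_PER_YEAR
    print(f"{onsite_hours} on-site hours -> grid availability {availability:.2%}")
```

Even at the high end of the Camus range (70 hours), grid availability exceeds 99.2%, consistent with the “more than 99% of all hours” finding.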
A three-way win
Accelerated interconnection for curtailable loads has the potential to help secure America’s AI leadership as well as help existing customers and grid reliability simultaneously:
- US AI leadership: Principle 7 helps alleviate the problems the US has in providing power to AI infrastructure by offering a “speed-to-power” fast lane. Instead of waiting years for new transmission lines to be built, flexible data centers can connect immediately using existing grid reserve capacity. This helps ensure that the next generation of gigawatt-scale AI clusters is built in the US, keeping the economic engine, scientific innovation, and national security assets of the future under US jurisdiction.
- Existing Customers: The Camus study demonstrates that flexibility doesn’t just speed interconnection; it can also make the overall system more efficient. Flexible connections reduce the generation capacity the system must build, while co-located generation ensures the data center, not other ratepayers, funds the capacity needed to serve its firm load. By eliminating or deferring multi-billion-dollar investments in both generation and transmission, this approach can reduce rates for existing customers rather than increasing them.
- Grid Reliability: Demand flexibility directly addresses resource adequacy shortfalls that are threatening the grid. Rather than adding inflexible demand that exacerbates peak challenges, flexible loads can actually help balance the system. The Camus study found that utilities gain additional demand-side resources to alleviate system stress, turning what could be a reliability liability into an asset.
Building new baseload generation takes years. Large transmission projects can take a decade. But load flexibility is available now — limited only by the regulatory framework to enable it. Principle 7 is thus the most powerful near-term tool in the ANOPR for addressing the collision between surging AI demand and grid constraints.
The case for security: new loads, new threats
While we strongly support the Proposed Rulemaking’s interconnection reform and focus on “speed-to-power,” the proposal falls short on the critical issue of grid security. AI data centers are not merely large electricity consumers — they are increasingly central to the American economy and scientific enterprise. AI systems now contribute to drug discovery, materials science, and defense applications. This centrality will make them increasingly high-value targets. And unlike traditional industrial loads such as aluminum smelters or paper mills, AI data centers are fully networked digital assets that control gigawatts of power through complex software layers. If compromised, they could be manipulated to destabilize the entire grid. Currently, Principle 14 of the ANOPR suggests only a “review” of security standards for these facilities. Given the stakes, we believe this is insufficient. We urged the Commission to establish a new registration category — such as “Large Load Data Center Operator” — and direct the North American Electric Reliability Corporation (NERC) to develop robust cyber and physical security standards specifically for these facilities.
Additionally, we propose conditioning the expedited interconnection pathway on adherence to these stricter standards. If a developer wants the immense value of jumping the queue through flexibility, they should demonstrate they are a hardened asset that contributes to grid security, rather than a vulnerability that threatens it.
Grid risks are here now
The threat isn’t hypothetical. A recent NERC white paper, “Characteristics and Risks of Emerging Large Loads,” provides empirical evidence for threats to the bulk power system posed by AI data centers.2 The paper shows how sophisticated state actors may seek to sabotage AI infrastructure as AI systems become integral to cyber operations. Scenarios include a “Flash Load” attack, where an adversary compromises a facility’s load management system to repeatedly initiate and halt gigawatt-scale jobs, or an attack exploiting the data center’s own protective settings (Low Voltage Ride Through, or “LVRT”) to trigger massive, simultaneous load trips and destabilize the grid. Nor is this purely theoretical: in July 2024, a transmission fault caused the simultaneous loss of ~1,500 MW of data center load, producing a dangerous frequency spike.
As AI facilities become integral to both the American economy and defense industrial base, they present an increasingly attractive target for nation-state adversaries. Maintaining AI leadership requires not merely building infrastructure, but actively bolstering data center security against adversaries.
We are urging the Commission to go further than a simple review identified in Principle 14. We propose that FERC’s new rules provide for a new registration category and a requirement that links speed to security. Specifically, FERC should direct NERC to develop new or modified Reliability Standards, including Critical Infrastructure Protection (CIP) standards that apply directly to this new registered entity category.
NERC’s Large Loads Task Force has already completed its first phase: identifying the risks, including cyber risks. The task force’s planned next steps include assessment of gaps in existing practices, requirements, and reliability standards, and development of reliability guidelines for risk mitigation.
What happens next
The comment period for the Proposed Rulemaking closes in early December 2025. Following a review of these comments, FERC’s next step will be to decide whether to issue a Notice of Proposed Rulemaking (NOPR). Unlike the high-level “principles” in the current ANOPR, the NOPR will contain the actual draft text of the new regulations. This will be our chance to see the specific legal definitions FERC intends to use — for example, how exactly it defines “curtailable load” and what specific security mandates (if any) make it into the draft rules.
The Department of Energy has requested that FERC take final action by April 30, 2026. This is an ambitious timeline for a major new regulation; for context, similar rulemaking efforts in the past have often taken years to finalize.
We will be watching closely to see if the Commission can meet this aggressive deadline. Regardless of the speed, the issuance of the NOPR will trigger another round of public comments, giving stakeholders one final opportunity to shape the rules before they become law.
Conclusion
By finalizing rules that standardize the load interconnection process and unlock the dormant capacity of our grid through flexibility, we can secure America’s lead in the technology that will define the coming century. However, we cannot allow this technology to undermine the power system on which it and our economy more generally depend. By adopting accelerated access in exchange for hardened assets, FERC can ensure that the US grid remains not only the engine of our economy but a bulwark of our national security.
Read our full filing here.
1. Under the Federal Power Act (FPA), FERC has exclusive jurisdiction over the "transmission of electric energy in interstate commerce." 16 U.S.C. § 824.
2. North American Electric Reliability Corporation, Characteristics and Risks of Emerging Large Loads, Large Loads Task Force White Paper (July 2025), https://www.nerc.com/globalassets/who-we-are/standing-committees/rstc/whitepaper-characteristics-and-risks-of-emerging-large-loads.pdf.