Practical Ways to Modernize the Schedule A List
Executive summary
The Department of Labor’s (DOL) Schedule A list streamlines the application process for an employment-based green card in occupations with excess labor demand. It has not been updated in 33 years. In December 2023, DOL opened a Request for Information (RFI) about how it could modernize Schedule A using an objective, transparent, and reliable method.
Key findings:
- A majority of the substantive comments recommend updating the Schedule A list. Most of these comments cite the value of Schedule A as a tool to address U.S. labor shortages and provide greater certainty and predictability to immigrants, their families, and their employers. Commenters also say they would value Schedule A as a source of vital information about labor gaps in the United States. This information would both improve the employment-based green card system and allow the government to develop more effective workforce training and reskilling programs.
- Commenters identified 38 economic indicators that could be used to make determinations about the occupations on Schedule A, and provided useful information about the associated costs and benefits of each indicator. Some indicators, like wage trends, are supported by both employer and worker organizations. Others, like vacancy postings, do not have broad stakeholder support.
- Commenters suggested new worker protections that could be paired with a Schedule A update. Suggestions included prohibiting Schedule A employers from using “stay-or-pay” provisions that tie workers to their employer, and requiring employers to attest that they have complied with permanent labor certification (PERM) regulations for the past five years.
Introduction
A simple internet search will return dozens of news articles saying the U.S. economy is experiencing intractable labor shortages in occupations ranging from construction to hospitality to healthcare to research.[ref 1] But how can a layperson (or the federal government) distinguish legitimate labor shortages from situations where wages are simply too low to attract more workers? How can we determine which industries are experiencing the worst, longest-lasting, or most damaging labor shortages?
Readers may be surprised to learn that there is no federal method for evaluating labor shortages based on economic data. Federal and state agencies spend millions collecting data about the health of the workforce and the economy, but offer no transparent method for identifying categories of employment that need more workers. This opacity has ramifications far beyond research. Students have difficulty determining which fields to study to land good jobs. Schools do not know what skills their graduates need to be competitive in the job market. Companies, states, and the federal government do not know how to maximize the benefits of limited workforce training resources.
DOL[ref 2] has a neglected mechanism, called Schedule A, that could be used to identify areas of severe labor shortage, but it has not been updated in decades. However, for the first time in over thirty years, DOL is considering a Schedule A modernization. In late 2023, it issued an RFI asking the public for input. By the time the comment period closed in mid-May 2024, DOL had received over 2,000 comments with suggestions, criticisms, and new ideas.
An RFI is a tool available to a federal department or agency in advance of notice-and-comment rulemaking, allowing it to collect knowledge distributed among stakeholders across the country. The Administrative Conference of the United States has recommended that U.S. agencies take such steps to enhance public engagement in rulemaking and to avoid regulatory proposals that fail to first assemble the comprehensive information needed to tackle complex problems.[ref 3] Such recommendations are intended to better ensure agencies obtain “situated knowledge” as part of rulemaking efforts – an acknowledgement that public officials benefit from having access to knowledge that is widely dispersed among stakeholders.[ref 4] “In particular, agencies need information from the industries they regulate, other experts, and citizens with situated knowledge of the field in order to understand the problems they seek to address, the potential regulatory solutions, their attendant costs, and the likelihood of achieving satisfactory compliance.”[ref 5] When departments and agencies issue an RFI, they then thoroughly analyze the comments filed to decide how to proceed.
This paper aims to analyze the comments submitted to DOL about Schedule A, discuss the concerns raised, and present the options DOL has to develop a reliable, data-driven methodology to assess labor shortage and finally issue an update to Schedule A. It assesses the substantive comments, discusses their recommendations for data sources, economic indicators, and methods, and identifies the pros and cons of each suggestion.
Developing a reliable methodology for identifying labor shortages has long been thought too difficult an undertaking. This assessment of RFI comments suggests a path forward.
Overview of Schedule A
Starting with the Immigration and Nationality Act of 1965, Congress gave the Secretary of Labor the responsibility to certify that before a foreign worker is hired by a U.S. employer, “there are not sufficient workers in the United States who are able, willing, qualified, and available at the time of application for a visa and admission to the United States and at the place to which the [foreign worker] is destined to perform such skilled or unskilled labor, and the employment of such [foreign workers] will not adversely affect the wages and working conditions of the workers in the United States similarly employed.”[ref 6] Shortly after the passage of this act, DOL realized that approving individual labor certifications was highly time-consuming. To combat processing delays of work visas, the Secretary of Labor developed Schedules.[ref 7] Schedule A pre-certified categories of employment where there were not enough U.S. workers ready, willing, and able.[ref 8]
Schedule A has been updated multiple times since 1965. It included categories of employment of critical importance to the United States at the time, including people with degrees in engineering, physics, chemistry, and pharmacology, among others.[ref 9] The final substantial update to the Schedule A list pre-certifying categories of employment was in 1991.[ref 10] Since then, it has included only three categories: physical therapists, nurses, and a poorly understood category of foreign workers who possess “exceptional ability in the sciences, arts, or performing arts.”[ref 11] According to DOL, the agency used the workforce data it collected, stakeholder feedback, and (for a few years) data from the Department of Health and Human Services (HHS) to determine which categories of employment should be included in Schedule A.[ref 12] The exact method that DOL used to make these determinations is not publicly available. In the last 33 years, the nature of work, federal data collection, and research about labor shortages have evolved significantly.
Recent federal actions on Schedule A
In recent years, there has been growing momentum for DOL to modernize its method for Schedule A pre-certification, including recommendations from the House Select Committee on China and the National Academies of Sciences.[ref 13] This momentum culminated in President Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). The Administration urged DOL to examine ways to modernize Schedule A to attract and retain talented workers as leadership in AI and other critical and emerging technologies becomes increasingly important internationally.[ref 14] Fifty-two days later, DOL issued an RFI seeking to understand the labor market in science, technology, engineering, and mathematics (STEM) and non-STEM fields, and whether Schedule A could be a tool to satisfy protracted demand.[ref 15] DOL notes that, “In the 1960s and 1970s, Schedule A was the product of an extensive process of economic and labor market analysis of employment demand and supply by the Department. Schedule A occupations were later identified through the application of multiple factors, including unemployment rates; occupational projections; evidence submitted by trade associations, employers, and organized labor; and technical reviews by federal and state staff with expertise in these areas. The occupational listings in the Schedule were reviewed and modified at regular intervals to reflect changing economic and labor market conditions and to prevent adverse effects on the wages or working conditions of U.S. workers.” However, since the Schedule A list has not been updated in over 30 years, DOL does not have comprehensive data on how employers use Schedule A designations or whether it should be expanded.[ref 16]
DOL asked questions to understand whether any occupations should be added to Schedule A and why, what sources of data and methods should be used to determine labor shortage, how DOL could develop its own methodology, and how to define occupations in STEM. The full list of questions can be found in Appendix B. In subsequent sections, this paper will provide an overview and analysis of the comments DOL received in response to its RFI. All comments are publicly available on regulations.gov.[ref 17]
Summary of RFI results
By the time DOL closed the RFI’s comment period on May 13, 2024, the agency had received 2,036 public comments. Of those, 151 provided sources or substantive arguments for their opinions on the modernization of Schedule A. Of those substantive comments, 53% support and 47% oppose updating the list.
These commenters come from a variety of sectors, including healthcare, scientific research, government, and labor. Of the total number of comments, 96% were submitted either anonymously or by an individual. The rest came from companies (33), professional or industry associations (23), nonprofit organizations (20), think tanks (8), governmental organizations (6), and labor unions or workers’ rights organizations (6). In addition, 18% of the comments included information identifying the commenters as being in the information technology industry, 6% of respondents were from the scientific research sector, 3% were from the policy sector, and only 1% were in healthcare. Unfortunately, 71% of all submitters either did not mention which industries they were part of or were not in the industries specifically examined in this analysis (academia, agriculture, construction, defense, finance, healthcare, hospitality, information technology, policy/government, or science).
Those in support of modernizing Schedule A suggested a wide range of occupational categories to be added, such as engineering, robotics, quantum computing, biomedical research, nursing, emergency medicine, K-12 education, manufacturing, agriculture, and construction. One commenter even suggested that any occupational categories that have a PERM approval rate of 98% or higher should be added. Commenters suggested DOL use several sources of data for a Schedule A update, including both public and private sources at the national, regional, and local levels. Commenters recommended ways DOL can improve its own data collection or partner with other agencies to increase the granularity of federal data.
Concerns about the treatment of foreign nurses who have used Schedule A to come to the United States were a frequent topic of discussion and produced recommendations about how DOL can reduce fraud and abuse in the employment-based visa system. The RFI also produced a robust list of economic indicators DOL should use if it undertakes the development of a methodology to update Schedule A. These indicators ranged from ways to measure job openings, unemployment rates, and wages to the working conditions, training opportunities, and demographics of the current workforce. Beyond just updating the Schedule A list, commenters also had recommendations about improving the process to obtain a prevailing wage determination, expanding special handling, and creating an outside consultative body of experts who would analyze labor shortage questions in the future.
In order to best consider and understand the numerous substantive comments filed in response to the RFI, we discuss comments on possible Schedule A revisions in the following groupings:
- Should DOL modernize Schedule A?
- Recommended methodologies to update Schedule A
- Recommended worker protections
- Recommended data sources
- Defining STEM
Should DOL modernize Schedule A?
In this section we describe commenters’ feedback as to whether DOL should modernize Schedule A. First, we discuss criticisms of Schedule A modernization. We then discuss support for Schedule A modernization. Lastly, we briefly outline the occupational categories that commenters specifically recommended should be added to Schedule A.
Arguments against modernizing Schedule A
While a majority of substantive comments supported an update to Schedule A, a significant minority were skeptical of the need for an update. In this section, we distinguish between two categories of arguments deployed by those skeptical of a Schedule A update: a) shortage-denying arguments and b) methodology critiques. As we discuss, we think both contribute important insights for DOL and should inform the methodology it eventually adopts.
The first category of criticism is shortage-denying arguments. These arguments presume that we can use existing sources of evidence to determine whether labor shortage conditions prevail, but assert that the evidence does not support the existence of a shortage. By presuming that we have sufficient evidence to make such judgments, this kind of argument is consistent with the Department adopting a transparent, evidence-based, and outcome-neutral methodology to identify labor shortage. Indeed, many of the comments that fall into this category provide useful information about what that methodology could look like.
Many shortage-denying commenters expressed concerns that a Schedule A update will be captured by self-interested firms that peddle false claims of a shortage. It is precisely for this reason that a transparent, objective, and reliable process will be critical: to evaluate which claims are legitimate and which are not. Many of the comments denying specific claimed shortages identify precisely the categories of evidence and even the specific indicators that DOL should be looking at to make determinations across all occupations.
For example, the Center for Immigration Studies (CIS) suggests that “DOL should base any determination of a ‘labor shortage’ on wage trends,” in conjunction with “the number of working-age Americans who possess relevant degrees and the share of such degree holders who currently work in the pertinent occupations.”[ref 18] The Economic Policy Institute (EPI) echoes some of CIS’s recommendations, concluding that “rising wages and the wages offered in an occupation are a key — if not the primary — indicator of whether a shortage exists in an occupation.”[ref 19] The view that wage trends are critically important is also shared by the International Federation of Professional and Technical Engineers (IFPTE), which points to wage growth and lagged wage growth, in both nominal and real terms, as compelling evidence.[ref 20] Given essentially universal support for the informative value of wage trends, we suggest these be the core of any methodology DOL adopts.
The AFL-CIO’s comment suggests using the layoff rate as a negative indicator of labor shortage conditions, which also seems quite reasonable.[ref 21] In any attempt to measure this, care should be taken not to conflate quits or job-to-job transitions (i.e., workers quitting to find better jobs) with layoffs. EPI also suggests using enrollment rates among U.S. citizens and legal permanent residents (LPRs) in STEM degree programs as an indicator.[ref 22] On this last point, we caution that while the supply of STEM workers has increased, demand has also increased over the same period. The question for DOL must always be whether demand rises faster than supply can increase to meet it, and how the market fares during the long training period.
At the same time, many commenters warn against heavy reliance on indicators like job openings or the average number of days positions are open, as these may simply reflect bad offers with low pay or poor conditions. CIS also cautions that DOL should not infer too much from low unemployment rates, which may mask people leaving the labor force altogether if they cannot find a job.[ref 23] This caution is shared by the AFL-CIO, which notes that relatively low unemployment is consistent with STEM graduates having education and skills that let them get work in other fields, rather than a shortage of STEM workers.[ref 24]
Other commenters similarly skeptical that labor shortages exist suggest using unemployment, but with caveats. EPI and the Institute for Sound Public Policy (IfSPP) both argue that unemployment on its own is a misleading indicator, and must be compared to historical within-occupation unemployment rather than across-occupation unemployment.[ref 25] In its comment, EPI points out that unemployment rates associated with full employment (and recessions) vary significantly across occupations. They therefore argue that DOL should use occupation-specific full employment rates (i.e., the rate of unemployment when the aggregate economy is in full employment) as a baseline for comparison.
We agree that comparing the economy-wide full employment rate to every occupation may overstate the extent of shortage conditions. In Help Wanted, we suggest choosing a threshold for inclusion in Schedule A by targeting the number of workers in occupations with less than 1.8% unemployment (well below the 2.8% IfSPP identifies for computer workers, or the 2% EPI suggests).[ref 26] But the business cycle affects occupations differently, and full employment across the aggregate economy does not necessarily imply full employment in every occupation. Therefore, we cannot deduce that an occupation is at full employment if and only if its unemployment is lower than it is when the aggregate economy is at full employment. If an occupation has high barriers to entry and sits in an industry unexposed to or anticorrelated with the business cycle, shortage conditions may persist even during a recession. Meanwhile, another occupation may not face shortage conditions even when the aggregate economy is at full employment. Put another way, if conditions in the aggregate economy simply coincided with occupation-specific conditions, then occupational characteristics would not be needed.
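To make the targeting idea concrete, here is a minimal sketch of how an unemployment-rate cutoff could translate into a count of covered workers. It assumes one already has occupation-level unemployment rates and employment counts; the occupation records, field names, and figures are illustrative placeholders, not the Help Wanted specification.

```python
# Minimal sketch: flag occupations below an unemployment-rate cutoff and report
# how many workers those occupations cover. Records and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Occupation:
    name: str
    unemployment_rate: float  # occupation-specific rate, as a fraction
    employment: int           # number of employed workers

occupations = [
    Occupation("Electrical engineers", 0.012, 300_000),
    Occupation("Registered nurses", 0.015, 3_100_000),
    Occupation("Waiters and waitresses", 0.060, 2_000_000),
]

THRESHOLD = 0.018  # the 1.8% cutoff discussed in the text

shortlisted = [o for o in occupations if o.unemployment_rate < THRESHOLD]
covered = sum(o.employment for o in shortlisted)
total = sum(o.employment for o in occupations)

print([o.name for o in shortlisted])
print(f"Workers covered: {covered:,} ({covered / total:.1%} of workers in the sample)")
```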
In short, arguments that deny the existence of shortages share implicit assumptions with arguments that indicator-based approaches should be used. Both often provide important recommendations about which indicators DOL should find reliable, and both will be useful for DOL to study in constructing a credible, evidence-based methodology.
Schedule A skeptics like the AFL-CIO, EPI, and CIS generally agreed that wage trends are the most important and reliable indicator of shortage conditions, while vacancies and general employment trends are less reliable. We recommend that DOL adopt a method that places significantly more weight on the indicators that Schedule A-skeptical commenters find convincing, and places less weight on indicators skeptics suggest are unreliable.
The second category of arguments consists of methodology critiques. These arguments doubt the feasibility and reliability of particular methodologies that DOL may be considering. Some of these critiques point to important drawbacks of certain methodologies that can simply be avoided. Others point to serious tradeoffs that DOL should carefully weigh and that we believe can ultimately be addressed. We urge DOL not to consider the mere existence of drawbacks to a proposal in isolation, or to compare proposals against imagined perfection. Rather, drawbacks must be weighed against the drawbacks of the opaque status quo.
For example, EPI poses four methodological critiques that it believes support inaction on Schedule A, identifying four weaknesses of an indicator-based methodology of the sort used by other developed countries: lags in data, lack of robustness, unharmonized data, and the lack of “bottom-up” data.[ref 27]
EPI’s comment states that “data available to the Department for determining a shortage are far too old and out of date to credibly establish a current and/or future shortage.” But the data available today are no less frequent than when Congress first conceptualized the use of Schedule A. In fact, there is better data infrastructure today, with more diverse sources, larger sample sizes, better statistical and integrity methods, and more frequent updates. Delays in PERM processing also impose lags comparable to those in federal data collection. In any case, remaining lags in data can be accounted for if the Department ensures identified shortage occupations are persistent and the list is not subject to volatile swings. As we discussed above, there is often a tradeoff between the granularity of data and the frequency of updates, and the Department has to strike a balance. To the extent lagging data is a concern, the Department can still update the list by considering occupations that have been persistently identified in back-to-back years, or by including lagged variables to ensure identified occupations face structural shortage conditions. Green cards, which facilitate permanent immigration, are better suited to addressing persistent, long-lasting shortages than transient ones. Finally, the Department could use an ensemble of sources, incorporating less granular data that is updated more frequently, like the Current Population Survey (CPS).
EPI also criticizes indicator-based methodologies for not being sufficiently robust; in other words, for being sensitive to assumptions. Specifically, they cite fluctuations in the resulting Schedule A list when adjusting parameters on the Institute for Progress’s (IFP) interactive Create Your Own Data-Driven Update to Schedule A tool.[ref 28] We believe they overstate the extent of the sensitivity. As an exercise, we can assign each of the 11 parameter values completely at random to simulate 1,000 Schedule A lists and compare the results. What we get is a robust core of occupations that are relatively insensitive to particular assumptions. For example, electrical and electronics engineers, surgeons, and astronomers and physicists all show up in at least two-thirds of the simulations. Meanwhile, most occupations robustly do not show up in the simulations. Over 88% of occupations show up in fewer than half of our simulations, and 20% of occupations, like welding, soldering, and brazing workers and waiters and waitresses, do not show up in a single one. If there is strong uncertainty around a parameter, DOL can build robustness checks into its methodology. Additionally, sensitivity may not matter if we have good reasons to choose our particular assumptions. The point is not to randomly assign weights, but to use available evidence to pick reasonable assumptions. Many of the RFI responses provide significant helpful evidence about which assumptions to pick, how to weight different indicators, and so on.
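The structure of this robustness exercise can be sketched as follows. This is a minimal sketch under our own assumptions: random indicator weights and a fixed score cutoff stand in for the tool’s eleven adjustable parameters, and the occupation-by-indicator values are placeholders, not the IFP tool’s data or code.

```python
# Sketch of a robustness check: draw random parameter settings many times,
# rebuild the Schedule A list each time, and count how often each occupation appears.
import random
from collections import Counter

random.seed(0)

N_SIMULATIONS = 1000
N_PARAMS = 11          # stand-in for the tool's eleven adjustable parameters
CUTOFF = 0.6           # illustrative score needed for inclusion

# Placeholder data: each occupation gets a value in [0, 1] for each parameter.
occupations = ["Electrical engineers", "Surgeons", "Astronomers and physicists",
               "Waiters and waitresses", "Welding, soldering, and brazing workers"]
indicator_values = {occ: [random.random() for _ in range(N_PARAMS)] for occ in occupations}

inclusion_counts = Counter()
for _ in range(N_SIMULATIONS):
    # Randomly weight the parameters, then score and threshold each occupation.
    weights = [random.random() for _ in range(N_PARAMS)]
    total_weight = sum(weights)
    for occ, values in indicator_values.items():
        score = sum(w * v for w, v in zip(weights, values)) / total_weight
        if score >= CUTOFF:
            inclusion_counts[occ] += 1

for occ in occupations:
    share = inclusion_counts[occ] / N_SIMULATIONS
    print(f"{occ}: included in {share:.0%} of simulations")
```

With real indicator data in place of the placeholders, the inclusion shares reported at the end are what distinguish a robust core list from occupations that only appear under particular assumptions.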
EPI also points to a lack of occupational harmonization across data sources. While we agree that data sources can be improved, a variety of data sources also provides an opportunity to use converging lines of evidence. EPI cites the example of CPS and the Occupational Employment and Wage Statistics (OEWS) using different definitions, noting the crosswalk is not a one-to-one mapping. Ultimately, many government functions have to rely on data that do not have perfect harmonization. We do not think this is an existential threat to evidence-based decision-making.
As to the lack of “bottom-up” data sources, this too poses tradeoffs that the Department should consider. A benefit of including bottom-up sources is that doing so may enhance the legitimacy of the results. However, we note that in conversations with members of the UK’s Migration Advisory Committee (MAC), they cautioned that these sources are the most costly to produce and analyze while providing the MAC the least value, since they are produced by biased sources like employers and are hard to corroborate and compare.
Benefits of updating Schedule A
It is possible to update Schedule A to address severe labor shortages in the present day, and in fact, updating the list aligns with the intent of Congress when the Immigration and Nationality Act of 1965 was passed. The intent was that DOL should certify that “there are not sufficient workers who are able, willing, qualified, and available at the time of application” (emphasis added).[ref 29] However, in fiscal year 2024, it generally takes 400 to 560 days to complete the entire PERM process, leaving aside the waiting period for petitions and applications at the Department of Homeland Security (DHS) and the Department of State after PERM is completed.[ref 30]
Additionally, a majority of workers getting employment-based green cards are adjusting from a temporary work visa, like an H-1B. When a worker is on an H-1B, it is very difficult to change employers or job duties, leaving these workers more vulnerable to poor treatment. Under current law, individuals whose adjustment of status to an employment-based green card has been pending for over 180 days are eligible to change employers more easily, but that clock does not start until after DOL certifies the PERM application.[ref 31]
Modernizing Schedule A could support several of the federal government’s priorities. For example, the CHIPS and Science Act of 2022 provided $52 billion in funding to support the growth of the semiconductor manufacturing industry in the United States.[ref 32] The act also spurred $450 billion in private investments to increase U.S. manufacturing capacity.[ref 33] However, for this funding to be successfully deployed, more workers need to be hired. The Semiconductor Industry Association predicts in its comment that if this need is not addressed, 1.4 million new jobs requiring STEM technical proficiency risk going unfilled.[ref 34] The development of quantum computing technologies has also been a priority. Congress and the Trump and Biden Administrations have prioritized the quantum sciences by creating and supporting the National Quantum Initiative, established by the National Quantum Initiative Act of 2018,[ref 35] expanded via the National Defense Authorization Act (NDAA) for Fiscal Year (FY) 2022,[ref 36] and further supported by the CHIPS and Science Act of 2022.[ref 37] [ref 38] However, as the Center for a New American Security[ref 39] and the Quantum Economic Development Consortium (QED-C) note,[ref 40] the difficulty in recruiting enough workers threatens U.S. national security.
This competition for STEM talent is not happening in a vacuum. Competition with China has been a central issue for both the Trump and Biden Administrations. It has also become a concern of Congress, as noted by the House Select Committee on the Chinese Communist Party. In its report Reset, Prevent, Build: A Strategy to Win America’s Economic Competition with the Chinese Communist Party, the committee recommends that Schedule A be updated to “add relevant occupations critical to national security and emerging technology,” and that “Schedule A be updated continuously to reflect the dynamic job market and current market conditions and demands in certain industries.”[ref 41] A modernized Schedule A can help bolster the United States’ position on the global stage, by making the country’s processes to recruit international talent more predictable and certain. The importance of attracting international talent has been recognized in the Biden Administration’s Executive Order on AI and its order for DOL to publish the very RFI we are discussing in detail in this report.[ref 42]
Evidence already shows that the number of STEM graduates outside of the United States is growing rapidly. The Center for Security and Emerging Technology (CSET) estimates that by 2025, China will produce more than 77,000 STEM PhD graduates per year, compared to about 40,000 in the United States.[ref 43] QED-C notes that in 2021, the European Union graduated 113,000 students in quantum-relevant fields, versus 55,000 graduates in the United States.[ref 44]
A growing proportion of STEM graduates in the United States are in the country on a temporary visa, and will require a green card if they are to stay and contribute to our scientific and technological goals in the long term. In its comment, the American Physical Society[ref 45] cites another CSET study noting that between 2000 and 2019, foreign-born students accounted for more than 40% of the 500,000 STEM PhDs awarded, and 46% of the doctoral degrees in physics.[ref 46] In 2021, the National Center for Science and Engineering Statistics (NCSES) found that half or more of graduate students studying critical STEM fields, such as AI (50 percent); electrical, electronics, communications, and computer engineering (61 percent); and computer science (66 percent), were on temporary visas.[ref 47] And current immigration policies are actively making it more difficult to attract and retain this talent. One survey of scientists at research-intensive universities found that 90% of respondents agreed that the unpredictability and uncertainty of the system hindered the development of the scientific workforce and the competitiveness of high-tech industries.[ref 48]
The addition of specific occupations to Schedule A
Many commenters suggested specific occupations be added. We do not endorse or reject any of these occupational recommendations, but are simply reporting on the results collected by DOL. In total, 166 comments provided these suggestions, 117 of which suggested occupations requiring a high level of skill.
The table below outlines the categories in which commenters suggested adding specific occupations. It is important to note that these authors’ analysis categorized recommended occupations into eight groups; seven are listed in the table below, and one group, “Mathematics,” received no recommendations. The category “Other non-STEM” included occupations not defined as hospitality, healthcare, or construction, such as accountants, teachers, meat and seafood processors, furniture manufacturers, agricultural commodity producers, veterinarians, and founders and/or owners of startups. The category “Other STEM” included occupations such as biomedical researchers, trust and safety roles, and “occupations that are in the United States’ critical national interest.”
Occupational categories recommended in RFI responses
Occupational category | Number of comments | Percentage of comments |
---|---|---|
Science and engineering | 61 | 37% |
Healthcare | 50 | 30% |
Other non-STEM | 38 | 23% |
Information technology | 9 | 5% |
Other STEM | 6 | 4% |
Construction | 4 | 2% |
Hospitality | 1 | 1% |
Within these categories, commenters suggested a wide range of occupations to be added to Schedule A, both high- and low-skill. In STEM, commenters recommended adding:
- Industrial engineers
- Automation and robotics occupations
- Electrical engineers
- Quantum technology workers
- Biomedical researchers
- AI engineers
- Atmospheric and space scientists
- Astronomers and physicists
- Natural science managers
- Environmental engineers
- Workers with degrees or experience in fields the federal government has designated as critical and emerging technologies
In healthcare, commenters recommended adding:
- Direct care occupations
- Registered nurses
- Nurse practitioners and nurse midwives
- Physical therapists
- Pharmacists
- EMTs and paramedics
- Surgeons
- Psychologists
- Counselors
- Audiologists
- Diagnostic-related technologists and technicians
- Doctors serving in federally designated Medically Underserved Areas/Health Professional Shortage Areas
Other suggested occupations included:
- K-12 teachers
- Construction workers
- Hospitality workers
- Furniture manufacturers
- Agricultural commodity producers
- Veterinarians
- Accountants
- Urban and regional planners
- Training and development managers
- Architectural and engineering managers
- Founders and owners of startups and existing businesses
- L-1B intracompany transferees with specialized knowledge
Recommended methodologies to update Schedule A
Indicator-based approaches
One of the more common types of methodologies that have been developed to assess labor shortages uses economic indicators. These indicators capture many different economic conditions that can exacerbate, or alleviate, excess labor demand. In some cases, an indicator-based approach is combined with a more qualitative method to incorporate feedback from employers, business associations, governments, workers and workers’ rights organizations, and members of the general public. The next subsection provides additional details on qualitative, or nominations-based, assessments.
By reading all 2,036 comments submitted to the RFI, we have compiled the 38 economic indicators that commenters recommended be included in labor shortage analysis. The indicators vary in level of granularity and availability, but can be grouped into several buckets. Those buckets are: price data; employment data; talent pipelines; hirings, layoffs, and turnover; migration; and organized labor. The tables for each bucket of indicators can be found in Appendix C, and include the commenters that suggested each indicator, the possible sources of data for each indicator, and their availability.
Some of the most commonly suggested indicators include the retention rate of workers, the average time to fill positions, wage increases at a rate greater than inflation, company-level investment in workforce training, and job-to-job flows to measure how many people switch jobs within an occupational category. The most commonly suggested indicator was forecasted occupational employment growth relative to total employment growth. Many commenters suggested using the Bureau of Labor Statistics’ (BLS) Employment Projections for this purpose. However, as explained in further detail below, the projections are not meant to be used to forecast labor demand or supply, and are generated under the assumption that there are no labor shortages.
Many commenters recommended using indicators that would be sourced from internal company data, such as quit rates, the number of qualified applicants responding to job openings, the rate of job offers being declined, and the number of qualified foreign applicants versus U.S. applicants for an occupation. Internal company data could fill informational holes that appear in federal databases, and provide rich context on local and regional talent pipelines.
Nevertheless, there were multiple suggested indicators that could be measured using only publicly available data. BLS’ Job Openings and Labor Turnover Survey (JOLTS) and the Census Bureau’s CPS could supply data for indicators such as the rate of layoffs, the number of job openings, the number of workers hired versus open positions, unemployment rates, and the age of the current workforce. Both proponents and opponents of updating Schedule A suggested that wage trends are among the most reliable indicators of shortage conditions.
An example of a purely indicator-based approach is the Help Wanted Index from IFP.[ref 49] The Index uses ten indicators, weighted evenly, to create a “score” for each of almost 400 occupational categories. Those indicators were chosen to strike a balance that is responsive to both long-term and short-term changes in labor demand, and include percentage change in the median wage over one year and over three years, job vacancy postings per worker, percentage change in employment over one year, percentage change over three years in median weekly paid hours worked, labor force non-participation, income premium, unemployment rate, three-year lagged unemployment rate, and job-to-job transition rate over one year.[ref 50]
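As an illustration of how an evenly weighted composite of this kind can be computed, the sketch below standardizes each indicator across occupations and averages the results with equal weights. The standardization choice, the indicator names, and the toy values are our own assumptions, not IFP’s published implementation.

```python
# Sketch: build an evenly weighted composite score from several indicators.
# Each indicator is z-scored across occupations so units are comparable,
# then the z-scores are averaged with equal weights. Values are illustrative.
from statistics import mean, pstdev

indicators = {
    # occupation: {indicator_name: value}
    "Electrical engineers":   {"wage_growth_1yr": 0.06, "vacancies_per_worker": 0.08, "unemployment_rate": 0.012},
    "Registered nurses":      {"wage_growth_1yr": 0.05, "vacancies_per_worker": 0.10, "unemployment_rate": 0.015},
    "Waiters and waitresses": {"wage_growth_1yr": 0.03, "vacancies_per_worker": 0.05, "unemployment_rate": 0.060},
}

# Indicators where a higher value signals *less* shortage get flipped.
flip_sign = {"unemployment_rate"}

names = list(next(iter(indicators.values())).keys())
scores = {occ: 0.0 for occ in indicators}

for name in names:
    values = [vals[name] for vals in indicators.values()]
    mu, sigma = mean(values), pstdev(values) or 1.0
    for occ, vals in indicators.items():
        z = (vals[name] - mu) / sigma
        scores[occ] += (-z if name in flip_sign else z) / len(names)  # equal weights

for occ, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{occ}: {score:+.2f}")
```

A ranking built this way would then be cut off at whatever score threshold the methodology adopts, which is where the indicator choices and weighting debates discussed above come into play.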
Christophe Combemale of Valdos Consulting LLC developed a similar methodology with Andrew Reamer and Jack Karsten of George Washington University, but these authors instead divided the indicators into four separate tests of labor demand, labor supply, workforce training, and worker transitions, which would apply after an occupation is nominated (an approach described in the next subsection).[ref 51] The demand test is intended to map labor demand across the country according to occupations, skills, and educational attainment. Using data from OEWS, online job postings, and state-based training infrastructure, DOL would aggregate labor demand at least at the state level into a central, public map of in-demand occupations. Next, DOL would conduct the supply test to determine if labor demand is being met by the domestic labor market. This test would include an analysis of online job postings data, unemployment insurance (UI) records, real wage data controlled for productivity gains, and data on wages relative to similar occupations. The training test aims to answer whether labor demand can be “credibly met in a timely fashion” by workforce training programs. This test requires that DOL establish a clear time horizon over which training or job transitions might be expected to meet demand, including a reasonable amount of time for the administration of Schedule A. This analysis would require an understanding of the training programs available that could deliver relevant occupational certifications. Information from the Survey of Earned Doctorates (SED) and Workforce Innovation and Opportunity Act (WIOA) training provider data would be useful in the analysis. If the training time required exceeds the amount of time it takes to update Schedule A by a set amount, it would indicate that it might be useful to include that occupation on Schedule A. DOL would then conduct the transition test, asking if labor demand could be “credibly met” if workers transitioned into the examined occupation from occupations with similar skills. This test would be informed by Carnegie Mellon University’s Workforce Supply Chain Initiative.[ref 52] The test would also incorporate data on realized worker transitions from CPS, UI records, and private sources, similarities in skills between the examined occupation and adjacent occupations, and similarities in the occupations’ industries.
Both of these methods, the Help Wanted Index and the four tests, are intended to be conducted by DOL itself. This can be beneficial because DOL has access to a wealth of economic data from federal and state governments. However, such a model carries an administrative burden, and the agency would also want to incorporate outside feedback into its analysis.
To mitigate this administrative burden, other commenters suggested that DOL establish an independent body of experts to develop and carry out the methodology.[ref 53] These experts, drawn from both economics and workers’ rights fields, would apply for appointment to the committee, and all of their decisions would be made with public data and be accessible to outside stakeholders. This structure is similar to the MAC process that the United Kingdom previously used for its own labor shortage assessments.[ref 54]
DOL already has a committee model that could work well for establishing this body of experts: the Workforce Information Advisory Council (WIAC).[ref 55] Established by WIOA in 2014, the WIAC is a committee of workforce and labor experts who represent “a broad range of national, state, and local data and information users and producers.”[ref 56] The members are appointed by the Secretary of Labor for three-year terms and cannot serve more than two consecutive terms.[ref 57] The committee advises the Secretary on improving the U.S. workforce and labor market information systems and on better understanding the challenges that workers face in the United States today.
Nominations-based approaches
Besides indicator-based methods, the other major category of methodological recommendations was for processes driven by stakeholders requesting that occupations be listed on Schedule A. A nominations-based approach relies on testimony from organizations that are steeped in labor supply and demand issues and that understand how the labor market is behaving in their local communities.
Many commenters recommended that DOL take inspiration from DHS’ method for updating its STEM Designated Degree Program List.[ref 58] This list includes the fields of study that DHS has determined are STEM. International students applying for a STEM Optional Practical Training (OPT) extension must be studying one of the listed fields to qualify.[ref 59] Members of the public can nominate fields to be added to the list, with an annual feedback deadline of August 1. The Student and Exchange Visitor Program (SEVP) within DHS evaluates each nomination to see if the fields are “generally considered to be… STEM degree[s] by recognized authorities, including input from educational institutions, governmental entities and non-governmental entities.”[ref 60] SEVP also examines the National Center for Education Statistics’ (NCES) definitions of the nominated fields and any supporting materials provided by the nominators.
Going a step further, one commenter suggested that all nominations submitted to DOL be posted publicly, and that DOL allow stakeholders to comment on those nominations and provide their own data to support or argue against them.[ref 61] After collecting all of the nominations and feedback, DOL would analyze the submissions, compare nominations with its own data, and issue an updated Schedule A list.
It was also recommended that DOL develop a process similar to the one used for updating Appendix A to the Preamble-Education and Training Categories by O*NET-Standard Occupational Classification (SOC).[ref 62] In 2021, DOL issued a Federal Register Notice announcing that it planned to update the appendix to better align with the 2018 SOC codes and the O*NET database.[ref 63] The announcement gave stakeholders an opportunity to provide information to DOL about how professional and non-professional job titles have changed over time, and established a predictable cadence for future updates.
DOL also received recommendations that it look at its own past processes to update Schedule A efficiently. Specifically, DOL could take inspiration from its Reduction in Recruitment (RIR) standards to help determine which occupational categories should be eligible for Schedule A.[ref 64] When RIR was active, employers reached out to their respective state workforce agencies and reported that they had already tried, unsuccessfully, to recruit for an occupation. The employers would submit evidence of their past recruitment and ask for RIR. Unlike under the current system, the state workforce agencies made the decision as to whether an employer could use RIR. To help modernize Schedule A, DOL could have the states identify which occupations do not have enough U.S. workers, since states best understand the labor needs of their local communities.
Lastly, a subset of recommended nominations-based approaches build upon DOL’s current labor certification systems for green cards and H-1Bs. For example, a group of commenters recommended examining PERM data and adding to Schedule A any occupations that are approved 98% of the time when employers submit PERM applications.[ref 65] Such a method relies on data that DOL already knows well, and does not require additional data collection. It also allows the list to be quickly updated at predictable intervals. Another recommendation holds that a true labor market test at the initial point of hire is the best way to identify labor shortages. This test would occur at the time of filing either an I-129 petition for a nonimmigrant worker or an I-140 immigrant petition for a foreign worker, and would require that employers provide proof of good-faith, real-world recruitment efforts.[ref 66] This method, like the current processes for PERM and Labor Condition Applications (LCAs), puts much of the administrative burden on the employer to prove that they were unable to find U.S. candidates. It could also modernize current processes by removing the requirement to publish job advertisements in the newspaper. However, the commenter did not include a recommendation for the threshold that would qualify an occupation as scarce.
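The 98% approval-rate rule described above is simple enough to sketch directly: tally certifications and denials by occupation code and flag occupations at or above the cutoff. The field names and sample records below are illustrative placeholders, not the actual schema of DOL’s PERM disclosure files.

```python
# Sketch: compute PERM approval rates by occupation and flag those at or above 98%.
# Field names and records are placeholders, not the actual disclosure-file schema.
from collections import defaultdict

perm_cases = [
    {"soc_code": "15-1252", "status": "Certified"},
    {"soc_code": "15-1252", "status": "Certified"},
    {"soc_code": "15-1252", "status": "Denied"},
    {"soc_code": "29-1141", "status": "Certified"},
    {"soc_code": "29-1141", "status": "Certified"},
]

counts = defaultdict(lambda: {"certified": 0, "total": 0})
for case in perm_cases:
    occ = counts[case["soc_code"]]
    occ["total"] += 1
    if case["status"] == "Certified":
        occ["certified"] += 1

THRESHOLD = 0.98
for soc, c in counts.items():
    rate = c["certified"] / c["total"]
    flag = "candidate for Schedule A" if rate >= THRESHOLD else "below threshold"
    print(f"{soc}: {rate:.1%} approved ({flag})")
```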
Other recommendations
Commenters did not just provide feedback on how DOL could update Schedule A. They also proposed ways that the agency could improve other parts of the employment-based immigration process.
Besides PERM, the other major piece of the DOL process for obtaining an employment-based green card is the Prevailing Wage Determination (PWD). Employers are required to pay foreign workers the prevailing wage, or “the average wage paid to similarly employed workers,” for the occupational category in the area where the job will take place.[ref 67] Unfortunately, obtaining a PWD can take months. At the beginning of August 2024, DOL stated that it was processing PWDs that were submitted between eight and 11 months prior.[ref 68] One way commenters thought DOL could shorten PWD processing times would be to offer expedited processing for a fee, similar to the premium processing system used by U.S. Citizenship and Immigration Services (USCIS).[ref 69] USCIS will process certain forms within 15 business days when paid a premium processing fee; if the agency is unable to meet that deadline, the fee is returned to the petitioner.[ref 70] This fee helps pay for additional processing capacity at USCIS and, if implemented at DOL, could do the same. Another idea is to create an expedited process for obtaining PWDs that is similar to DOL’s processing of LCAs for H-1Bs.[ref 71] Once submitted, DOL processes LCAs within seven business days.
One commenter recommended that DOL expand special handling to permit employers to use previously completed competitive recruitment to hire workers with a STEM Master’s or PhD from a U.S. institution.[ref 72] Special handling is currently only available for colleges and universities hiring foreign workers for teaching positions, allowing them to satisfy the labor certification process with a recruitment that looks for the most qualified candidate for the position.[ref 73] In contrast, the typical PERM process requires that employers look for a “minimally qualified” U.S. candidate.
Many of the commenters discussed how DOL should collect outside feedback for a regular update of Schedule A.[ref 74] Historically, DOL has issued an RFI or a Notice of Proposed Rulemaking (NPRM) every time it has updated Schedule A, but RFIs are time-consuming to issue, and repeatedly going through the rulemaking process would be highly burdensome to DOL’s already overburdened staff. One commenter suggested that DOL issue a single NPRM laying out the entire process by which DOL would collect stakeholder feedback, evaluate data, and publish updates to Schedule A. All future updates would then be made without notice-and-comment rulemaking. The recommended intervals for updates were every two, three, or five years.[ref 75] Regularly updating Schedule A would provide much-needed predictability and certainty to the system for both employers and employees. It would also avoid the situation that DOL is currently experiencing, in which Schedule A is over 30 years out of date.
Recommended worker protections
A common concern among workers’ rights advocates is that the expansion of the Schedule A list could lead to increased rates of fraud and abuse.[ref 76] Foreign nurses, who have been eligible for Schedule A for decades, have been the victims of well-documented abuses, particularly by staffing firms. The National Employment Lawyers Association (NELA) and the National Institute for Workers’ Rights (NIWR) cite several court cases about these abuses in their comment.[ref 77] When recruiting foreign nurses, the sponsors assume some expenses, such as visa processing fees, licensing exam fees, and airfare. Some firms try to recoup those costs by including “training repayment provisions” or “stay-or-pay provisions” in the nurses’ contracts.[ref 78] These provisions require that the foreign nurse work for the staffing firm’s clients for a certain number of years, or else pay a penalty sometimes upward of $100,000 plus the firm’s legal fees.[ref 79] Any foreign nurse who comes to the United States through Schedule A receives a green card, which affords them the same working rights as U.S. citizens, including the ability to change jobs whenever they want. Workers’ advocates argue that stay-or-pay provisions significantly infringe on the foreign nurses’ working rights and trap them in jobs that are unsafe or abusive. Other common provisions in foreign nurse contracts can also lead to abuse, including lengthy non-competes, forced arbitration, and non-disclosure or confidentiality agreements with hefty penalties if the foreign nurse speaks out about unsafe or abusive working conditions.
Workers’ rights advocates knowledgeable about the abuses foreign nurses have experienced recommend in their comments that DOL introduce guard rails to Schedule A to reduce the likelihood that future nurses (and any other future workers coming to the country as a result of the list) are mistreated by their employers. NELA and NIWR recommended that DOL state clearly on its website and in its informational materials about Schedule A that provisions with a high likelihood for abuse, such as non-competes, forced arbitration, non-disclosure agreements, breach or penalty fees, or training repayment requirements, are not permitted in the employment contracts for Schedule A workers.[ref 80] They also recommend that fact sheets be provided to Schedule A foreign workers informing them of their working rights at many points during the recruitment and hiring process, including at embassy interviews, upon receipt of an offer letter, and as part of the employment contract. If an employer fails to comply with these requirements, NELA and NIWR recommend that DOL’s Office of Foreign Labor Certification (OFLC) bar the employer from using Schedule A in the future.[ref 81]
Other commenters suggested additional guard rails that could be applicable to any occupation. The recommendations include:
- Requiring employers to submit documentation to DOL’s Employment and Training Administration (ETA) before being eligible to hire through Schedule A. The documentation would attest that the company has been in compliance with PERM regulations for the past five years.[ref 82]
- Requiring that before an employer is eligible to hire through Schedule A, they maintain a workforce in which more than half of the workers are U.S. workers.[ref 83]
- Requiring employers to advertise their Schedule A-eligible jobs on the internet, rather than in a newspaper as required for a traditional PERM application.[ref 84]
Another suggestion some commenters make is to apply Schedule A only to petitions filed for OEWS Level 3 and Level 4 jobs, to ensure all Schedule A beneficiaries are paid more than the median wage for their occupation.[ref 85] EPI points out that most PERMs are for Level 1 and Level 2 jobs and hence will be below the median wage. While fixing the PWD system is outside the scope of Schedule A reform, a Schedule A update could feasibly take this into account in a number of ways. For example, the Department could use the American Community Survey (ACS) or other data sources that include age and work history data to disaggregate occupations by inferred level of experience.
However, a problem with limiting Schedule A to Level 3 and Level 4 is that it assumes immigrants should be paid more than similarly situated Americans. The median wage for an occupation is simply not the market wage for every individual job, and may be above the market wage for early-career jobs. DOL should instead consider whether lower-experience Level 1 or Level 2 jobs in occupations with sufficiently rising wages and other indicators of labor shortage should still require PERMs to protect workers.
Recommended data sources
A diverse group of 21 commenters thought deeply about which data sources could be useful for DOL in an assessment of labor shortages. These recommendations were submitted by universities, professional and industry associations, state-based nonprofits, research-based nonprofits, think tanks, companies, labor unions, and one anonymous contributor. This section identifies each data source recommended by these commenters and describes its scope and constraints.
Data sources suggested in RFI responses
Data source | Producer | Geographic granularity | Update frequency | Update lag (approximate average) | Focus |
---|---|---|---|---|---|
QCEW | BLS | MSA, county, state, national by industry | Quarterly | Three quarters | Businesses and workers |
CES | BLS | National by industry | Monthly | One month | Workers |
NIPA | BEA | National by industry | Annually | Six months | National economic output |
RIMS II | BEA | Combined Statistical Area, Metropolitan Statistical Area, Metropolitan Division, County | Variable | Variable | Jobs and labor earnings |
EPs | BLS | National by occupation | Annually | One year | Job growth |
CPS | Census | National | Monthly | One month | Workers |
BTOS | Census | National | Biweekly | Two weeks | Employers |
ACS | Census | National | Annually | One year | Workers |
PSEO | Census | Select states, coverage varies | Annually | Two to three years | University graduates |
NSCG | NSF | National | Biennially | One to two years | University graduates |
NTEWS | NSF | National | Biennially | Two years | Skilled technical workforce (STW) |
SED | NSF | National | Annually | Two years | Recent research PhDs |
SDR | NSF | National | Biennially | Three years | Research PhDs under 76 years of age |
Science and Engineering Indicators | NSF | National | Biennially | Three years | STEM fields |
TSA Reports | ED | State | Annually | Four years | Public school districts |
UI wage records | States & BLS | State | Quarterly | One year | Workers |
In-demand occupation lists | Local workforce boards | State and local | Variable | Variable | Jobs |
Future of Jobs Report | WEF | Global | Biennially | One year | Jobs and skills |
JEDx | Chamber of Commerce & T3 Innovation Network | National | Unknown | Unknown | Jobs |
Federal data sources
Department of Labor
Quarterly Census of Employment and Wages (QCEW)
Conducted by BLS, the QCEW publishes a quarterly count of employment and wages as reported by employers. The data is available at the county, state, national, and Metropolitan Statistical Area (MSA) levels by industry, as classified by the North American Industry Classification System (NAICS).[ref 86] It measures the number of businesses, number of workers, and quarterly wages for positions covered by state UI and for federal positions covered by the Unemployment Compensation for Federal Employees program.[ref 87] Several categories of workers are excluded from this survey, including business owners, unpaid family members, certain farm and domestic workers, certain railroad workers, elected officials in the Executive and Legislative branches, members of the armed forces, and workers who have earned no wages during the survey period because of “work stoppages, temporary layoffs, illness, or unpaid vacations.”[ref 88] This survey was suggested by Combemale, Reamer, and Karsten.[ref 89]
One of the major benefits of the QCEW is that it produces new data frequently and throughout the year. This is helpful because much labor data is published annually and many months after the survey period has closed. The survey has also been running for many decades, with data classified by industry since 1938, and it can be broken down to the county level. This geographic granularity would be an asset to labor shortage evaluations because economies are very diverse across the country. However, the fact that the survey is classified only by industry is a drawback for an assessment of labor shortages at the occupational level.
Current Employment Statistics (CES)
Recommended by Combemale, Reamer, and Karsten, the CES is a monthly survey that records the employment, hours, and earnings estimates of workers in nonfarm jobs. It is based on employers’ payroll records.[ref 90] The richest data has been published by BLS since 1990, but aggregate industry data can be found back to 1939. Employment rates are published for both the private and public sectors, but hours and earnings are only published for the private sector. The survey encompasses about 119,000 businesses and government agencies with about 629,000 individual worksites in the United States.[ref 91] CES data is classified according to NAICS.
Similar to QCEW, CES publishes frequently and is, according to BLS, the “first economic indicator of current economic trends each month.”[ref 92] It can measure several indicators that would be important to an assessment of labor shortage, including earnings trends and wage-push inflation, short-term fluctuations in demand, and levels of industrial production. But, also like QCEW, the survey collects data only at the industry and national levels.
Employment Projections
The DOL Employment Projections (EPs) are developed annually by BLS and estimate what the U.S. labor market will look like ten years in the future.[ref 93] The EPs cover 300 industries and 800 detailed occupations and are based on data from the OEWS, CPS, and CES surveys. BLS classifies the occupations in the EPs using 2018 SOC codes and classifies industries using NAICS. BLS creates each projection by examining the labor force; output and other economic measures by consumer sector and product; industry output; employment by industry; and employment by occupation.
Many commenters cited BLS’ EPs in their arguments, and several (including Combemale, Reamer, and Karsten,[ref 94] the Bipartisan Policy Center,[ref 95] and the U.S. Chamber of Commerce) suggested using them in a labor shortage assessment.[ref 96] While it is understandable that commenters gravitated toward this data source to argue both for and against a method to assess labor shortage, BLS itself makes it clear that these projections cannot be used to predict labor shortages or surpluses.[ref 97] BLS explains that the EPs assume the labor market is in equilibrium, “where overall labor supply meets labor demand except for some degree of frictional unemployment.”[ref 98] The agency continues by noting that the urge to predict shortages or surpluses with the projections comes from an incorrect comparison of total employment and total labor force projections: “The total employment projection is a count of jobs and the labor force projection is a count of individuals. Users of these data should not assume that the difference between the projected increase in the labor force and the projected increase in employment implies a labor shortage or surplus.”[ref 99] For this reason, EPs were not included in the Institute for Progress’s Help Wanted Index,[ref 100] and we do not recommend using them in a DOL assessment of labor shortage.
Department of Commerce
National Income and Product Accounts (NIPAs)
The NIPAs are published by the Department of Commerce’s Bureau of Economic Analysis (BEA). The NIPAs are part of a trio of U.S. national economic accounts which also include the Industry Economic Accounts (IEAs) and the Financial Accounts of the United States. BEA uses the NIPAs to answer three questions:
- What is the output of the economy and its size, composition, and use?
- What are the sources and uses of national income?
- What are the sources of savings to provide for investment in future production?[ref 101]
As part of the NIPAs, BEA estimates the number of workers in each industry every year at the state, county, metropolitan, and micropolitan level.[ref 102] NIPA data is available back to the 1940s.
A benefit of the NIPA estimates is that they are commonly used to evaluate the condition of the U.S. economy.[ref 103] A drawback is that they focus on economic output rather than on the condition of the U.S. labor market, and the core accounts make assessments only at the national level.
Employment Multipliers
Combemale, Reamer, and Karsten suggest use of BEA’s employment multipliers.[ref 104] BEA calculates these multipliers to relate how much regional spending translates into jobs. Based on its Regional Input-Output Modeling System II (RIMS II), the bureau can estimate how spending changes output, employment, labor earnings, and ultimately labor demand.
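As a rough, worked illustration of how such a multiplier is applied (not a description of the BEA product itself): RIMS II final-demand employment multipliers are typically expressed as jobs per $1 million of final demand, so the arithmetic reduces to a single multiplication. The multiplier value and spending figure below are invented for the example.

```python
# Illustrative arithmetic only. Real RIMS II multipliers must be obtained from
# BEA for a specific region and industry; the 8.2 figure below is made up.
def jobs_from_spending(spending_dollars: float, jobs_per_million_final_demand: float) -> float:
    """Estimate employment supported by a change in final demand."""
    return (spending_dollars / 1_000_000) * jobs_per_million_final_demand

# Example: a hypothetical $50 million increase in final demand for an industry
# with a final-demand employment multiplier of 8.2 jobs per $1 million.
print(round(jobs_from_spending(50_000_000, 8.2)))  # -> 410
```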
Census Bureau
Current Population Survey (CPS)
Recommended by Combemale, Reamer, and Karsten,[ref 105] IFP,[ref 106] and the American Immigration Lawyers Association (AILA) and American Immigration Council (AIC),[ref 107] the CPS is a joint effort between the Census Bureau and BLS. It is one of the oldest and largest surveys in the United States and measures vital monthly labor force statistics.[ref 108] It measures a wide range of characteristics of the U.S. population, such as school enrollment rates, median annual earnings by field, health insurance coverage, poverty rates, populations of various minorities, educational attainment, voting registration rates, and fertility.[ref 109] To collect all this data, the Census Bureau surveys 60,000 households each month.
When attempting to evaluate labor shortage, using CPS data has some important benefits. Data is published much more frequently than other economic surveys and there are many years of historical data publicly available. It also, as stated above, collects data about many different pieces of the labor market, giving researchers a wide, detailed view of potential indicators of labor shortage.
However, several commenters pointed out shortcomings of CPS data. The AFL-CIO Department for Professional Employees noted that CPS data does not specifically capture state, regional, and national trends for the STEM educational pipeline and workforce.[ref 110] EPI states that there is not enough harmonization between CPS and other agency data sources, like OEWS, ACS, and CES.[ref 111] CPS and these other sources use slightly different SOC codes for occupational categories, and the agencies try to remedy these differences with crosswalk documents.[ref 112] Nevertheless, the crosswalks do not provide a one-to-one mapping between the different occupational categories, which creates difficulties for labor market analyses that incorporate some or all of these data sources. EPI also states in its comment that CPS occupational categories do not have clear definitional boundaries. For example, there are some catch-all categories in CPS data, such as “Computer Occupations, All Other” or “Engineers, All Other.” EPI notes that 18% of computing occupations fall under the former category and 25% of engineers fall under the latter, shares large enough to hamper the ability to properly assess labor needs in specific occupations. Additionally, the OEWS categorization does not line up with CPS’ “All Other” buckets: only about 9% of occupations fall into each of the corresponding OEWS “All Other” categories for computing and engineering.
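To make EPI's point about catch-all categories concrete, the sketch below shows one way to measure how much of a broad occupation group's employment sits in an "All Other" residual bucket. The toy numbers are chosen only to roughly echo the shares EPI cites; a real analysis would load CPS or OEWS occupation-level employment counts instead.

```python
# Hedged sketch with toy data; a real check would load CPS or OEWS
# occupation-level employment estimates rather than this hand-made table.
import pandas as pd

toy = pd.DataFrame({
    "soc_title": ["Software Developers", "Computer Occupations, All Other",
                  "Civil Engineers", "Engineers, All Other"],
    "group": ["computing", "computing", "engineering", "engineering"],
    "employment": [1_500_000, 330_000, 300_000, 100_000],
})

def residual_share(df: pd.DataFrame) -> pd.Series:
    """Share of each group's employment that falls into an 'All Other' bucket."""
    is_residual = df["soc_title"].str.contains("All Other", case=False)
    total = df.groupby("group")["employment"].sum()
    residual = df[is_residual].groupby("group")["employment"].sum()
    return (residual / total).fillna(0).rename("all_other_share")

print(residual_share(toy))  # computing ~0.18, engineering 0.25 in this toy table
```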
Business Trends and Outlook Survey (BTOS)
The Bipartisan Policy Center (BPC) recommended incorporating the BTOS into a methodology to update Schedule A.[ref 113] The BTOS was launched in 2022 and is designed “to measure business conditions on an ongoing basis.”[ref 114] It improves upon the Small Business Pulse Survey, which measured changing business conditions during major events, like hurricanes and the COVID-19 pandemic. It collects data from about 1.2 million single-location employer businesses (except for farms) every two weeks at the sector, state, and metropolitan statistical area levels.[ref 115] Employers are asked to reflect on the growth of their business over the past two weeks and estimate the business’s performance over the next six months.
BTOS data could be useful for an analysis of labor shortage because it takes into account employers’ opinions about the success of their businesses and about near- and medium-term challenges. However, the data is all self-reported, and the survey is entirely voluntary. It is also a very new survey, having been launched just two years ago, so there is not much historical data to analyze. Lastly, all of the questions asked in the survey are qualitative, which makes them harder to integrate into a methodology that relies on quantitative data.
American Community Survey (ACS)
The ACS publishes annually and covers a wide variety of aspects of the U.S. population. Some of the topics ACS covers include computer and internet use, citizenship status, educational attainment, fertility, undergraduate degree field, school enrollment, income, poverty status, employment status, industry, occupation, race, age, and sex, among others.[ref 116] The Census Bureau contacts over 3.5 million households in the United States to gather data.[ref 117] In the 20th century, the Census was divided into a short form and a long form, which was only given to a subset of the population. After 2000, the long form of the Census became the annual ACS.[ref 118]
Use of the ACS was recommended by several commenters to the RFI, such as IFP,[ref 119] CIS,[ref 120] Combemale, Reamer, and Karsten,[ref 121] and AILA and AIC.[ref 122] ACS data is highly detailed and measures many of the economic and educational indicators that would be important in an analysis of labor shortage. Its annual publication and almost 25 years of data also give researchers the opportunity to reliably assess economic conditions over many years and through several economic crises, like the COVID-19 pandemic and the Great Recession. While it has a larger sample size and more detailed information than many other sources, it is not updated as frequently as sources like the CPS, which prevents it from serving as an up-to-the-month or even up-to-the-year snapshot.
Post-Secondary Employment Outcomes (PSEO)
Recommended by Combemale, Reamer, and Karsten,[ref 123] PSEO data is generated via a partnership between Census Bureau researchers, universities, university systems, state departments of education, and state labor market information offices.[ref 124] It looks at the employment outcomes and earnings of university graduates by degree level, area of study, institution of higher education, and state. Data is generated by pairing university transcripts with a national database of jobs.[ref 125]
This project captures information about a very important aspect of the labor market: how successful graduates are at getting jobs in their fields of study and if their earnings are growing. Growth of wages, especially of recent graduates, is important to many of the RFI commenters, including EPI,[ref 126] the Center for Immigration Studies,[ref 127] AFL-CIO[ref 128] and their Department for Professional Employees,[ref 129] and IFPTE.[ref 130]
However, the major drawback to using PSEO data is that it does not cover the whole United States or even every university in the states covered by the survey. In fact, it only includes 28 states and coverage within states greatly varies. Rhode Island has the least coverage, with only 4% of the state’s university graduates included. Virginia has the greatest coverage at 87% of graduates.[ref 131] Additionally, PSEO data only includes graduates who have earned “at least the annual equivalent of full-time work at the prevailing federal minimum wage” and have worked “three or more quarters in a calendar year.”[ref 132]
National Science Foundation (NSF)
National Survey of College Graduates (NSCG)
NSF’s National Center for Science and Engineering Statistics (NCSES) partners with the Census Bureau to carry out this survey. The RAND Corporation recommended its use in labor shortage analyses.[ref 133] It is published every other year, with the next one being released in January 2025.[ref 134] The survey collects data from college graduates living in the United States during the survey week who have at least a bachelor’s degree and are younger than 76.[ref 135] It began collecting data in 1993 and, as of 2021, sampled about 164,000 people. The NSCG looks at the demographics of college graduates, their educational history, employment status, degree field, and occupation.
The benefits of using NSCG data for an analysis of labor shortage include its particular focus on STEM occupations and its detailed data about many factors related to the STEM talent pipeline. NSF compiles tables of graduates by major, employment status, demographics (including citizenship status), earnings, and even job satisfaction. The job satisfaction data could contribute to a better understanding of STEM workers’ experiences in the labor force. As several commenters suggest,[ref 136] if workers are not satisfied with their jobs (low pay, poor working conditions, limited career mobility, etc.) and that sector is experiencing unmet labor demand, that could indicate that the sector could do more to attract workers before turning to international hiring.
There are a few drawbacks to using this survey. It publishes data only every other year, which can make it difficult to accurately assess labor shortage on a more frequent basis. It also does not survey individuals who have some college education or professional certifications but no bachelor’s degree, a group known as the skilled technical workforce (STW). Occupational categories are also less granular than in other surveys, with broad categories such as “biological, agricultural, and other life scientists,” “computer and mathematical scientists,” “management and administration fields,” and “health.”
National Training, Education, and Workforce Survey (NTEWS)
This survey, launched in 2022 and recommended by the RAND Corporation,[ref 137] collects data on people 16 to 75 years old and focuses especially on the skilled technical workforce (workers with some college, or professional certifications).[ref 138] It examines work experience programs, types of credentials, employment characteristics, demographic characteristics, and education enrollment and attainment. NSF uses the data to evaluate the relationship between workers’ credentials and their employment outcomes. It will be published every other year and is meant to supplement the NSCG and the Survey of Doctorate Recipients.[ref 139] The sample size is about 43,200 people. NSF expects to publish the first tranche of data in December 2024.
One major benefit of this survey is that it covers STW occupations exclusively. These jobs play a huge role in the success of the U.S. economy. However, the survey has not published its first set of data yet, so there could be some unforeseen operational, thematic, and coverage hurdles to iron out in the coming years before it is useful for labor shortage analyses.
Survey of Earned Doctorates (SED)
The SED is recommended by Combemale, Reamer, and Karsten for future labor shortage analyses.[ref 140] It is an annual census of all recipients of research doctorates from U.S. institutions of higher education. The SED collects information on recipients’ educational history, graduate funding sources, educational debts, post-graduation plans, and demographic data, including citizenship status.[ref 141] NCSES, in partnership with the National Institutes of Health (NIH), the Department of Education (ED), and the National Endowment for the Humanities (NEH), has conducted this survey since 1957. The sample size each year is about 50,000 people.
One of the benefits of the SED is that it has many decades of historical data. It also surveys all recipients of research doctorates in the United States, not just a subset of people like many of the other surveys detailed in this section. It also covers a population that is presumably entering occupations that require extensive training and long preparation times. If this data flags indications of labor shortage, it would be highly useful for a methodology to update Schedule A. Occupations with very long lead times may be the best suited for hiring internationally to address excess labor demand.
A drawback of this survey is that because it looks only at doctorate recipients, it evaluates only a very small subset of the U.S. workforce.
Survey of Doctorate Recipients (SDR)
Recommended by Combemale, Reamer, and Karsten,[ref 142] the SDR provides specific data on characteristics of science, engineering, and health research doctorate recipients from U.S. institutions who are under the age of 76.[ref 143] NSF partners with NIH to collect information such as recipients’ educational history, employment status, degree field, occupation, and demographic information. This survey is published every other year and has been conducted since 1973.[ref 144] For its last iteration, the SDR’s sample size was 125,938 people.
The pros and cons of this survey are similar to those of the SED. One major difference is that the SED collects information on a broader range of degree fields, both STEM and non-STEM, than the SDR.
Science and Engineering Indicators report
The Science and Engineering Indicators were recommended by the National Science Board (NSB).[ref 145] The NSB serves as part of the leadership of NSF and “identifies issues that are critical to NSF’s future, approves NSF’s strategic budget directions and the annual budget submission to the Office of Management and Budget, and approves new major programs and awards.” It also acts “as an independent body of advisors to both the President and the Congress on policy matters related to science and engineering and education in science and engineering.”[ref 146] The NSB compiles detailed reports as part of the Science and Engineering Indicators about the scope and vitality of STEM fields in the United States.[ref 147] The reports include:
- Elementary and Secondary STEM Education;
- Higher Education in Science and Engineering;
- The STEM Labor Force: Scientists, Engineers, and Skilled Technical Workers;
- Research and Development: U.S. Trends and International Comparisons;
- Publications Output: U.S. and International Trends;
- Academic Research and Development;
- Invention, Knowledge Transfer, and Innovation;
- Production and Trade of Knowledge- and Technology-Intensive Industries; and
- Science and Technology: Public Perceptions, Awareness, and Information Sources.
To compile the reports, NSB integrates information collected in surveys conducted by national statistical agencies and by other countries.[ref 148] Some of the data are collected by companies, governments, and private organizations as part of their internal activities.
The Indicators reports contain extensive detail about the STEM educational pipeline and workforce, and include information that would not otherwise be available to researchers. They also include international data, which is rare among U.S. statistical agencies’ surveys.
However, these reports do not forecast future outcomes in STEM and do not model the dynamics of science and engineering sectors.[ref 149] The reports are also only published every other year.
Department of Education
Teacher Shortage Area (TSA) Reports
The Office of Postsecondary Education collects data from state representatives to develop the TSA reports. ED intends for these reports to be used by incoming education workers to identify where school districts may be hiring new faculty, administrators, and other educators across the country.[ref 150] The reports are published each school year and include a wide range of occupational focus areas, such as core subjects, driver’s education, world languages, English as a second language, and special education. This data source was recommended by the Chicago Public Schools.[ref 151]
One benefit is that there is TSA historical data going back to 1990 for every state and territory in the United States. However, this data does not track the labor needs of institutions of higher education.
State-based data sources
State wage records
State Unemployment Insurance offices collect wage information to aid in providing unemployment benefits to workers. This data, collected quarterly by BLS, is a rich source of information about how much people are paid and who employs them within each participating state.[ref 152] There are currently 30 states participating in the Wage Records Program.[ref 153] While research has been conducted with this data, it is not publicly available on BLS’ website.[ref 154] This source was recommended by Combemale, Reamer, and Karsten.[ref 155]
Workforce board in-demand occupation lists
As recommended by Combemale, Reamer, and Karsten, “local workforce boards compile lists of occupations that meet in-demand criteria based on employment and wage growth” in order to receive federal workforce funding.[ref 156] These lists are supplemented by knowledge of local labor markets and sometimes by nominations from employers. These data could be compiled by DOL and displayed online as a map of which occupations are considered in demand in each state, providing valuable information to labor researchers. However, because the lists are not currently aggregated, significant preparation would be needed before this analysis is possible.
Private data sources
World Economic Forum (WEF) Future of Jobs Report
This report was launched by WEF in 2016, and it “explores how jobs and skills will evolve” on a global scale. It is “based on unique survey data that details the expectations of a cross-section of the world’s largest employers related to how socioeconomic and technology trends will shape the workplace of the future.”[ref 157] The report is published roughly every other year, with the most recent edition published in April 2023. Unlike many other data sources described above, this source does include actual projections of job creation, displacement, and specific disruptions to skills in the near future. As a global assessment, it can also be a valuable supplement to an analysis of labor shortage in the United States and of how that shortage could be affected by the international economy. However, the information is not detailed enough to be incorporated into an analytical, quantitative process for assessing labor shortage. This source was recommended by BPC.[ref 158]
Jobs and Employment Data Exchange (JEDx)
Recommended by Combemale, Reamer, and Karsten,[ref 159] JEDx aims to develop “a public-private approach for collecting and using standards-based jobs and employment data.”[ref 160] It is organized by the U.S. Chamber of Commerce Foundation and the T3 Innovation Network. While the goals of JEDx are promising for future labor shortage analyses, the initiative is still developing a roadmap and has not released any data as of the writing of this report.
Improvements to existing federal data sources
In addition to suggesting specific data sources, commenters had recommendations for how existing federal data sources could be improved for labor shortage analyses. If DOL wishes to use existing federal data sources, the agency can take some of the following steps recommended by various commenters to improve data collection and granularity.
- Expand JOLTS to collect monthly data at the occupational level[ref 161]
- Partner with the Census Bureau to track state, regional, and national trends in how well students are finding jobs that align with their education and skills and the number of students pursuing different fields to gauge potential future supply of workers[ref 162]
- Expand BLS’ partnerships with other federal and state agencies, such as state workforce boards and the Census Bureau, to add questions to the data they already collect, including occupational data in unemployment insurance wage records[ref 163] and occupational questions in Census surveys[ref 164]
The agency could also go further by developing new sources of data to improve understanding of labor needs. Some suggestions include:
- Creating surveys of state Medicaid agencies, to gauge demand for healthcare workers, and of state workforce development agencies, which are in charge of local workforce training programs.[ref 165]
- Creating surveys that ask companies and workers’ rights groups about their internal data, including worker turnover, attrition, and retention issues, investments in workforce training and career path development for current workers, the time it takes to hire a new worker and how many days an open job remains unfilled, and worker benefits such as signing bonuses.[ref 166]
Defining STEM
One of the first questions that DOL asks in the RFI is how it can define STEM and which occupations should be included under the STEM umbrella. The inclusion of this question likely originates in the text of the AI Executive Order, which asked DOL to examine “AI and other STEM-related occupations… across the economy, for which there is an insufficient number of ready, willing, able, and qualified United States workers.”[ref 167]
Of the comments that answered this question, a majority recommended that DOL define STEM broadly, using straightforward assessments to determine which occupations are STEM. The recommended assessments would define an occupation as STEM if:
- It contributes to domestic advancement of critical and emerging technologies;
- It uses “significant” levels of technical and science and engineering knowledge and does not require a bachelor’s degree;
- It is integral to scientific research and development;
- The skills needed to do the job align with those identified by broad surveys of job openings and professional profiles, such as those found on LinkedIn; or
- It requires a degree field identified by the STEM Optional Practical Training (OPT) Designated Degree Program list.[ref 168]
Other commenters either recommended dramatically narrowing what is considered STEM or made very specific recommendations. For example, one commenter urged DOL to include all occupations categorized under BLS SOC codes beginning with 29, which are “Healthcare Practitioners and Technical Occupations.”[ref 169]
Conclusion
This paper aims to provide a sensible analysis of all of the options available to DOL as they consider how to modernize Schedule A. We believe it clearly demonstrates that many high-quality options exist. Stakeholders from across the country have provided recommendations for how every aspect of Schedule A could be improved, in addition to recommendations for other parts of the employment-based green card process at DOL. Perhaps equally importantly, several commenters have suggested highly actionable guard rails that DOL could implement to reduce the possibility of fraud and abuse of Schedule A-eligible workers. With this treasure trove of information, we have the best opportunity in decades to develop a transparent, data-driven process for understanding where labor gaps exist in the U.S. economy. Such a process would help us use federal, state, and local resources not only to supplement our workforce with international talent but also to strengthen domestic training and reskilling pipelines, ensuring that good jobs are accessible to as many Americans as possible. We encourage DOL to carefully examine the feedback it has received as part of this RFI process, to issue a Notice of Proposed Rulemaking outlining how it will modernize Schedule A, and to update the list regularly in the future.
Acknowledgments
We would like to thank Amy Nice, Andrew Moriarity, Barbara Leen, Cecilia Esterline, Matthew La Corte, Greg Wright, Jack Malde, Sharvari Dalal-Dheini, Steven Hubbard, and Leslie Dellon for their support and sage advice in preparation for (and during the drafting of) this report.
Appendix A: Acronyms and their definitions
ABC = Associated Builders and Contractors
ACS = American Community Survey
AFL-CIO = American Federation of Labor-Congress of Industrial Organizations
AI = Artificial Intelligence
AIC = American Immigration Council
AILA = American Immigration Lawyers Association
ASU = Arizona State University
BEA = Bureau of Economic Analysis
BLS = Bureau of Labor Statistics
BPC = Bipartisan Policy Center
BTOS = Business Trends and Outlook Survey
CES = Current Employment Statistics
CIS = Center for Immigration Studies
CPA = Certified Public Accountant
CPI = Consumer Price Index
CPS = Current Population Survey
DHS = Department of Homeland Security
DOL = Department of Labor
ED = Department of Education
EO = Executive Order
EP = Employment Projections
EPI = Economic Policy Institute
ETA = Employment and Training Administration
GSS = Survey of Graduate Students and Postdoctorates in Science and Engineering
HHS = Department of Health and Human Services
HSI = Homeland Security Investigations
IEA = Industry Economic Accounts
IFPTE = International Federation of Professional and Technical Engineers
IPEDS = Integrated Postsecondary Education Data System
IfSPP = Institute for Sound Public Policy
I-O = Input-Output Accounts
JEDx = Jobs and Employment Data Exchange
JOLTS = Job Openings and Labor Turnover Survey
LCA = Labor Condition Application
LPR = Legal Permanent Resident
MSA = Metropolitan Statistical Area
NAICS = North American Industry Classification System
NCES = National Center for Education Statistics
NCSES = National Center for Science and Engineering Statistics
NELA = National Employment Lawyers Association
NIPA = National Income and Product Account
NIWR = National Institute for Workers’ Rights
NSB = National Science Board
NSCG = National Survey of College Graduates
NSF = National Science Foundation
NTEWS = National Training, Education, and Workforce Survey
OEWS = Occupational Employment and Wage Statistics
OFLC = Office of Foreign Labor Certification
OPT = Optional Practical Training
PERM = Permanent Labor Certification
PSEO = Post-Secondary Employment Outcomes
PWD = Prevailing Wage Determination
QCEW = Quarterly Census of Employment and Wages
QED-C = Quantum Economic Development Consortium
RFI = Request for Information
RIMS II = Regional Input-Output Modeling System II
SDR = Survey of Doctorate Recipients
SED = Survey of Earned Doctorates
SEVIS = Student and Exchange Visitor Information System
SEVP = Student and Exchange Visitor Program
SOC = Standard Occupational Classification
STEM = Science, Technology, Engineering, and Mathematics
STW = Skilled Technical Workforce
TSA = Teacher Shortage Area
UI = Unemployment Insurance
WEF = World Economic Forum
WIAC = Workforce Information Advisory Council
WIOA = Workforce Innovation and Opportunity Act
Appendix B: Questions asked in DOL’s RFI on the modernization of Schedule A
DOL requested comments concerning generally:
- “Whether any STEM occupations should be added to Schedule A, and why; and
- Defining and determining which occupations should be considered as falling under the umbrella of STEM, and why.”
DOL also requested specific information regarding the following questions:
- “Besides the OEWS, ACS, and CPS, what other appropriate sources of data are available that can be used to determine or forecast potential labor shortages for STEM occupations by occupation and geographic area?
- What methods are available that can be used alone, or in conjunction with other methods, to measure presence and severity of labor shortages for STEM occupations by occupation and geographic area?
- How could the Department establish a reliable, objective, and transparent methodology for identifying STEM occupations with significant shortages of workers that should be added to Schedule A?
- Should the STEM occupations potentially added to Schedule A be limited to those OEWS occupations used in most of the recent BLS publications, or should the STEM occupations be expanded to include additional occupations that cover STW occupations?
- Beyond the parameters discussed for STW occupations, should the Department expand Schedule A to include other non-STEM occupations? If so, what should the Department consider to establish a reliable, objective, and transparent methodology for identifying non-STEM occupations with a significant shortage of workers that should be added to or removed from Schedule A?”[ref 170]
Appendix C: Suggested economic indicators by category
Price data
Economic indicator | Suggested by | Possible data sources | Data availability |
---|---|---|---|
Wage increases at a rate greater than inflation | AFL-CIO Department for Professional Employees; QED-C; AILA & AIC; EPI; CIS; Combemale, Reamer, & Karsten; AFL-CIO | ACS, CPS | Public
Current salaries of active workers to determine level of industry competition | Engine Advocacy | ACS, CPS | Public |
Consumer Price Index | ABC | CPI | Public |
Percentage change in the median wage over one year | IFP | ACS, CPS | Public |
Percentage change in the median wage over three years | IFP | ACS, CPS | Public |
Percentage change in median paid hours worked over three years | IFP | ACS, CPS | Public |
Income premium | IFP | ACS, CPS | Public |
Non-listed internal positions where companies are raising compensation in excess of inflation | Anonymous | Internal company data | Not readily available |
Employment data
Economic indicator | Suggested by | Possible data sources | Data availability |
---|---|---|---|
Occupational unemployment rate | Engine Advocacy; IFP | CPS | Public
Labor force non-participation | IFP | ACS | Public
Three year lagged unemployment rate | IFP | ACS | Public
Percentage change in employment over one year | TechNet; Compete America Coalition; U.S. Chamber of Commerce; Ampere Computing; Niskanen Center; ABC; BPC; Combemale, Reamer, & Karsten | ACS | Public
Forecasted occupation employment growth | | BLS | Public
Talent pipeline data
Economic indicator | Suggested by | Possible data sources | Data availability |
---|---|---|---|
PhD enrollment for STEM | Anonymous | GSS | Public |
Number of individuals sitting for required exam or certification | Anonymous; American Institute of CPAs | NTEWS | Public
Number of high school students aware of relevant field’s careers | QED-C | N/A | Not readily available
Percentage of graduates with relevant degree working in their field | CIS; AFL-CIO | NSCG | Public
Investment in training | AFL-CIO Department for Professional Employees; IFPTE; AFL-CIO | N/A | Not readily available
Required training times for newcomers | Niskanen Center | N/A | Not readily available
Students across educational levels enrolled in degree earning programs in relevant fields of study, including the number and percentage of students who are citizens and LPRs | AILA & AIC; Compete America Coalition | NSCG, IPEDS | Public
Authors of recent patents and publications | ASU | Process idea: Akcigit, Goldschlag (2022) Measuring the Characteristics and Employment Dynamics of U.S. Inventors | Not readily available
Qualified individuals with or without an advanced degree | ASU | N/A | Not readily available
Workers in existing career roles | ABC; ASU | N/A | Not readily available
Education and skills needed for relevant career paths | ASU | O*NET | Public
Workforce age | AILA & AIC; Ampere Computing | CPS | Public
Number of qualified foreign applicants compared to qualified U.S. applicants | QED-C; American Institute of CPAs | Internal company data | Not readily available
Companies’ technical needs to ensure educational providers are addressing correct gaps | ASU | Internal company data | Not readily available |
Hiring, layoffs, and turnover data
Economic indicator | Suggested by | Possible data sources | Data availability |
---|---|---|---|
Retention rate | AFL-CIO Department for Professional Employees; Harvard Business School Managing the Future of Work Project; Economic Policy Institute; AFL-CIO | N/A | Not readily available
Rate of layoffs and precarious employment | AFL-CIO | JOLTS | Public
Number of job postings | American Institute of CPAs; Engine Advocacy; IFP | JOLTS, Lightcast | Public; Private
Average time to fill positions | QED-C; EPI; American Institute of CPAs | N/A | Not readily available
Number of workers hired versus the number of open positions in a given time period | QED-C | JOLTS | Public
Job-to-job transitions | ABC; EPI; IFP | N/A | Not readily available
Ratio of job openings to employment (or unemployment or labor force) | TechNet; AILA & AIC | N/A | Not readily available
Trends identified through on-campus recruiting efforts, internship programs, research collaborations, and other engagements with university partners | Compete America Coalition; Ampere Computing | N/A | Not readily available
Positions that have been laid off or terminated | Anonymous | Internal company data | Not readily available
List of quit rates from employees | Anonymous | Internal company data | Not readily available
If the company is international, data from hiring in other countries | Anonymous | Internal company data | Not readily available
Number of qualified applicants responding to a job posting | QED-C | Internal company data | Not readily available
Rate at which applicants decline job offers | QED-C; EPI | Internal company data | Not readily available
Migration data
Economic indicator | Suggested by | Possible data sources | Data availability |
---|---|---|---|
Occupational categories most often pursued through current PERM process | Ampere Computing; Americans for Prosperity, Cato, & Angelo Paparelli | ETA Performance Data | Public
Projections of future international student trends based on State Dept’s student visa application data | Presidents’ Alliance on Higher Education and Immigration; U.S. Chamber of Commerce | NSCG, SEVIS by the Numbers report | Public
Migration patterns | AILA & AIC; Ampere Computing | N/A | Not readily available
Organized labor data
Economic indicator | Suggested by | Possible data sources | Data availability |
---|---|---|---|
Bargaining trends and management attitudes toward unionization | AFL-CIO | N/A | Not readily available |
Workforce diversity and intentional strategies to recruit, train, and hire women/BIPOC | AFL-CIO Department for Professional Employees | N/A | Not readily available |
Consultation with unions that have members working in relevant occupations and industries | AFL-CIO Department for Professional Employees | N/A | Not readily available |
High-Skilled Immigration Resources
Editor’s note
This resource page was established on September 25th, 2024 and last updated on September 26th, 2024.
This is a repository of resource materials about underutilized immigration pathways for high-skilled STEM professionals and the government policies governing such pathways. IFP’s high-skilled immigration team produces policy articles and white papers that are available on the team’s home page. But the team also produces other materials — letters, one-pagers, explanatory materials, and more — that we want to make publicly available.
If you’re curious about American high-skilled immigration policy, you may find some of these resources handy. Many of these resources may also be useful for international STEM experts exploring the best pathway for themselves, or for their employers, lawyers, or university.
Below, you will find:
- Correspondence with the government — Letters for government consideration concerning policy matters related to international STEM talent.
- Informational and instructional materials — Documents about the O-1A extraordinary ability visa classification, the J-1 STEM Initiative for researchers at companies, and considerations for higher education, science organizations, local consortia, and others interested in the role of high-skilled immigrants in innovation.
Please reach out to IFP’s high-skilled immigration team if you have any questions.
Correspondence with the government
- June 20, 2024. Joint letter to USCIS to update O-1A Policy Manual guidance
- June 3, 2024. Joint letter to DHS to nominate fields to the DHS STEM List
- May 13, 2024. IFP’s submission to DOL’s Request for Information regarding modernizing the Schedule A List
- December 22, 2023. IFP’s comment to USCIS’s proposed rulemaking on modernizing H-1B requirements
- December 21, 2023. Joint letter urging USCIS to avoid a specialty occupation definition that would significantly narrow eligibility for H-1Bs
- June 28, 2023. Joint letter requesting Acting Secretary of Labor Julie Su update Schedule A
- November 17, 2023. Joint letter to DOS and DHS urging extension of interview waiver authorities
- February 2, 2022. Joint letter to DHS to suggest procedural changes to international entrepreneur parole
Informational and instructional materials on high-skilled immigration
- October 2, 2024. How J-1 researchers can support U.S. companies in the STEM ecosystem. This guide introduces the J-1 STEM Research Initiative, which permits foreign-born STEM professionals engaged in R&D to reside in the United States for up to five years, and describes how these researchers can be placed at U.S. companies to conduct research activities.
- September 3, 2024. Guide to the O-1A for STEM Researchers In the National Security Innovation Base. This guide is intended to explain when researchers across the national security innovation base can qualify for the O-1A extraordinary ability visa category. The guide also lays out what the O-1A category is, the eligibility criteria, examples of achievements that would qualify, examples of what goes into successful petitions, and more.
- July 24, 2024. Biden Administration’s actions on international STEM talent. This table summarizes select changes made by the Biden administration to policies and regulations that benefit international advanced STEM degree holders in the United States. These changes include explaining how STEM PhDs can qualify for an O-1A visa, explaining how advanced STEM degree professionals could take advantage of the National Interest Waiver when applying for an employment-based green card, modernizing and enhancing the integrity of the H-1B program, updating the Designated Degree Program List for STEM Optional Practical Training (OPT) extensions, and more.
- July 12, 2024. Explaining New Guidance on National Interest Waivers. This piece explains the new 2022 guidance concerning the NIW for immigrants holding advanced STEM degrees and lays out an updated approach to NIW petitions that immigration lawyers have found successful.
- April 22, 2024. How Institutions of Higher Education Can Help International Students and Scholars Navigate the Immigration System. This writeup explains the legal framework for STEM immigration, and provides three practical steps that institutions of higher education can take that make concrete improvements in documenting recognition and accomplishments for early career scientists.
- March 25, 2024. Opportunity for Universities to Accelerate Economic Development with the J-1 STEM Research Initiative. This document explains how universities can develop a J-1 demonstration project that places J-1 researchers at local companies through the J-1 STEM Research Initiative, which is especially useful for university-based spin-out companies.
- December 12, 2023. Using Existing International STEM Talent Policies to Advance U.S. Industrial Policy in Technology and Innovation Hubs. This document explains how new tech and innovation hubs can encourage their members — including research organizations, businesses, and universities — to explore opportunities through STEM Optional Practical Training (OPT), O-1A visas, and J-1 STEM researcher visas.
- November 6, 2023. The Role of Professional Science Societies and Scholarly Publications in Obtaining Immigration Benefits. How scholarly publications are judged by U.S. Citizenship and Immigration Services (USCIS) adjudicators, including three ways that professional science societies can help their international members strengthen their portfolios before applying for an O-1A visa.
Expelling Excellence: Exchange Visitor Restrictions on High-Skill Migrants in the United States
JEL No. F22, J24, O15, O33. The views expressed here are those of the authors alone and do not represent any organization. Clemens acknowledges support from the Peterson Institute for International Economics and from Open Philanthropy.
Abstract
We examine a little-known restriction on high-skill immigration to the United States, the Exchange Visitor Skills List. This List mandates that to become eligible for long-term status in the U.S., certain high-skill visitors must reside in their home countries for two years after participation in the Exchange Visitor Program on a J-1 visa. While well-intended to prevent draining developing nations of needed skills, today the Skills List in practice is outdated and misdirected. It is outdated because it fails to reflect modern economic research on the complex effects of skilled migration on overseas development. It is misdirected because, as we show, the stringency of the List bears an erratic and even counterproductive relationship to the development level of the targeted countries. The List is also opaque: there have been no public estimates of exactly how many high-skill visitors are subject to the list. We provide the first such estimates. Over the last decade, an average of between 35,000 and 44,000 high-skill visitors per year have been subject to the home residency requirement via the Skills List. Despite the stated purpose of the List, these restrictions fall more heavily on relatively advanced economies than on the poorest countries. We describe how a proposed revision to the List can address all three of these concerns, balancing the national interest with evidence-based support for overseas development.
Introduction
The Exchange Visitor (J-1) visa promotes mutual understanding and international cooperation by allowing the exchange of ideas in people-to-people programs across the United States in 15 categories of activity, ranging widely from camp counselors to professors. Many J-1 programs attract individuals who either are or will later be high-skill professional workers. But a half-century-old set of restrictions on these visitors, the Exchange Visitor Skills List, obliges many of these skilled visitors to be physically present in their home country for at least two years after their J-1 stay and before considering any opportunities to relocate to work and live long term in the U.S. Here we provide new, quantitative estimates of how the Exchange Visitor Skills List affects these individuals.
The J-1 visa enables, among others, students, skilled professionals, and experts, most commonly early in their career,[ref 1] to temporarily enter the United States to participate in collaborative research, obtain practical experience, and receive training in their area of expertise. This visa is commonly used by researchers, educators, healthcare professionals, and other individuals developing specialized expertise, allowing them to stay for approximately 1–5 years. The J-1 visa is for nonimmigrants, who are provided status only through the end date of their particular exchange visitor program. Moreover, according to a law enacted in 1970, once the J-1 program ends, some of these exchange visitors are mandated to return to their home country for two years before they are eligible to apply for permanent residency in the U.S. or other long-term work visas.
This “home residency requirement” is generally enforced if the J-1 visa holder’s country is designated by the U.S. State Department as “clearly requiring the services of persons engaged in the field of specialized knowledge or skill in which the alien was engaged.”[ref 2] This designation is made in the Exchange Visitor Skills List, which specifies the pairs of countries and fields that require migrants to return home.
Here we present evidence that the Skills List is outdated, misdirected, and opaque. It is outdated because it is built with a rationale and method that are half a century old and reflect ideas about high-skill migration that are no longer current. It is misdirected because it erratically targets countries independently of their level of development, and targets fields independently of whether they are “required” in the home country by any clear criterion. And it is opaque because there are no prior published estimates of the number of high-skill migrants affected by the List; we provide the first estimates. We address these shortcomings by describing proposed improvements to the way the List is built.
A policy process is currently underway to revisit the Skills List. In President Biden’s October 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the Secretary of State was instructed to consider establishing “new criteria to designate countries and skills on the Department of State’s Exchange Visitor Skills List” and to “consider publishing updates” to the current Skills List, last updated in 2009. Following this instruction, in March of 2024, the State Department submitted to the Office of Information and Regulatory Affairs a final rule on the Exchange Visitor Skills List for review.
As we describe below, the literature on skill flow and development has grown tremendously in the last thirty years. Economists and other social scientists now have a much richer understanding about the complex relationship between economic development and the movement of skilled individuals than when the Skills List was first created in the 1970s. We now know that global networks of skilled migrants are a crucial conduit for ideas, investment, trade, and technological advances for migrants’ countries of origin. We now know that the opportunity to use skills abroad has been a major engine of human capital formation across the developing world. In other words, we have uncovered a variety of ways that barriers to skilled migration can harm development at home.[ref 3]
Unfortunately, these advances in our knowledge have never been adequately reflected in published Skills Lists, despite periodic revisions. The first Skills List was published in 1972. New lists have been comprehensively revised only three times — in 1984, 1997, and 2009 — with various smaller revisions published in intervening years. While the State Department doesn’t share how the Skills List is determined, other than that it is done in “consultation with foreign governments and overseas posts,” it seems that prior updates have reflected an excessive and simplistic focus on “brain drain” that — though well-intended and thoughtful — is no longer supported by evidence.
Background on the Skills List
The development effects of skilled migration have been a key concern for several decades. The Skills List was first required by a 1970 amendment to the Immigration and Nationality Act (INA) that sought to limit the scope of the two-year home residency requirement by narrowing who it applied to and expanding the ability for migrants to get waivers.[ref 4] After section 212(e) of the INA was amended, the Skills List would be one of the remaining ways to subject exchange visitors to the requirement that they return home for two years.
The new law provided that an exchange visitor would remain subject to the two-year home residency requirement if she was from a country which the State Department had “designated as clearly requiring the services of persons engaged in the field of specialized knowledge or skill in which the alien was engaged.”[ref 5] In congressional debate about the 1970 law, Rep. Michael Feighan explained that the new idea for a Skills List was intended to identify “persons from developing countries clearly requiring the aliens[’] skills.”[ref 6] As Rep. Peter Rodino explained, “it is not reasonable to force a person to return home to an atmosphere where he cannot utilize his abilities to the fullest extent,” except when letting them stay in the United States is “not in the interest of his home country.”[ref 7]
This thinking reflects the research literature of its time. Leading up to the 1970 Act, economists had developed the concept of “human capital,” and began thinking through how it was affected by migration. In 1966, Grubel and Scott argued that “the transfer of human capital occurring when highly skilled people emigrate between countries always reduces the economic and military power of the migrant’s native country,” though they maintained the effect was probably small in the long-run because replacements can be trained.[ref 8] In 1968, Aitken published a reply pointing out an error in Grubel and Scott’s analysis (namely, that they were considering emigration of one marginal worker at a time, not emigration of large numbers of workers at once) and concluded that skilled emigration will significantly reduce income in developing countries even in the long-run. Aitken also argued that these negative effects may be even larger when the effects of economies of scale (and other positive externalities) and the opportunity cost associated with training replacements are taken into account.[ref 9] Just after enactment of the 1970 law, in an influential paper from 1974, Bhagwati and Hamada furthered this consensus with a model that suggests another cost of “brain drain” would be in unemployment, as workers overinvest in skills as a ticket to leave.[ref 10] Bhagwati recommended a tax on skilled migrants to offset what he saw as the harm they necessarily inflict on low-income countries by their decision to depart.
In short, the Skills List emerged against a backdrop of leading economists in basic agreement that the emigration of skilled workers would tend to reduce the per-capita income of people in developing countries. They disagreed among themselves about the magnitude of the negative effect, the effect it had on total social welfare (i.e., whether the benefits to the migrant might outweigh losses to their home country), and the correct policy response. Nevertheless, the consensus was that people who were left behind would be negatively affected. Regulations on skilled migration were fundamentally viewed as trading off migrants’ individual freedoms against the ostensible social harms of migration. But that position is no longer supported by mainstream economic research.
The outdated rationale for the Skills List
Since the late 1990s, there has been a sea change in our understanding of the effects that skill flows have on developing countries. This evolution has been called the “new economics of the brain drain.”[ref 11] The fundamental insight of this literature is that origin countries can benefit from international flows of skilled workers, including by their permanent emigration. This is because international flows of technology, entrepreneurship, trade, and investment typically flow through networks of people, networks that depend on skilled migration, and because the prospect of emigrating induces more people to invest in acquiring skills. This has led leading development economists to speak of “brain gain” rather than “brain drain.”
In 1997, two theoretical papers were published that made an important contribution to analyzing how emigration affects development: they model how living standards are affected by emigration if human capital can have positive economic spillovers.[ref 12] The result is that when more people choose to invest in acquiring skills, this can increase growth and living standards in the source country. In the first decade of the 21st century these debates finally began receiving much-needed empirical investigation, confirming that skilled emigration often increases skills in a country of origin in the real world.[ref 13] The result has been an explosion of both theoretical and empirical research, with hundreds of articles written on the subject in the second half of that decade — twice as many as in the preceding 15 years.[ref 14]
Perhaps the strongest evidence from this empirical work has been quasi-experimental papers showing that in practice, skilled emigration has caused the formation of greater skill stocks in numerous developing countries — even net of departures. For example, Batista, Lacuesta, and Vicente find that increased migration opportunities in Cape Verde would lead to significant human capital gains.[ref 15] Chand and Clemens recently found that a surge in skilled emigration from Fiji caused enough additional skill formation there to fully offset the skills lost to departure.[ref 16] In another new and ground-breaking study, Abarcar and Theoharides find that changes in U.S. demand for nurses caused nine more nurses to be licensed in the Philippines for every one that came to the United States.[ref 17] The very possibility to emigrate raises workers’ return to investing in skill, causing them to invest in higher skill — even for those who do not end up leaving.
In addition to the “new economics of the brain drain,” there are additional mechanisms economists have identified by which skilled emigration can improve development prospects, including:
- Trade networks: Migration builds networks that create opportunities for trade, investment, technological diffusion, and other phenomena that can benefit source countries. Dany Bahar and Hillel Rapoport, two of the world’s leading economists studying migration and development, have shown that skilled migrants are a crucial catalyst for transferring modern technology to developing nations and sparking economic growth there. Countries with larger stocks of skilled emigrants abroad are much more likely to start producing and exporting products that are common in the migrant-destination countries but that the origin countries have never produced and exported before.[ref 18] In other words, the ideas that spark economic growth don’t just travel through the ether: they travel through networks. Those networks are built by skilled emigration.
- Entrepreneurship: In granular case studies, sociologist AnnaLee Saxenian has documented how the high-tech export industries that have been so crucial to economic development in India and Taiwan got their start through global networks of highly skilled emigrants from those countries. In other words, the jobs that those industries created in the home countries were made possible by skilled emigration from those countries.[ref 19]
- Capital flows: Remittances to the developing world are a major source of finance for development. Back in the 1970s, official foreign aid was several times larger than remittances. Today it is the reverse: migrants’ remittances are roughly triple the size of all official foreign aid combined. And highly skilled migrants remit more than less skilled migrants do.[ref 20] The same is true for private capital flows: migrant networks cause more foreign direct investment to flow to developing countries, and this effect is largest for highly skilled migrants.[ref 21] Put differently, this evidence implies that restricting international migration by skilled workers costs developing countries the very finance they need to kickstart development.
While the theoretical possibility remains that emigration could set back economic development at some times in some countries of origin, the mounting empirical evidence suggests that this is the exception rather than the rule.
Visitors targeted by the current Skills List
The premise of the Skills List is that J-1 visitors exchanging ideas and experiences with their American counterparts will return home as workers, providing services in their home countries. Thus, we speak here of “foreign workers,” even though the high-skill professionals holding J-1 visa status we focus on are most often collaborating, researching, teaching, sharing their expertise, or learning on the job (and not simply “workers”).
The effect of the Skills List on foreign workers is opaque. The U.S. government does not publish estimates of the number of high-skill foreign workers in the U.S. affected by the home residency requirement. We are not aware of any prior estimates of this number from outside the government.
Here we estimate the number of high-skill professional workers covered by the Skills List in each of the last 10 years, subject to the constraints of available data. Patterns in the data allow us to make such estimates with high accuracy, and with high confidence that the estimates are slightly conservative. We use a simple rule for assessing when a worker in the data is an Affected High-Skill Visitor: a J-1 visitor is estimated to be a “high skill” visitor affected by the Skills List when 1) the Skills List designates her field of specialization for her country of citizenship and 2) the vast majority of J-1 participants in her field of specialization are in program categories that are both affected by the Skills List and classified by us as “knowledge workers.”
Novel data and definition of Affected High-Skill Visitors
We obtained a novel data extract from the Student and Exchange Visitor Information System (SEVIS), tabulating the full universe of first-time active J-1 visa recipients from FY2014–2023 by country (e.g., Senegal, Bolivia), field (e.g., Engineering, Philosophy), and J-1 program category (e.g., Professor, Au Pair). Three-way tabulation is only available to us for five major countries: China, India, Korea, Brazil, and Colombia. For all other countries, in each year, we have three separate two-way tabulations: country-by-field, country-by-category, and field-by-category.
Thus we require a method to estimate what fraction of the workers from a given country and field are Affected High-Skill Visitors, that is, “high skill” and also subject to the Skills List. We define “high skill” visitors as those in fields that are “knowledge-intensive” — that is, fields where all or nearly all visitors are in J-1 program categories that we define as “knowledge workers.”
Knowledge workers include those whose program category requires that they have or are pursuing an undergraduate or advanced degree from a U.S. or foreign university, are regarded as specialized “experts” in a field of knowledge, or have experience in a specialized field of knowledge. While medical doctors coming to the United States to develop skills as clinical physicians are certainly knowledge workers, they are excluded from our classification of Affected High-Skill Visitors because they can be distinguished in the data and because such J-1 visa recipients are generally subject to a two-year home residency requirement independent of the Skills List, under a separate provision of law.[ref 22]
Our classification of knowledge workers omits J-1 visa recipients who are au pairs, camp counselors, and high school students. It furthermore omits students on summer work travel, who are required to be students at overseas universities on their way to earning degrees — a debatable choice that tends to make our estimates conservative.
This criterion for “knowledge workers” comprises the following J-1 program categories.[ref 23] Professors, Research Scholars, and Short-Term Scholars typically hold advanced degrees and are carrying out research or university-level teaching in the United States, and can include medical doctors in non-clinical roles[ref 24] of observation, teaching, or research. Specialists are defined by the State Department as “experts in a field of specialized knowledge or skill.” Teachers hold a university degree in their field. Trainees have either a university degree or several years of experience in a specialized field of knowledge. College and university students are studying in the U.S. for an undergraduate or advanced degree, or are in the U.S. fulfilling academic requirements, sometimes as student interns, for an overseas university degree. Interns are engaged in or have recently completed a foreign university degree.
A rule for estimating Affected High-Skill Visitors
Because the exact J-1 program category of each visitor within a country-field pair is unobservable in the data we have for most countries, we must proxy for unobservable “knowledge workers” with their fields. We classify workers as “knowledge workers” when they are in “knowledge-intensive” fields — that is, in fields where the vast majority of workers are in “knowledge worker” J-1 program categories.[ref 25] We omit workers from the “knowledge worker” classification when they are not in “knowledge-intensive” fields. When in doubt, we err on the side of excluding workers from the “knowledge worker” classification, which again tends to make our estimates conservative.
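As an illustration of this proxy step, the following minimal Python sketch classifies two-digit fields as “knowledge-intensive” from a field-by-category tabulation. The tabulation format, the exact category label strings, and the 0.95 cutoff for “vast majority” are assumptions for illustration; the paper does not specify a numerical threshold.

```python
# Minimal sketch: classify two-digit fields as "knowledge-intensive" from a
# field-by-category tabulation. KNOWLEDGE_CATEGORIES labels and the 0.95
# threshold are illustrative assumptions, not values taken from the paper.
from collections import defaultdict

KNOWLEDGE_CATEGORIES = {          # J-1 categories treated as "knowledge workers"
    "Professor", "Research Scholar", "Short-Term Scholar", "Specialist",
    "Teacher", "Trainee", "College/University Student", "Intern",
}

def knowledge_intensive_fields(tabulation, threshold=0.95):
    """tabulation: iterable of (field_code, category, count) rows.
    Returns the set of field codes where at least `threshold` of visitors
    fall in knowledge-worker categories."""
    knowledge = defaultdict(int)
    total = defaultdict(int)
    for field, category, count in tabulation:
        total[field] += count
        if category in KNOWLEDGE_CATEGORIES:
            knowledge[field] += count
    return {f for f in total if total[f] and knowledge[f] / total[f] >= threshold}

# Toy example: field 14 (Engineering) qualifies, field 26 (Leisure) does not.
rows = [(14, "Research Scholar", 999), (14, "Camp Counselor", 1),
        (26, "Camp Counselor", 950), (26, "Research Scholar", 50)]
print(knowledge_intensive_fields(rows))   # -> {14}
```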
Figure 1 describes the fields that our rule classifies as “knowledge-intensive.” Each bar shows the fraction of visitors in each field who are in fact in “knowledge worker” J-1 categories (Research Scholar, Student-Doctorate, etc.) in blue. It shows the fraction who are not in “knowledge worker” J-1 categories in red.
Figure 1: Percent of recent J-1 recipients in “knowledge worker” J-1 categories (in blue), by field of specialization, for all countries collectively
Figure 1 shows that such a rule is feasible: Fields can be an accurate and conservative proxy for worker categories.
The figure reveals that our rule is highly specific, that is, it exhibits a low false-positive rate. Across all countries, the vast majority (99%) of visitors in fields we classify as “knowledge-intensive” are in “knowledge worker” categories. For example, we include all workers whose field is 14, Engineering, of which 99.9% are in “knowledge worker” categories; we include all in field 22, Legal Professions and Studies, of which 99.8% are in “knowledge worker” categories. In the field where our rule has the lowest specificity (field 35, for business fields related to Interpersonal and Social Skills), 81% of workers are in “knowledge worker” categories, but this is a minor field representing just 0.04% of workers.
The figure also shows that our rule is highly sensitive, that is, it exhibits a low false-negative rate. Across all countries, the large majority (84%) of workers in fields that we do not classify as “knowledge-intensive” fields are visitors who are not in “knowledge worker” categories. For example, we omit from the “knowledge worker” classification all workers whose field is 19, Family and Consumer Sciences, of which 98.7% are au pairs. We likewise omit all workers in field 26, Leisure and Recreational Activities, of which 98.6% are either camp counselors or summer work travel.
Finally, the figure shows that our rule yields slightly conservative estimates of the number of Affected High-Skill Visitors. In the lower portion of the figure, in the field where our rule has the lowest sensitivity (field 52, Business, Management, and Marketing), 58% of workers are in the summer work travel category, which we do not classify as “knowledge workers.” This omits the 42% who are in fact in “knowledge worker” categories such as research scholars and undergraduate and graduate students. But such omission gives our classification the desirable trait of estimating numbers of Affected High-Skill Visitors that are conservatively low. The alternative, classifying field 52 as “knowledge-intensive,” would risk overestimating the number of affected high-skill workers. Field 52 is large, while field 35 (discussed above) is very small, so the number of false negatives can be expected to outweigh the number of false positives. We thus expect our estimates of Affected High-Skill Visitors to be conservatively low.
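The aggregate checks behind Figure 1 can be expressed the same way. The sketch below computes the two shares discussed in the text: among visitors in fields classified as “knowledge-intensive,” the share who are actually in “knowledge worker” categories, and among visitors in the remaining fields, the share who are not. The function name and the toy counts are illustrative only.

```python
def rule_check(tabulation, included_fields, knowledge_categories):
    """Compute the two aggregate shares described in the text:
    - among visitors in fields classified as "knowledge-intensive"
      (included_fields), the share who are in knowledge-worker categories
      (high means few false positives);
    - among visitors in all other fields, the share who are NOT in
      knowledge-worker categories (high means few false negatives)."""
    in_kw = in_total = out_not_kw = out_total = 0
    for field, category, count in tabulation:
        is_kw = category in knowledge_categories
        if field in included_fields:
            in_total += count
            if is_kw:
                in_kw += count
        else:
            out_total += count
            if not is_kw:
                out_not_kw += count
    return in_kw / in_total, out_not_kw / out_total

# Toy example: a well-chosen field list yields shares near 1.0 on both checks.
rows = [(14, "Research Scholar", 999), (14, "Camp Counselor", 1),
        (26, "Camp Counselor", 950), (26, "Research Scholar", 50)]
print(rule_check(rows, included_fields={14},
                 knowledge_categories={"Research Scholar"}))  # -> (0.999, 0.95)
```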
We thus arrive at a simple rule: a J-1 exchange visitor is estimated to be an Affected High-Skill Visitor when 1) her country of citizenship and field of specialization appear on the Skills List, and 2) she is estimated to be in a “knowledge worker” category because her field of specialization is designated as a “knowledge-intensive” field in Figure 1 (excluding clinical physicians).
For many country-field pairs there is a range of uncertainty in the application of this rule. This is because, in our dataset, fields are tabulated at the two-digit level. Some countries (such as China) specify most or all fields on the Skills List at the two-digit level, well-aligned with our data. Other countries (such as Brazil) specify fields on the Skills List at the much narrower four-digit level. For example, for China all fields under two-digit code 14 (Engineering) appear on the Skills List, but for Brazil the four-digit subfield 14.19 (Mechanical Engineering) appears on the list, while 14.20 (Metallurgical Engineering) does not. Thus a worker from Brazil in field 14 may or may not be subject to the Skills List. We address this, for workers in the two-digit fields where only some of the four-digit fields they contain appear on the Skills List, by placing an upper bound (assuming that all such workers have an unobserved four-digit field that appears on the Skills List), and a lower bound (assuming that none of those workers have an unobserved four-digit field that appears on the Skills List). The true value must lie somewhere in between. This does not tend to produce a great deal of uncertainty in the overall totals. For most countries, either the entire two-digit field appears on the Skills List or none of it does.
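A sketch of this bounding step, under assumed data structures: a count of knowledge-intensive-field visitors for each country and two-digit field, and a per-country flag recording whether all, some, or none of that field’s four-digit subfields appear on the Skills List. The 'all'/'partial'/'none' encoding and the toy numbers are stand-ins for how the coverage information might be recorded, not the paper’s actual inputs.

```python
def affected_bounds(counts, coverage):
    """counts: dict {(country, two_digit_field): visitors in knowledge-
    intensive fields}; coverage: dict {(country, two_digit_field): 'all' |
    'partial' | 'none'} describing how much of the two-digit field appears
    on the country's Skills List. Returns (lower_bound, upper_bound) on the
    number of Affected High-Skill Visitors."""
    lower = upper = 0
    for key, n in counts.items():
        status = coverage.get(key, "none")
        if status == "all":        # entire two-digit field is on the List
            lower += n
            upper += n
        elif status == "partial":  # only some four-digit subfields listed
            upper += n             # upper bound: assume all are listed
            # lower bound: assume none are listed, so add nothing
    return lower, upper

# Toy example echoing the text: China lists all of field 14, Brazil only part.
counts = {("China", 14): 1000, ("Brazil", 14): 200}
coverage = {("China", 14): "all", ("Brazil", 14): "partial"}
print(affected_bounds(counts, coverage))   # -> (1000, 1200)
```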
Testing the estimation rule
Figure 2 tests whether our rule is accurate and conservative for the five countries where the underlying truth is known. We have a full country-level tabulation of fields and J-1 program categories for China, India, South Korea, Colombia, and Brazil. The figure shows the actual number of Affected High-Skill Visitors from each country in each fiscal year (in blue) and compares this to the estimates made by our rule (in red). The graphs show shaded bands, not lines, to indicate the range between the upper and lower bounds explained above.
Figure 2: Comparison of estimated number of Affected High-Skill Visitors versus the actual values, where known
In Figure 2, both our estimates and the actual values omit Alien Physicians, because their home residency requirement is almost always determined independently of the Skills List. Two other small categories of visitor — International Visitor and Government Visitor — have a home residency requirement determined independently of the Skills List because they are government-funded, but our method cannot omit them from the estimates because they are widely dispersed across different fields. But the actual values for these five countries in Figure 2 do omit International Visitors and Government Visitors. So Figure 2 serves as a check on whether the inclusion of this small group tends to meaningfully inflate our estimates of the number of people subject to the home residency requirement. It does not.
Figure 2 shows our estimation rule generates accurate and generally conservative figures for a variety of important countries. For China, the estimate is exact for the past four years; it is slightly conservative in earlier years. For India, the estimates are likewise highly accurate, especially in the last four years, and conservative as expected over the last eight years. For Korea, the estimates are so accurate that the two bands almost perfectly overlap in all years — though the estimates by our rule are slightly conservative. The range of uncertainty is greater for Colombia, where the Skills List specifies many fields at the four-digit level, but there is still close overlap of the true figures with the estimates by our rule in all years. For Brazil, where again there is a relatively wide range of uncertainty, the red and blue bands overlap so perfectly that they are difficult to distinguish. This evidence supports the view from Figure 1 that our rule is highly specific and sensitive, and generates estimates that are slightly conservative.
Estimated impact of the current Skills List, and a proposed revision
We can then apply this rule across all countries to arrive at estimates of the total number of high-skill visitors affected by the current Skills List in each year, excluding clinical physicians. Table 1 displays these totals, with fiscal year in the first column. The second column shows the number of first-time J-1 visa records, for all countries, fields, and worker categories. The third column shows the number of those J-1 visa records that are in “knowledge worker” categories (e.g. including Research Scholars, omitting camp counselors), across all countries and fields. The fourth column shows the lower bound on our estimates of the number of those Affected High-Skill Visitors, while the fifth column gives the upper bound.
The estimates in Table 1 imply that, of the roughly one million high-skill workers (996,240) who came to the U.S. on J-1 visas in the past decade, between 35.4% (352,182) and 43.9% (437,295) were subject to the two-year home residency requirement imposed by the Skills List. That is, the true number of high-skill visitors covered by the Skills List in the average year, roughly but conservatively, lies between 35,000 and 44,000.
Table 1: Estimated total number of high-skill visitors affected by the current Skills List and a proposed new Skills List
| Fiscal year | Total J-1 | High-skill J-1 | Affected by current Skills List, lower bound | Affected by current Skills List, upper bound | Affected by proposed new Skills List, lower bound | Affected by proposed new Skills List, upper bound |
| --- | --- | --- | --- | --- | --- | --- |
| 2014 | 314,051 | 131,390 | 43,454 | 61,645 | 3,990 | 3,996 |
| 2015 | 317,495 | 125,889 | 43,983 | 54,879 | 3,917 | 3,922 |
| 2016 | 322,771 | 120,818 | 44,529 | 51,348 | 3,872 | 3,877 |
| 2017 | 327,768 | 120,039 | 45,005 | 52,349 | 4,040 | 4,047 |
| 2018 | 328,305 | 119,516 | 45,448 | 53,662 | 4,152 | 4,154 |
| 2019 | 335,297 | 119,078 | 45,468 | 53,900 | 4,273 | 4,294 |
| 2020 | 94,529 | 43,874 | 16,541 | 19,809 | 1,308 | 1,310 |
| 2021 | 114,367 | 38,650 | 12,058 | 15,885 | 3,293 | 3,293 |
| 2022 | 264,251 | 84,964 | 25,388 | 33,732 | 6,481 | 6,482 |
| 2023 | 300,112 | 92,022 | 30,308 | 40,086 | 8,195 | 8,199 |
| Total (2014–2023) | 2,718,946 | 996,240 | 352,182 | 437,295 | 43,521 | 43,574 |
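As a quick check, the decade shares quoted above follow directly from the Table 1 column totals:

$$\frac{352{,}182}{996{,}240} \approx 35.4\%, \qquad \frac{437{,}295}{996{,}240} \approx 43.9\%,$$

and dividing those bounds by the ten fiscal years gives the average of roughly 35,000 to 44,000 affected visitors per year cited in the text.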
How would these estimates differ if the Skills List were reformed? Here we consider the effects of the revised Skills List proposed by Michael Clemens and William Kerr.[ref 26]
In the Clemens-Kerr proposal, a simple algorithm determines whether each country-field pair is assigned to the Skills List. Each country is initially assigned one of three groups of fields based on its level of development: low-income countries (e.g., Afghanistan, Malawi) are given a Broad list of fields, lower-middle income countries (e.g., India, Morocco) receive a Narrow list of fields, and upper-middle income countries (e.g., China, Brazil) receive an even smaller Minimal list. The Broad list includes direct service providers in health and education, as well as specialists in engineering, infrastructure, and agriculture. The Narrow list focuses on health and education workers, while the Minimal list includes only the most specialized health workers. Each country’s initial classification is then adjusted by four criteria considering its special circumstances: countries that are especially small or exhibit especially high rates of skilled emigration are assigned a field list one step broader than their initial classification, to reflect the special challenges of those countries. Those with especially small skilled diasporas, and especially strong systems of tertiary education, are assigned to a field list one step narrower, to reflect the higher marginal benefit and lower marginal cost of skilled emigration for those countries.
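A minimal sketch of this assignment logic follows, for illustration only: the income-group labels, the boolean adjustment flags, and the assumption that the broadening and narrowing adjustments are applied sequentially (each moving the classification at most one step) are stand-ins for the parameters of the Clemens-Kerr proposal, which defines the underlying thresholds.

```python
TIERS = ["Minimal", "Narrow", "Broad"]   # from smallest to broadest field list

def assign_tier(country):
    """country: dict with key 'income_group' ('low', 'lower_middle', or
    'upper_middle'), plus boolean flags for the four adjustment criteria.
    The thresholds behind those flags are assumptions for illustration."""
    base = {"low": "Broad", "lower_middle": "Narrow",
            "upper_middle": "Minimal"}[country["income_group"]]
    i = TIERS.index(base)
    # Broaden one step for especially small countries or especially high
    # rates of skilled emigration.
    if country.get("very_small") or country.get("high_emigration_rate"):
        i = min(i + 1, len(TIERS) - 1)
    # Narrow one step for especially small skilled diasporas or especially
    # strong systems of tertiary education.
    if country.get("small_diaspora") or country.get("strong_tertiary"):
        i = max(i - 1, 0)
    return TIERS[i]

# Toy example: a lower-middle-income country with a high skilled-emigration
# rate moves from the Narrow list to the Broad list.
print(assign_tier({"income_group": "lower_middle", "high_emigration_rate": True}))
```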
The consequences of the Clemens-Kerr proposal for the overall impact of the Skills List are estimated in the final columns of Table 1. In FY2023, the number of high-skill workers affected by the proposed Skills List would be a little less than one fourth of the number affected by the current List. This is primarily because, for most countries that appear on the current Skills List, all or almost all fields of specialization appear on the List. In other words, workers from most countries on the current Skills List are very broadly affected by it. For most countries, either specialists in both medical care (field 51) and library science (field 25) are deemed “clearly required” for development of the home country, or neither is. The Clemens-Kerr proposal instead selectively targets the specific skills most clearly required for development, applied uniformly across countries at similar stages of development while accounting for their special circumstances.
Figure 3 compares the overall impact of the current Skills List (in red, Table 1 cols. 4 & 5) to the impact of the proposed new Skills List (in blue, Table 1 cols. 6 & 7). Of course, the precise impact of the current or a future Skills List cannot be quantified without knowing, for each country and two-digit field of exchange program activity, how many J-1 visa holders are sponsored with home country or U.S. government funding — the only other broad basis for the home residency requirement regardless of field. This information is not publicly available, but it is understood that such government funding is concentrated in a few of the smallest J-1 program categories (e.g., International Visitors) or a few government funding schemes covering relatively few individuals (e.g., Fulbright).[ref 27] And in Figure 2 above, we show that our estimates remain conservative for five key countries despite their inclusion of a small group of government-funded International Visitors and Government Visitors. This is because other choices, particularly the omission of all visitors in field 52 (Business, Management, and Marketing), create a dominant tendency for the estimates to be conservative.
Figure 3: The number of high-skill workers affected by the Skills List in its current form (red) and a proposed revision of the Skills List (blue)
Thus, we are able to suggest that adopting the Clemens-Kerr proposal would reduce the number of Affected High-Skill Visitors by about 90% (comparing the decade totals in Table 1). Meanwhile, the counterweight of a home residency requirement based on government funding would be untouched by a revised Skills List.
In general, Figure 3 shows the proposed new Skills List is less restrictive. But the number of high-skill workers affected by the proposed Skills List would have been generally rising over the past decade, had the proposed List been applied in those years, whereas the number affected by the current Skills List fell sharply in the COVID-19 crisis and has not yet returned to its prior levels.[ref 28]
Countries targeted by the current Skills List
The current Skills List, like its predecessors, was primarily written based on requests from foreign governments. The result is a List that is erratic and arbitrary with respect to the level of development of migrants’ home countries. Figure 4a plots our estimates of the fraction of all knowledge workers from each country whose field appears on the current Skills List. When a country’s estimate has no range of uncertainty, the country appears as a single dot. When it does have a range of uncertainty, the country appears as two dots connected by a vertical line, where the upper and lower dots respectively indicate the upper and lower bounds. The red band in the middle of the figure shows a moving average of the fraction at different levels of development. The upper line of that band shows the average upper-bound estimate at each level of development, and the lower line of the band shows the average lower-bound estimate. As before, clinical physicians are excluded.
Two features of the current Skills List stand out in Figure 4a. First, it is arbitrary. Although the List is mandated by law to restrict workers “clearly required” for development, the average fraction of knowledge workers affected by the List rises with the country’s level of development, up to an average income of roughly $10,000 per person per year (measured in purchasing power at U.S. prices or PPP), about the level of India or Morocco. Second, the Skills List is erratic. Countries at very similar levels of development might restrict almost all fields, or almost none, seemingly at random. Mali restricts almost all fields while The Gambia restricts none, at the same average income. Bolivia restricts almost all fields while Jordan restricts none, at the same average income. And so on. In very few countries is the fraction of skilled workers selectively targeted — that is, anything other than almost all or almost none. Even in those more selective countries, there is no clear trend toward a lower fraction in more developed countries.
Figure 4b shows the consequences of the new Skills List proposed by Clemens and Kerr. There are no substantial ranges of uncertainty, because that proposal defines fields of specialization at the two-digit level. In the poorest countries toward the left of the graph, the proposed List is less restrictive but not radically different from the current List. Under the proposed List about 25–30% of high-skill workers from the poorest countries are restricted, compared to 40–45% under the current, less selective List.
Figure 4a: The current Skills List, fraction of all high-skill workers affected, versus country’s level of development
Figure 4b: Proposed new Skills List, fraction of all high-skill workers affected, versus country’s level of development
In the proposed list, Figure 4b shows that there is much less variation in the fraction restricted at a given level of development. In the proposed list there is systematic and transparent allowance for countries’ unique circumstances: El Salvador, for example, has a high fraction of workers restricted relative to its income level due to its large rate of prior skilled migration and relatively weak skill stock at home; Guyana has a high fraction restricted relative to its income level for the same reason, plus the additional factor of its small population. Eswatini and the Solomon Islands have relatively low fractions of high-skill workers restricted relative to their income levels because even though they are small, they are underrepresented: Neither has yet had the opportunity to establish a sizeable community of skilled workers in the United States that can facilitate global linkages of trade, investment, and technology transfer to the home country.
The proposed new Skills List contains only limited restrictions for some of the most developed countries — with the exception of small countries — as Figure 4b makes clear. Consider the five major countries in Figure 2, for example. In FY2023, our conservative estimate is that the Skills List affected between 17,805 and 22,021 high-skill category workers from those five countries collectively — more than half of the total affected across all countries. Under the Clemens-Kerr proposal for the Skills List, this would have been just 762 workers, many of them non-physician health workers such as nurse practitioners and pharmacists from India. On the current Skills List, all engineers in all subfields are restricted by the home residency requirement; on the proposed new List, no engineers are. This reflects the thrust of the modern social science discussed above, which has documented the crucial role of Indian diaspora engineers in the cultivation of high-tech industry at home in India via ties of trade, investment, training, institutional partnerships, and technology transfer. Figure 4b also shows that restrictions under the proposed List are much more extensive for countries where development is much less advanced than in the five countries highlighted in Figure 2.
Conclusion
This paper presents new evidence on the impact and shortcomings of the current Exchange Visitor Skills List, and underlines the critical importance of updating the List to reflect contemporary understandings of skilled migration. It highlights that the current List, based on outdated notions of “brain drain,” often counterproductively restricts the flow of talent that is crucial for both the United States and the migrants’ countries of origin. Skilled migrants contribute significantly to economic growth and innovation in their home countries, via the creation of international networks that channel trade, investment, and ideas central to overseas development. By revising the Skills List to align with modern economic insights, the U.S. can better support global development goals while enhancing its own technological and economic leadership. This revision is not only a matter of justice for individual migrants but also a strategic imperative for fostering international cooperation and shared prosperity.
It is no accident that the U.S. Department of State’s Undersecretary for Public Diplomacy, through the Bureau of Educational and Cultural Affairs, administers the J-1 exchange visitor program. Recognizing that furthering a country’s national interests includes broadening dialogue between its own citizens, institutions, businesses, and communities and their counterparts abroad is the essence of public diplomacy.[ref 29] When J-1 researchers, interns, trainees, professors, and others exchange ideas around science, technology, and engineering innovation, they are engaging in public diplomacy – and exercising American soft power. As the iconic political scientist Joseph Nye observed, successful states need both hard and soft power: the capability to coerce others but also the capacity and commitment to shape others’ long-term attitudes and preferences.[ref 30]
Shaping attitudes and preferences across the globe is perhaps nowhere more vital than in an era in which technology competition, and the use of technology for good, has outsized relevance. And once individuals’ preferences and attitudes are shaped through J-1 program participation, our law, while not necessarily providing an avenue to remain in the United States, allows J-1 exchange visitors complete freedom to follow their high-skill journey wherever it takes them, including ultimately obtaining permanent residency here, except when their skills are “clearly required” by their home country. American commitment to public diplomacy is not diluted by better accounting for the economic realities of international skill flows. The State Department’s approach to designating countries and skills on the Skills List must be updated to account for the recent revolution in economists’ understanding of the relationship between skilled migration and development.
Meeting U.S. Defense Science and Engineering Workforce Needs
This working paper is forthcoming in Entrepreneurship and Innovation Policy and the Economy, volume 4 (Chicago: University of Chicago Press). Last updated August 20, 2024.
Abstract
Recent years have seen growing recognition of the deep reliance of the U.S. national security innovation base on foreign national advanced degree holders in the fields of science, technology, engineering, and mathematics (STEM). This recognition has led to a number of executive and legislative branch efforts aimed at attracting and securing highly skilled foreign-born STEM advanced degree holders to the U.S., as a potential path forward for meeting the science and engineering workforce needs of the U.S. defense sector, and its associated innovation base. This paper describes the policy context for this shift, and highlights ongoing needs for improved data and research that we see as critical for informing evidence-based policy debates in the coming years.[ref 1]
Introduction
Over the past several decades, economic research has shed light on many aspects of the economics of immigration. Led by Chiswick (1978), economists have analyzed how the length of time in the U.S. – often referred to as assimilation – affects the earnings of migrants to the U.S. Building on Borjas (1987)’s classic application of the Roy model, economists have analyzed the role of self-selection in which individuals migrate across countries. Work by Card (1990) and others has sought to provide rigorous evidence on how immigrants affect the wages and employment of natives. Economists have also directly studied several immigration policies, such as the H-1B visa lottery (Doran et al., 2022; Mahajan et al., 2024). Many economists are drawn to work on the economics of immigration out of a desire to generate rigorous, policy-relevant evidence that can inform both policymakers and the public about how changes to immigration policies affect the number and characteristics of foreign nationals allowed to enter the U.S., and on the economic impacts of those changes.
To be clear, this past literature has generated a number of important facts and insights. However, in recent years the key policy efforts aimed at changing the number and characteristics of foreign nationals allowed to enter the U.S. have been raised not in the context of immigration policy discussions, but rather have been articulated by the national security community as potential pathways for meeting the science and engineering workforce needs of the U.S. defense sector. Most of the economists with relevant expertise – in the economics of immigration – seem to be largely unaware of these national security-related efforts, presumably in part because economists have generally played less of a role in national defense and national security policy discussions. As a result, economics research has largely failed to keep pace with producing the types of facts and evidence that are needed to lay the groundwork for informed policy decisions in this area. The goal of this paper is to provide some context for this recent set of executive and legislative branch efforts, and to highlight specific examples of topics where additional research by economists would be valuable for informing more evidence-based policy discussions in the coming years.
It would be remiss not to mention that while national defense and national security have long been core policy objectives for politicians across the political spectrum, the economics of defense and national security are topics that have generally been neglected by economists relative to the policy attention they receive. From a public finance perspective, national security can be conceived of as an investment in a public good designed to reduce the likelihood of large-scale societal losses. Congressional interest in national security as a policy objective can be illustrated concretely with data on budgetary outlays, with Congress appropriating hundreds of billions of dollars annually. When economists such as the late Harvard economist Martin Feldstein have encouraged economics PhD students to pursue research on the economics of national security, they have generally guided students towards researching topics such as military compensation, analysis and prediction of armed conflicts, and terrorism.[ref 2] While such topics are obviously quite important, the topic of focus here – namely, the heavy reliance of the U.S. national security innovation base on foreign national STEM advanced degree holders – has thus far not been a focus of researchers working on the economics of national security.[ref 3]
Policy context
It was evident the national security policy discussion was connected to innovation and international talent at least by the time the White House National Science and Technology Council (NSTC) released its report on A 21st Century Science, Technology, and Innovation Strategy for America’s National Security (National Science and Technology Council, 2016a). This NSTC report argued: “…the institutions that contribute to the national security science, technology, and innovation infrastructure should be, wherever possible, able to draw on the world’s best and brightest minds regardless of citizenship” (at p.12) and that “sensible immigration policies, including for skilled immigrants in specialty technical areas, particularly for those educated in U.S. universities, must continue to be a goal” (at p.14). Notably, the discussion was largely framed in terms of workforce dynamics. For example, later that same year another NSTC report (National Science and Technology Council, 2016b), this one focused on strategic planning on artificial intelligence R&D, argued that “while no official AI workforce data currently exist, numerous recent reports from the commercial and academic sectors are indicating an increased shortage of available experts in AI… Additional studies are needed to better understand the current and future national workforce needs for AI R&D.”
A few years later, the Center for Security and Emerging Technology (CSET) was founded at Georgetown University; among other work, CSET would take the lead on both original research and synthesis of existing data on this topic. For example, CSET’s 2019 report Immigration Policy and the U.S. AI Sector (Arnold, 2019) quantified the importance of immigrant talent to the AI industry and argued that U.S. immigration policies were lagging behind those of peer countries in the race for talent. In testifying at a hearing on AI and the workforce (House Budget Committee, 2020), the founding director of the Center for Security and Emerging Technology underscored the necessity of inserting immigration into the discussion, explaining that: “We should ensure that we remain an attractive destination for global talent by broadening and accelerating the pathways to permanent residency for scientists and engineers.” (at p.58).
Later that year, the Future of Defense Task Force (2020) of the House Armed Services Committee connected these threads by recommending the U.S. invest in domestic STEM primary education; attract and retain foreign STEM talent, including supporting H.R. 7256 (116th, National Security Innovation Pathway Act), discussed below; and improve federal government hiring for STEM talent, including at the Pentagon. When the National Security Commission on Artificial Intelligence issued its final report the following year, both immigration and workforce recommendations were extensively featured (The National Security Commission on Artificial Intelligence, 2021). The AI Commission issued numerous recommendations on the necessity of cultivating more domestic talent, discussing the needs of U.S. markets as well as those of the national security enterprise. Moreover, the Commission argued that immigration reform is a national security imperative, linking the value of attracting and retaining highly skilled individuals to gaining strategic and economic advantages over competitors. As the President’s National Security Advisor remarked at the AI Commission’s global emerging technology summit, “We have to [ensure] it’s easier for America to be the destination of choice for the best and brightest scientists and technologists around the world.” (White House, 2021)
Artificial intelligence is of course just one of many strategically significant industries. O’Brien & Ozimek (2024) spell out the reasoning connecting talent, innovation, and economic competitiveness in a range of sectors: strategic industries are increasingly reliant on highly skilled workers (the share with a graduate degree grew from 12.4% to 19.6% since 2000), and foreign-born workers account for a disproportionate and increasing share of highly skilled workers in strategic industries (growing from 26% to 36% since 2000). India and China are the largest source countries for skilled foreign-born professionals in strategic industries in the U.S., comprising over 40% of college-educated workers, despite facing the tightest country-specific caps on employment-based green cards. Overall, despite representing 14% of the U.S. population, foreign-born experts comprise 37% of the workforce with advanced STEM degrees for DoD-funded projects (Miles et al., forthcoming). Moreover, many more advanced STEM degree immigrants are engaged in broader U.S.-based research and development initiatives advancing U.S. technological development beyond those directly funded by the DoD, in scientific development and engineering services generally, and in many specific industries – including electronics manufacturing, space research, and aerospace and aircraft manufacturing.
Figure 1: Reliance on foreign-born STEM talent, defense-related industries versus other industries
This replicates an unnumbered figure from Neufeld (2022) using IPUMS (American Community Survey) data. STEM fields are matched to the DHS STEM Designated Degree Program List: 11, 13, 20, 21, 24, 25, 36, 37, 38, 50, 51, 52, 55, 59, 61. For defense industries, following the Vital Signs report (2020), we use the following industries as defense industrial base industries (also called defense-related industries): selected durable industrial goods manufacturing (NAICS codes 325M, 3252, 3255, 326, 327, 331, 332, 333, 335, 336), selected information and communication technologies (NAICS 334, 5112, 517, 518, 5415), and scientific research and development (NAICS 5417). The classification of the DHS STEM Designated Degree Program List is likely not a perfect match with Neufeld (2022), but the graph closely resembles Neufeld’s figure.
Figure 2: Reliance on foreign-born STEM talent, by sector
From Miles et al, forthcoming.
These realities have led many national security experts to conclude that congressional action is needed to encourage additional foreign national STEM advanced degree holders to be admitted to the U.S., as evidenced by the concerns expressed by over 45 national security experts and officials from both Democratic and Republican administrations in a letter to congressional leadership during the last (117th) Congress (Snyder & Allen-Ebrahimian, 2022).
This focus on critical and emerging technologies (National Science and Technology Council, 2024) in DoD-funded activities is more broadly consistent with a changing target of federal support for the national defense, which incorporates innovation and economic competitiveness. Instead of defense funding to support DoD narrowly, there is a movement toward a much broader conceptualization of the U.S. national security innovation base. As described by the Congressional Research Service (2023), during the first 150 years of its history, the United States devoted relatively few resources to the management and maintenance of a permanent defense industrial base. Dating back roughly to America’s entry into World War II, the concept of a defense industrial base – generally used to refer to a broad set of organizations that supply the U.S. government, primarily but not exclusively the U.S. Department of Defense (DOD), with materials and services for defense purposes – has featured much more centrally in national security and national defense policy discussions. In recent years, the policy emphasis has shifted further towards what is often described as the National Security Innovation Base. To reference one definition from the Ronald Reagan Presidential Foundation & Institute, the National Security Innovation Base is defined as a broad array of actors including various research centers and laboratories, universities and academia, venture capital, and the innovative systems of American allies and partners – noting, “In order to sustain America’s competitive advantage and to achieve its national security objectives, the common purpose and coordinated efforts of these key stakeholders are vital.” The 2022 National Defense Strategy (US Department of Defense, 2022) is one recent federal agency document echoing this focus, noting “…we will act urgently to build enduring advantages across the defense ecosystem – the Department of Defense, the defense industrial base, and the array of private sector and academic enterprises that create and sharpen the Joint Force’s technological edge.”
Thus, industries producing goods or services critical to national defense are often the leading examples, but from a policy perspective, these are frequently addressed together with goods such as semiconductors that are also strategically significant – for a variety of reasons including supply chain dynamics (Hunt & Zwetsloot, 2020). Neufeld (2022) argues that looking at strategic technology sectors, the workforce share with advanced STEM degrees is often quite high: around 50% of the workforce at Taiwan Semiconductor Manufacturing Company (Taiwan Semiconductor Manufacturing Company, 2020), and around 70% for quantum computing (Kaur & Venegas-Gomez, 2022) as well as machine learning and data science (Kaggle, 2019).
The shift in emphasis toward strategic innovation and economic competitiveness has resonated in particular in recent thinking about China, including a focus on the talent nexus. Zwetsloot et al. (2021) is one recent analysis comparing the STEM PhD pipelines of the United States and China. Figure 3, which replicates their Figure 1, documents that since around 2005, China has consistently produced more STEM PhDs than the U.S., with a gap that has widened – and, based on current enrollment patterns, is projected to continue to widen – over time. If international students are excluded from the U.S. count, Chinese STEM PhD graduates would outnumber their U.S. counterparts more than three-to-one.
Figure 3: China projected to nearly double U.S. STEM PhD graduates by 2025
This replicates Figure 1 of Zwetsloot et al. (2021), for which the underlying data is the National Center for Education Statistics’ Integrated Postsecondary Education Data System (IPEDS) for U.S. data and Ministry of Education for Chinese data (see their Appendix A for details). The U.S. (Domestic) series aims to remove international students from the US (Total) series.
As the House China Task Force (2020) found in its report, the data show that in the near and medium term the U.S. will remain reliant on foreign talent; the U.S. must therefore compete in the global race for talent and both attract and retain the best and brightest immigrant minds to contribute to the U.S. economy and drive U.S. productivity. A December 2023 report from the House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party – Reset, Prevent, Build: A Strategy to Win America’s Economic Competition with the Chinese Communist Party (US House of Representatives, 2023) – describes 150 policy recommendations to embrace “the clear reality that our current economic relationship with the People’s Republic of China needs to be reset in order to serve the economic and national security interests of the United States.” The report’s recommended investments in technological leadership and economic resilience center on addressing concerns that the U.S. is “falling behind in the race for leadership in certain critical technologies,” and that China is “gaining on the U.S. in the race for global talent.” The bipartisan recommendations also highlight that screening and vetting concerns need to be applied in ways that allow the U.S. to make progress with partners on collaborative efforts on critical and emerging technologies, and should include a work authorization program for STEM experts from such allied nations (US House of Representatives, 2023).
Across both Republican and Democratic control of the presidency and the houses of Congress, the issue is now joined and is understood to require a whole-of-government approach (Future of Defense Task Force, 2020; Senate Armed Services Committee, 2020).
Executive and legislative branch efforts
Regulatory and statutory framework for attracting and retaining foreign national STEM advanced degree holders
The U.S. immigration framework for selecting immigrants based on their skills, education, talents, and future employment contributions continues to be based on the construct of the original Immigration and Nationality Act (1952), last comprehensively updated by the 1990 Act amendments to the INA, when most of the present numerical limits were adopted (Bier, 2023). Congress took note in 1950, when it established the National Science Foundation (NSF), that the science and engineering workforce was key to U.S. interests in fostering innovation, economic competitiveness, and national security (National Science Board, 2015), but this recognition did not come with a companion expectation that such vital U.S. interests required the nation’s immigration rules and statutes to systematically provide access to foreign national STEM advanced degree holders who wanted to become Americans. Likewise, the 1990 Act amendments, including the numerical caps, were developed before the STEM acronym became a standard reference at NSF, when the nation’s population was three-quarters of its current size, and when the real GDP of the U.S. economy was half its current size. The last pre-pandemic year of data shows that in the 30 years following the 1990 Act, the number of international students earning degrees at U.S. institutions of higher education increased over 300%, about half of them in STEM (Congressional Research Service, 2019). Layered over today’s national security imperative to retain foreign-born scientists, technologists, and engineers, this outdated regulatory and statutory immigration framework creates challenges.
Recent government actions
Awareness of the vital role international STEM talent plays in driving interconnected economic and national security interests has prompted recent efforts by both the executive and legislative branches. Over the last three years, the executive branch has sought to improve the use of existing agency authorities related to international STEM talent, and the prior two Congresses have seen the development of legislative language specifically targeting greater access to lawful permanent resident status (green card status) for foreign-born STEM advanced degree holders.
Most recently, departments and agencies have adopted an approach of announcing policy guidance (White House, 2022) to explain which international STEM experts qualify for status and U.S. employment under existing binding regulations (Rampell, 2022). As summarized in Table 1, the approach has been used for STEM OPT (the post-completion optional practical training program for STEM graduates of US universities), O-1A status (the visa category for noncitizens with extraordinary ability), classification as an Employment-Based Second Preference immigrant based on National Interest Waiver (NIW), and the J-1 Early Career STEM Research Initiative (exchange visitors at companies instead of just on campuses pursuing scholarly research).
STEM OPT guidance from DHS on degree list updates. Optional Practical Training (OPT) by default allows up to 12 months of employment in the U.S. post-graduation, and STEM graduates are eligible for a 24-month extension (so 36 months total). An annual nominations process allows DHS to keep the STEM OPT extension-qualifying degree list (called the Designated Degree Program List) up to date over time.
O-1A guidance from DHS. O-1A nonimmigrant status is available to people with “extraordinary ability” measured by achievements in science, business, education, or athletics. In January 2022, U.S. Citizenship and Immigration Services (2022a) provided guidance on O-1A eligibility, including clarifications and examples for STEM PhD graduates. Importantly, O-1A issuance is uncapped, but tabulations of foreign-born STEM PhD graduates against O-1A take-up suggest this pathway is underutilized.
National Interest Waiver (NIW) guidance from DHS. The NIW EB2 advanced degree immigrant category allows certain highly qualified people to self-petition for a green card. In January 2022, U.S. Citizenship and Immigration Services (2022b) provided guidance on how STEM Master’s or PhD graduates may qualify based on the merit of their work and its relevance to national interests (e.g., if they are poised to make contributions in a critical or emerging technology field).
J-1 researcher guidance from DOS. The J-1 exchange visitor program authorizes people to – among other things – study, teach, research, or intern in the U.S. (described by U.S. Citizenship and Immigration Services [2023]). The US Department of State (2022)-led Early Career STEM Research Initiative connects sponsoring firms and research organizations with J-1 visa holders who seek STEM research experience with industry. J-1 researchers are expected to return home after visa expiration (5-year maximum) and are often subject to the 2-year home residency requirement.
H-1B research cap exemptions regulation from DHS. H-1B cap-exempt employers include institutions of higher education, non-profit entities affiliated with institutions of higher education, and non-profit research or governmental research organizations. The rulemaking required by Executive Order of October 30 (2023) includes clarification on cap-exemption for non-profits where research is a central focus and for employees at for-profit firms that collaborate with university-based or non-profit research organizations.
J-1 exchange visitor skills list regulation from DOS. The Exchange Visitor Skills List details the “specialized knowledge and skills that are deemed necessary for the development of an exchange visitor’s home country” (US Department of State, 2009). Executive Order of October 30 (2023) requires the Department of State to consider criteria to update countries and skills on the Skills List, as it relates to the 2-year home residency requirement. A revised list has the potential to broaden the scope and quantity of exchange visitors to the U.S., especially in STEM fields critical to the U.S.
Schedule A regulation from DOL. Schedule A designates, for streamlined employment-based entry, fields that the Department of Labor has determined lack “sufficient U.S. workers who are able, willing, qualified, and available pursuant to regulation” (US Citizenship and Immigration Services, 2024a). Executive Order of October 30 (2023) requires the Labor Department to publish a Request for Information to identify AI and other STEM occupations as qualified for Schedule A designation.
While very little formal research has been conducted on these pathways, in some cases descriptive data makes clear that policy shifts at the agency level of this sort can matter. For example, use of the O-1A classification for experts involved in STEM activities increased by 33% in the two years following new policy guidance on how STEM PhDs can use the category (Mervis, 2023).
Moreover, U.S. Citizenship and Immigration Services (2024b) data suggest that the policy guidance about NIW for EB2 advanced degree holders has led more immigrants to use this self-petitioned employment-based green card category, which allows timely and more certain adjudications. About 9% of EB2 petitions for advanced STEM degree holders used NIW before the policy guidance, and about 37% of such petitions now do, as the underlying counts below and the arithmetic sketch after the list show:
- FY2019 (last pre-pandemic year’s data as DHS was developing NIW guidance in 2021): 5,600 NIW approvals for STEM experts out of 77,550 total EB2 petition approvals of which 59,950 in total were for professionals engaged in STEM activities.
- FY2023 (first full year following policy announcement in January 2022): 21,240 NIW approvals for STEM experts out of 81,380 total EB2 petition approvals of which 57,150 in total were for professionals engaged in STEM activities.
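Treating the STEM-related EB2 approvals as the denominator, which matches the shares quoted above, the arithmetic is:

$$\frac{5{,}600}{59{,}950} \approx 9.3\% \ \text{(FY2019)}, \qquad \frac{21{,}240}{57{,}150} \approx 37.2\% \ \text{(FY2023)}.$$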
To further agency efforts in this vein, the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order of October 30, 2023) instructs departments and agencies to explore further avenues, some of which are summarized in Table 1, to facilitate the attraction and retention of foreign-born STEM experts, including by notice and comment rulemaking.[ref 4]
Actions by the executive branch thus appear – at least in the aggregate – to have the potential to generate consequential improvements in the ability of the United States to attract and retain international STEM talent. However, 85% of high-skilled immigrants working on DoD projects are naturalized citizens (Miles et al., forthcoming), reflecting the fact that security clearance requirements make the limited availability of green cards a major constraint on DoD’s ability to expand recruitment of the foreign-born talent already in the United States. Only the legislative branch can establish a new category of lawful permanent residents selected for their advanced STEM expertise that contributes to critical and emerging technology fields, and allocate numbers for such new green cards that would then lead to naturalization eligibility.
A bipartisan group of 70 national security experts and officials made these points in a May 2023 letter (Snyder & Cai, 2023) to the House Select Committee on Strategic Competition between the United States and the Chinese Communist Party, imploring congressional action on international STEM talent because, when America attracts the world’s best and brightest, many “will be working in Pentagon-identified critical technology areas.” The annual National Defense Authorization Act (NDAA) is a likely vehicle for statutory changes targeting the relationship between STEM experts and security, as is legislation focused on industrial policy for critical industries key to international technology competition, such as the CHIPS and Science legislation; these are bipartisan efforts squarely focused on the nation’s security priorities.
The animating logic behind this line of legislative efforts focused on STEM green cards, summarized in Table 2, is that:
- First, technology and innovation are at the heart of strategic competition, and the United States can win that competition only if we can reliably tap into the global supply of STEM talent, and
- Second, the most effective way to attract the global talent America needs is to remove green card caps for some segment of advanced STEM R&D talent most likely to make important contributions for the United States in the technology sectors that matter most to our national security.
The 117th Congress featured some of these efforts. Table 2 reflects that in summer 2022 there was an effort to add a generous provision to the NDAA for FY23 that would have provided STEM green cards without numerical limit to certain foreign-born STEM PhDs earning doctorates from research-intensive universities in fields relevant to critical industries or critical and emerging technologies (Gilmer, 2022b). The Advanced STEM Degrees NDAA amendment, driven by Rep. Lofgren (D-CA), was based on Section 80303 of the House-passed America COMPETES Act and garnered bipartisan support, but ultimately was not ruled in order for a House vote (Gilmer, 2022a). Similarly, Section 80303 in the House's America COMPETES Act allowed STEM Master's and PhD graduates of research-intensive universities in the U.S. and abroad to secure green card status in certain situations. While Section 80303 passed the House in February 2022, it was not taken up in the Senate or in the conference that led to the CHIPS and Science Act enacted in August 2022 (Anderson, 2022).
H.R. 7256. A bill proposed to develop a special immigrant visa for individuals who are employed by a U.S. firm or academic institution engaged in national security efforts that protect and promote the U.S. innovation base, who conduct research funded by the Department of Defense, or who have technical expertise in a domain identified pursuant to the National Defense Strategy or the National Defense Science and Technology Strategy. The plan imposes a cap of 100 principals in fiscal year 2021, increasing by 100 annually through fiscal year 2025 and remaining at 500 principals thereafter. There was no legislative action on the bill (H.R. 7256, 2020).
H.R. 4350 and H.R. 6395. Two amendments proposed to the NDAAs for fiscal years 2021 and 2022 would have allowed the Department of Defense to develop a competitive process to identify individuals "essential" to advancing technologies critical to national security. In practice, this would have been implemented as a special immigrant visa for individuals working on university-based research funded by the Department of Defense or possessing specific scientific or technical expertise. The plan imposes a cap of 10 principals in its first fiscal year, increasing by 10 annually through its tenth fiscal year and remaining at 100 principals thereafter. Despite passing the House, the provisions were ultimately dropped in conference before enactment of each NDAA (H.R. 4350, 2021; H.R. 6395, 2020).
H.R. 4521. Section 80303 of what became the House version of the CHIPS bill proposed to exempt foreign-born STEM Master's or PhD graduates of select U.S. and foreign higher education institutions from worldwide and per-country caps. Applicants must already have an approved EB1 or EB2 petition and have graduated from a "research-intensive" institution. Additionally, Master's graduates must be sponsored by an employer in a critical industry. Despite passing the House, the provision was dropped in conference before the enactment of the CHIPS and Science Act of 2022 (H.R. 4521, 2022).
NDAA administration ask, DoD Scientists and Experts. An amendment proposed to the NDAA for fiscal year 2023, drawn from a request of the Department of Defense to secure the admission of essential scientists and other technical experts to enhance the technological superiority of the U.S. In practice, this would be implemented as a special immigrant visa for individuals working in specific fields or on research advancing national security, as determined by the Department of Defense. The provision was never voted on in the House or Senate (DoD Scientists and Experts, 2022).
NDAA amendment, Advanced STEM Degrees. An amendment proposed in the House to the NDAA for FY2023 that would exempt select STEM PhDs from worldwide green card limits and per-country caps. To qualify, applicants must already have an approved EB1 or EB2 petition, have graduated from a "research-intensive" institution (the degree need not be from a U.S. institution of higher education), and work in a field critical to national security. Despite bipartisan support, the amendment was never voted on in the House or Senate: the House Rules Committee found it not in order because it was not budget-neutral (Advanced STEM Degrees, 2022).
S. 2384. A bill proposed to exempt STEM Master's or PhD graduates of any U.S. higher education institution from worldwide green card limits and per-country caps. To qualify, graduates would need to be employed by, or have an employment offer from, a U.S. employer that has completed the DOL Labor Certification process, and to receive a salary in excess of the occupation-level median. Additionally, the bill would permit F-1 students enrolled in a STEM program to seek legal permanent residence while still maintaining F-1 student status in the U.S. The bill has been introduced in the last three Congresses in both houses, but has not yet been voted on in either the House or Senate (S. 2384, 2023).
NDAA amendment, Defense Researchers. An amendment proposed in the House to the NDAA for FY25, with a possible companion amendment in the Senate, that would allow up to 5,000 individuals each year who either hold STEM PhDs related to fields critical to national security or have at least six years of experience in such fields to obtain a new conditional green card status, and that explicitly anticipates eligibility for international students earning STEM degrees in the U.S. as well as experts working abroad. The STEM experts must be citizens of a FVEY, QUAD, or NATO country, and would utilize a new conditional green card classification with mandated screening and new vetting programs. Conditions on green card status are removable after satisfactory vetting and three years of R&D employment, without tying status to a single employer; qualifying employment is limited to projects funded or overseen by DoD or other agencies, or to work in fields critical to national security (FORTRESS Act, 2024).
Current data and research needs
Measuring, and estimating the drivers of stay rates for foreign national STEM advanced degree holders
In many policy discussions around STEM immigration, key questions are raised about numbers that no one has data on – implying that policy analysts and decision-makers in the executive and legislative branches do not have access to many of the key facts that would, ideally, form the basis for evidence-informed policy design and implementation. As a leading example, policy discussions related to providing additional green cards for foreign national STEM advanced degree holders would benefit from knowing the answers to questions such as: How many immigrants with STEM PhDs become lawful permanent residents annually? Of new STEM PhDs earned in the U.S., what share leave the U.S., versus work initially on temporary visas, versus secure legal permanent resident status? Of those who leave the U.S. initially, what share ever return? Of those who initially work on temporary visas, what share stay and eventually transition to legal permanent resident status, and how long does that take? And, with regard to STEM PhDs earning their degrees outside the U.S., how many make their way to the U.S., and how many initially come as post-doctoral fellows or through other pathways?
A natural starting point for such questions is the National Science Foundation (NSF)’s Survey of Earned Doctorates, which annually attempts to gather information on the census of newly minted PhD graduates from U.S. universities and includes some information on their post-doctoral plans, and the Survey of Doctorate Recipients, which draws its pool of potential respondents from the Survey of Earned Doctorates and attempts to follow them longitudinally. For example, Zwetsloot et al. (2020) use the Survey of Earned Doctorates to document that intention-to-stay rates among international PhD graduates – who account for a significant portion of STEM PhD graduates from U.S. universities – are 70 percent or higher in all STEM fields, and are above 85 percent for students from Iran, India, and China. A follow-up by Corrigan et al. (2022) uses the Survey of Doctorate Recipients to document that roughly 77 percent of STEM PhD graduates from U.S. universities between 2000 and 2015 were still living in the U.S.
Taken at face value, these findings could be interpreted as saying that foreign national STEM PhD students trained at U.S. universities who want to stay in the U.S. post-graduation are largely able to find pathways through which to do so. Of course, even if all individuals who want to stay are eventually able to do so, that does not mean, from a policy perspective, that the currently existing pathways through which individuals stay are timely or offer optimal predictability. Indeed, Olszewski et al. (2024) argue: “…one of the most widely cited reasons driving foreign STEM talent to leave the United States (and discouraging it from coming) is the country’s difficult-to-navigate immigration and naturalization rules governing who can come and who can stay.” Moreover, data on past cohorts of foreign national STEM PhDs is not necessarily predictive of what is happening today or what might happen in the future, given dramatic increases in visa backlogs and growing uncertainty in our high-skilled immigration system, both in petition adjudications[ref 5] and in visa applications.[ref 6]
Moreover, by construction, the Survey of Earned Doctorates and Survey of Doctorate Recipients focus on PhD recipients, and analogous individual-level data is not available – to the best of our knowledge – for bachelor’s and master’s degree graduates. Research such as Beine et al. (2022), who analyze university-by-year-level aggregate data on counts of international students, suggests that only around 23 percent of foreign nationals in U.S. master’s programs transition into the U.S. workforce.
An alternative starting point would be administrative data on the F-1 visas that support international students studying at U.S. universities, linked to longitudinal data from either Census or Treasury, which could follow individuals who at some point appear on F-1 visas over time. Many foreign nationals that firms wish to hire start out on F-1 visas, and such data could provide the basis for research on how policy and non-policy factors might affect the stay rates of students. For example, how have policy and regulatory changes such as the H-1B cap exemption for nonprofit research organizations, changes in the time allowed for temporary employment for international students under the OPT program,[ref 7] and increased use of J visas for researchers changed stay rates, adjustments of status, and work behavior of students originally trained at U.S. universities?
In recent years both Census and Treasury have made tremendous progress in compiling datasets – such as the Census’s Business Dynamics Statistics of Innovative Firms (BDS-IF) project (Goldschlag & Perlman, 2017) – that start to lay the groundwork for tabulating these types of statistics, but they lack one critical input: data on temporary visas – e.g., which students are in the U.S. on F-1 visas, which researchers are in the U.S. on J-1 visas, which STEM PhDs are employed on H-1B visas. These types of data reside at agencies like U.S. Citizenship and Immigration Services (USCIS) and the Student and Exchange Visitor Program (SEVP) office of Immigration and Customs Enforcement, components of the U.S. Department of Homeland Security (DHS), and the Bureau of Consular Affairs at the U.S. Department of State. But in principle, these records can be linked at the individual level with administrative data from Census or Treasury to begin to measure and study the types of questions outlined above. Importantly, such linked Census records could then be made available to other researchers via the Federal Statistical Research Data Centers (FSRDC) infrastructure (US Census Bureau, 2024), which research suggests could meaningfully impact scientific progress on this topic (Nagaraj & Tranchero, 2023).
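To make the proposed linkage concrete, the sketch below illustrates, in purely schematic form, how visa records could be joined to longitudinal earnings records at the individual level to tabulate a cohort stay rate. The tables, column names, and toy values are hypothetical, not the actual schemas held by USCIS, SEVP, State, Census, or Treasury; real linkages would rely on protected identifiers inside secure environments such as the FSRDCs described above.

```python
import pandas as pd

# Hypothetical F-1 cohort records (illustrative only).
f1_records = pd.DataFrame({
    "person_id": [1, 2, 3, 4],
    "f1_start_year": [2012, 2013, 2013, 2014],
    "degree_field": ["CS", "Physics", "CS", "Biology"],
})

# Hypothetical longitudinal earnings records (e.g., W-2-style data by tax year).
earnings_records = pd.DataFrame({
    "person_id": [1, 1, 3, 3, 3],
    "tax_year": [2018, 2019, 2018, 2019, 2020],
    "wages": [95_000, 99_000, 120_000, 125_000, 131_000],
})

# Link at the individual level; treat a person as "staying" in a given year
# if they appear in U.S. earnings records for that year.
linked = f1_records.merge(earnings_records, on="person_id", how="left")
stay_rate_2019 = (
    linked.assign(present_2019=linked["tax_year"].eq(2019))
          .groupby("person_id")["present_2019"]
          .any()
          .mean()
)
print(f"Share of the F-1 cohort observed working in the U.S. in 2019: {stay_rate_2019:.0%}")
```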
Modeling the expected effects of policy counterfactuals
As illustrated in Table 2 above, the handful of recent legislative proposals in this area – while similar in their broad goal of attracting and retaining foreign national STEM advanced degree holders – differ along several dimensions that may or may not be quantitatively important. Take as two examples S. 2384 (the Keep STEM Talent Act of 2023) and Section 80303 of H.R. 4521 (the America COMPETES Act of 2022). S. 2384 required a job offer paying more than the median wage for a given occupation and geographic area, and was limited exclusively to employers with an approved Labor Certification. Section 80303 was limited to STEM graduate degrees earned at universities capable of providing research-intensive training but permitted qualifying degrees from both the U.S. and abroad, and included those with extraordinary ability or working in the national interest, whom Congress has exempted from Labor Certification. The two pieces of legislation also differed in which classes of employment-based green cards (EB1, EB2, EB3) were exempted from statutory limits.
Better understanding the expected effects of these differences in policy design would directly inform policy development efforts, but would also inform various modeling efforts that are required of agencies across the executive and legislative branches. For example, the Congressional Budget Office (CBO) – sometimes in collaboration with the staff of the Joint Committee on Taxation (JCT) – is required to provide information to Congress, and the public, on the expected budgetary and economic effects of such legislative efforts. When CBO modeled (Congressional Budget Office, 2022) the budgetary effects of H.R. 4521, Section 80303, CBO analysts needed to estimate how exempting employment-based green cards from statutory limits for applicants (as well as their accompanying spouse and minor children) who have earned a doctoral or master’s degree in a STEM field at a U.S. research institution or foreign equivalent would affect the number and characteristics of foreign nationals in the U.S. over time (particularly over the 10-year budget window).
To provide a flavor of the work required for such modeling, consider as a publicly available example the recent work of Esche et al. (2023), who developed a population modeling approach for an H.R. 4521, Section 80303-style legislative provision that was shared with the Penn Wharton Budget Model for use in modeling the expected budgetary effects of granting green cards to immigrants with advanced STEM degrees (Penn Wharton, 2024; Elmendorf & Williams, 2024).
At a high level, Esche et al. (2023) attempt to articulate and (roughly) estimate every mechanism through which a policy change to employment-based green card quotas affects the number of foreign nationals in the United States and the composition of the U.S. population by immigration status, education, country-of-origin, gender, and age. The starting point for their work is the recognition that an increase in the number of green cards made available by law does not translate into a one-for-one increase in the number of people in the U.S. Moreover, there is no straightforward way to simply divide newly available green cards between new arrivals and people already in the U.S. Instead, behavioral responses by the foreign-born population must be accounted for, which significantly complicates the picture. For example, the availability of new green cards changes expected wait times and therefore affects an individual’s choices between green cards and temporary visas; between staying in the U.S. and leaving; and whether to come to the U.S. at all. Furthermore, these choices can in turn have cascading effects across the immigration system. For instance, someone who chooses to apply for a green card instead of a temporary visa such as an H-1B may free up a temporary visa slot for another individual who is not eligible for the newly uncapped green card pathway. Taken together, Esche et al. (2023) attempt to catalog an exhaustive list of sixteen mechanisms by which changes to the number of employment-based green cards affect the size and composition of the U.S. population over time.
Esche et al. (2023) then present methods to quantitatively estimate the magnitude of each of these sixteen mechanisms. The methods were intentionally designed to be feasible to implement with currently available public data sources. Esche et al. (2023) then apply the implied estimates to assess the expected population effects of an H.R. 4521, Section 80303-style legislative provision over time. For example, an increase in employment-based green cards reduces the expected wait time for individuals in the green card backlog and shifts the age composition of those receiving green cards. Combining green card backlog modeling from the Congressional Research Service (Congressional Research Service, 2020) and public data on the age of those in the green card backlog, Esche et al. (2023) track how the age composition changes over time as new green cards change the pace at which green cards are awarded. In addition, the backlog wait-time modeling identifies the necessary shifts in timing for when individuals change immigration status under a policy change. Estimated wait times are also combined with recent literature from Kahn & MacGarvie (2020) and Khosla (2018) on green card delays and the stay rates of international students to estimate changes in retention. Esche et al. (2023) also draw on work by Zavodny (2022), who provides tabulations of characteristics of derivative H-4 spouses who would be authorized to work in the U.S., and apply estimates from Carr & Tienda (2013) to estimate expected sponsorship patterns via family-based pathways.[ref 8]
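As a stylized illustration of one of these mechanisms, the effect of additional green card numbers on backlog wait times, the sketch below treats the backlog as a stock drawn down by net annual issuance. The numbers are hypothetical placeholders, not estimates from Esche et al. (2023) or the Congressional Research Service backlog model, and the constant-flow assumption abstracts from the behavioral responses discussed above.

```python
# Stylized backlog wait-time calculation under a constant-flow assumption.
# All numbers are hypothetical placeholders.
def years_to_clear_backlog(backlog: int, annual_visas: int, annual_new_demand: int) -> float:
    """Years until the existing backlog clears, given constant annual flows."""
    net_drawdown = annual_visas - annual_new_demand
    if net_drawdown <= 0:
        return float("inf")  # the backlog never clears; wait times grow without bound
    return backlog / net_drawdown

baseline = years_to_clear_backlog(backlog=900_000, annual_visas=140_000, annual_new_demand=120_000)
with_added_visas = years_to_clear_backlog(backlog=900_000, annual_visas=180_000, annual_new_demand=120_000)
print(f"Baseline: ~{baseline:.0f} years to clear; with added visas: ~{with_added_visas:.0f} years")
```

Even this toy calculation shows why exempting a category from the caps can sharply compress expected waits, which is the channel through which the retention and age-composition effects described above operate in the full model.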
This population modeling by Esche et al. (2023) was shared with the Penn Wharton Budget Model, a nonpartisan, research-based initiative at the Wharton School at the University of Pennsylvania that provides economic analysis of the budgetary impact of proposed policy changes. Penn Wharton in turn applied this work to estimate the expected budgetary effects of granting green cards to immigrants with advanced STEM degrees (Penn Wharton, 2024), providing – essentially – an analogous estimate to CBO’s official cost estimate of the budgetary effects of H.R. 4521, Section 80303.
Of course, many executive and legislative branch offices other than the Congressional Budget Office are required to analyze the expected effects of Section 80303-style proposals. For example, the White House also undertakes efforts to model the expected effects of such policies as an input into work across various components of the Executive Office of the President. Applied modeling work estimating the expected effects of policy counterfactuals – along the lines of the work of Esche et al. (2023) – could thus be useful to a broad set of policy analysts and federal agencies.
Estimating the effects of foreign national STEM advanced degree holders on the U.S. economy
Proposals to increase the number of high-skill immigrants in a country frequently tout the potential for substantial economic benefits via additional labor supply, entrepreneurship, and innovation. The academic literature suggests – in a variety of ways – that immigrants make substantial contributions across commercial, scientific, and other domains. However, the literature offers relatively limited evidence on the expected effects of specific policy changes, and is thus limited in its ability to guide policy development efforts toward the types of policy changes most likely to be effective in achieving a given policy goal.
One frequently discussed policy proposal is to guarantee legal permanent residency for foreign-born STEM PhD students, especially those earning degrees in the U.S. How would such a policy affect the number and characteristics of the foreign-born present in the U.S., and what would the economic effects of this type of policy change be? Raymond and Soltas (in progress) designed a randomized experiment aimed at shedding light on these questions by leveraging experimental variation in “de facto” immigration policy. Their research leverages an unusual policy environment that has emerged from changes in regulatory guidance around the “O-1A” visa for individuals with “extraordinary ability.” As discussed in Section 3, this visa category – once rarely used – now explicitly covers accomplished foreign-born STEM PhD candidates at U.S. universities.
However, that USCIS guidance has not yet widely diffused, and take-up of O-1A visas is, unsurprisingly, low. Raymond and Soltas’s experiment will make the O-1A guidance salient to a random subset of eligible candidates through an encouragement design. Leveraging this variation, Raymond and Soltas will then be able to track researchers’ outcomes longitudinally and across a range of potential impacts: where they live, where they work, entrepreneurship, academic research, and patenting.
Conclusions
Economic research has the opportunity to lay the groundwork for fact-based and evidence-based policy debates over critical policy questions, such as how best to encourage innovation and economic growth. Economic researchers have made critical contributions to understanding many key aspects of the economics of immigration – such as the self-selection of immigrants, the economic impacts of immigrants on natives, and the effects of specific immigration policies such as the H-1B visa lottery. However, economists and economic research have been less attentive to the types of high-skilled immigration policy changes that have been pursued in recent years by the U.S. through executive branch and legislative decisions. Drawing on experience with a number of high-skilled immigration policy development efforts, this paper has attempted to highlight areas where investments in generating additional economic data and research would be invaluable in informing more evidence-based policy discussions in the coming years.
References
Advanced STEM Degrees. 2022. https://amendments-rules.house.gov/amendments/LOFGRE_036_xml220705125918927.pdf.
American Civil Liberties Union. 2020. Discriminatory Bans and 212(f) Authority. https://usa.ipums.org/usa-action/variables/group.
Anderson, Stuart. 2022. What Happened To The Bills On Employment-Based Immigration? Forbes. https://www.forbes.com/sites/stuartanderson/2022/08/22/what-happened-to-the-bills-on-employment-based-immigration/?sh=5e75032c5658.
Arnold, Zachary. 2019. Immigration Policy and the U.S. AI Sector. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/immigration-policy-and-the-u-s-ai-sector/.
Beine, Michel, Peri, Giovanni, & Raux, Morgan. 2022 (September). International College Students’ Impact on the US Skilled Labor Supply. Working Paper 30431. National Bureau of Economic Research.
Bier, David. 2023. Why Legal Immigration Is Impossible for Nearly Everyone. Cato Institute. https://www.cato.org/blog/why-legal-immigration-nearly-impossible.
Borjas, George J. 1987. Self-Selection and the Earnings of Immigrants. The American Economic Review, 77(4), 531–553.
Brannon, Ike, & McGee, M. Kevin. 2019a. Hurting Americans in Order to Hurt Foreigners. Cato Institute: Regulation. https://www.cato.org/sites/cato.org/files/serials/files/regulation/2019/3/regulation-v42n1-2.pdf.
Brannon, Ike, & McGee, M. Kevin. 2019b. Repealing H-4 Visa Work Authorization: A Cost-Benefit Analysis. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3349786.
Card, David. 1990. The Impact of the Mariel Boatlift on the Miami Labor Market. Industrial and Labor Relations Review, 43(2), 245–257.
Carr, Stacie, & Tienda, Marta. 2013. Family Sponsorship and Late-Age Immigration in Aging America: Revised and Expanded Estimates of Chained Migration. Population Research and Policy Review, 32(6).
Miles, Wilson, Chase, Jordan, & Neufeld, Jeremy. forthcoming. Strengthening the Defense Industrial Base with International STEM Talent. National Defense Industrial Association.
Chiswick, Barry R. 1978. The Effect of Americanization on the Earnings of Foreign-born Men. Journal of Political Economy, 86(5), 897–921.
Congressional Budget Office. 2022. Estimated Budgetary Effects of H.R. 4521, the America COMPETES Act of 2022. https://www.cbo.gov/publication/57898.
Congressional Research Service. 2019. Foreign STEM Students in the United States. https://crsreports.congress.gov/product/pdf/IF/IF11347.
Congressional Research Service. 2020. The Employment-Based Immigration Backlog. https://crsreports.congress.gov/product/pdf/R/R46291.
Congressional Research Service. 2023. The U.S. Defense Industrial Base: Background and Issues for Congress. https://crsreports.congress.gov/product/pdf/R/R47751.
Corrigan, Jack, Dunham, James, & Zwetsloot, Remco. 2022. The Long-Term Stay Rates of International STEM PhD Graduates. Center for Security and Emerging Technology. https://cset.georgetown.edu/wp-content/uploads/CSET-The-Long-Term-Stay-Rates-of-International-STEM-PhD-Graduates.pdf.
Department of Homeland Security. 2022. Update to the Department of Homeland Security STEM Designated Degree Program List. Federal Register, 87(14). https://www.govinfo.gov/content/pkg/FR-2022-01-21/pdf/2022-01188.pdf.
Department of Homeland Security. 2023. Update to the Department of Homeland Security STEM Designated Degree Program List. Federal Register, 88(132). https://www.govinfo.gov/content/pkg/FR-2023-07-12/pdf/2023-14807.pdf.
DoD Scientists and Experts. 2022. Admission of essential scientists and other experts to enhance the technological superiority of the United States. https://ogc.osd.mil/Portals/99/OLC%20FY%202023%20Proposals/31May2022Proposals.pdf?ver=FHf9Fflw-lv9qpFz38-hNQ%3d%3d&timestamp=1654172693619.
Doran, Kirk, Gelber, Alexander, & Isen, Adam. 2022. The Effects of High-Skilled Immigration Policy on Firms: Evidence from Visa Lotteries. Journal of Political Economy, 130(10), 2501–2533.
Elmendorf, Douglas, & Williams, Heidi. 2024. How Does Accounting for Population Change Affect Estimates of the Effect of Immigration Policies on the Federal Budget? Penn Wharton at the University of Pennsylvania. https://budgetmodel.wharton.upenn.edu/issues/2024/1/18/population-change-effect-immigration-policies-on-federal-budget.
Emerg. Supplemental Assist. to Ukraine. 2022. https://www.whitehouse.gov/wp-content/uploads/2022/04/FY_2022_emergency_supplemental_assistance-to-ukraine_4.28.2022.pdf.
Esche, Matthew, Neufeld, Jeremy, & Williams, Heidi. 2023. Estimating counterfactual population and status. https://drive.google.com/file/d/1HNcVldPV3r3Qtd0iYaQpsRauUsNOYVuW/view.
Executive Office of the President. 2020. Suspension of Entry as Nonimmigrants of Certain Students and Researchers From the People’s Republic of China. Presidential Proclamation 10052. https://www.govinfo.gov/content/pkg/FR-2020-06-25/pdf/2020-13888.pdf.
Executive Office of the President. 2024. Office of Information and Regulatory Affairs Executive Order Submissions Under Review. https://www.reginfo.gov/public/do/eoReviewSearch.
Executive Office of the President: Office of Management and Budget. 2023. Multi-Agency Research and Development Priorities for the FY 2025 Budget. https://www.whitehouse.gov/wp-content/uploads/2023/08/FY2025-OMB-OSTP-RD-Budget-Priorities-Memo.pdf.
Executive Order of October 30. 2023. Executive Order 14110. Federal Register: Presidential Documents, 88(210). https://www.govinfo.gov/content/pkg/FR-2023-11-01/pdf/2023-24283.pdf.
FORTRESS Act. 2024. https://amendments-rules.house.gov/amendments/WITTMA_110_xml240531112750060.pdf.
Future of Defense Task Force. 2020. Future of Defense Task Force Report 2020. House of Representatives Armed Services Committee. https://houlahan.house.gov/uploadedfiles/future-of-defense-task-force-final-report-2020.pdf.
Gilmer, Ellen M. 2022a. Immigration Measure for STEM Workers Adrift After Defense Flop. Bloomberg Government. https://news.bloomberglaw.com/daily-labor-report/immigration-measure-for-stem-workers-adrift-after-defense-flop.
Gilmer, Ellen M. 2022b. STEM Immigration Pathway Gets Fresh Life in Defense Proposal. Bloomberg Government. https://about.bgov.com/news/stem-immigration-pathway-gets-fresh-life-in-defense-proposal/.
Goldschlag, Nathan, & Perlman, Elisabeth. 2017. Business Dynamic Statistics of Innovative Firms. Census Working Papers. https://www.census.gov/library/working-papers/2017/adrm/ces-wp-17-72.html.
House Budget Committee. 2020. Machines, Artificial Intelligence, and the Workforce: Recovering and Readying Our Economy for the Future. Hearing Before the Committee on the Budget House of Representatives One Hundred Sixteenth Congress Second Session. https://www.govinfo.gov/content/pkg/CHRG-116hhrg42322/html/CHRG-116hhrg42322.htm.
House China Task Force. 2020. China Task Force Report. House of Representatives One Hundred Sixteenth Congress Second Session. https://foreignaffairs.house.gov/wp-content/uploads/2020/11/China-Task-Force-Final-Report-11.6.20.pdf.
H.R. 4350. 2021. National Defense Authorization Act for Fiscal Year 2022. https://www.congress.gov/117/bills/hr4350/BILLS-117hr4350pcs.pdf.
H.R. 4521. 2022. America COMPETES Act of 2022. https://www.congress.gov/117/bills/hr4521/BILLS-117hr4521eh.pdf.
H.R. 6395. 2020. William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021. https://www.congress.gov/116/bills/hr6395/BILLS-116hr6395eh.pdf.
H.R. 7256. 2020. National Security Innovation Pathway Act. https://www.congress.gov/116/bills/hr7256/BILLS-116hr7256ih.pdf.
Hunt, Will, & Zwetsloot, Remco. 2020. The Chipmakers: U.S. Strengths and Priorities for the High-End Semiconductor Workforce. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/the-chipmakers-u-s-strengths-and-priorities-for-the-high-end-semiconductor-workforce/.
Kaggle. 2019. Kaggle’s State of Data Science and Machine Learning 2019: Enterprise Executive Summary. https://www.scribd.com/document/486155867/Kaggle-State-of-Data-Science-and-Machine-Learning-2019.
Kahn, Shulamit, & MacGarvie, Megan. 2020. The impact of permanent residency delays for STEM PhDs: Who leaves and why. Research Policy, 49(9), 103879. STEM migration, research, and innovation.
Kaur, Maninder, & Venegas-Gomez, Araceli. 2022. Defining the quantum workforce landscape: a review of global quantum education initiatives. Optical Engineering, 61(8), 081806.
Khosla, Pooja. 2018. Wait time for permanent residency and the retention of immigrant doctoral recipients in the U.S. Economic Analysis and Policy, 57, 33–43.
Mahajan, Parag, Morales, Nicolas, Shih, Kevin, Chen, Mingyu, & Brinatti, Agostina. 2024. The Impact of Immigration on Firms and Workers: Insights from the H-1B Lottery. https://agostinabrinatti.github.io/AgostinaBrinatti_Website/Brinatti_H1B_lotteries_Census.pdf.
Mervis, Jeffrey. 2023. New U.S. immigration rules spur more visa approvals for STEM workers. Science. https://www.science.org/content/article/new-u-s-immigration-rules-spur-more-visa-approvals-stem-workers.
Nagaraj, Abhishek, & Tranchero, Matteo. 2023. How Does Data Access Shape Science? Evidence from the Impact of U.S. Census’s Research Data Centers on Economics Research. NBER Working Papers. https://www.nber.org/papers/w31372.
National Foundation for American Policy. 2019. H-1B Denial Rates: Past and Present. NFAP Policy Brief. https://nfap.com/wp-content/uploads/2019/04/H-1B-Denial-Rates-Past-and-Present.NFAP-Policy-Brief.April-2019.pdf.
National Science and Technology Council. 2016a. A 21st Century Science, Technology, and Innovation Strategy for America’s National Security. Committee on Homeland and National Security. https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/NSTC/national_security_s_and_t_strategy.pdf.
National Science and Technology Council. 2016b. The National Artificial Intelligence Research and Development Strategic Plan. Networking and Information Technology Research and Development Subcommittee. https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf.
National Science and Technology Council. 2024. Critical and Emerging Technologies List Update. Fast Track Action Subcommittee on Critical and Emerging Technologies. https://www.whitehouse.gov/wp-content/uploads/2024/02/Critical-and-Emerging-Technologies-List-2024-Update.pdf.
National Science Board. 2015. Revisiting the STEM Workforce: A Companion to Science and Engineering Indicators 2014. https://www.nsf.gov/pubs/2015/nsb201510/nsb201510.pdf.
National Science Board. 2022. International STEM Talent is Crucial for a Robust U.S. Economy. https://www.nsf.gov/nsb/sei/one-pagers/NSB-International-STEM-Talent-2022.pdf.
National Science Foundation. 2023. 2021 Graduate Enrollment in Science, Engineering, and Health Fields at All-Time High as Postdocs Continue to Decline. National Center for Science and Engineering Statistics. https://ncses.nsf.gov/pubs/nsf23311/table/4.
Neufeld, Jeremy. 2022. STEM Immigration Is Critical to American National Security. The Institute for Progress. https://ifp.org/stem-immigration-is-critical-to-american-national-security/.
Olszewski, Thomas D., Sabatini, John E., Kirk, Hannah L., Hazan, Gabriella G., & Liu, Irina. 2024. Characterizing the Loss of Talent From the U.S. STEM Ecosystem. Institute for Defense Analyses: Science and Technology Policy Institute. https://www.ida.org/-/media/feature/publications/C/Ch/Characterizing-the-Loss-of-Talent-From-the-US-STEM-Ecosystem/Product-3001891.pdf.
O’Brien, Connor, & Ozimek, Adam. 2024. Foreign-born skilled workers play a critical role in strategically significant industries. Economic Innovation Group. https://eig.org/hsi-in-strategic-industries/.
Penn Wharton. 2024. Budgetary effects of granting green cards to immigrants with advanced STEM degrees. University of Pennsylvania. https://budgetmodel.wharton.upenn.edu/issues/2024/1/18/budgetary-effects-of-stem-green-cards.
Rampell, Catherine. 2022. On STEM, give Biden credit for his efforts to repair the national reputation that Trump trashed. The Washington Post: Opinion. https://www.washingtonpost.com/opinions/2022/01/24/stem-give-biden-credit-his-efforts-repair-national-reputation-that-trump-trashed/.
S. 2384. 2023. Keep STEM Talent Act of 2023. https://www.congress.gov/118/bills/s2384/BILLS-118s2384is.pdf.
Senate Armed Services Committee. 2020. Fiscal Year 2021 National Defense Authorization Act. https://www.armed-services.senate.gov/imo/media/doc/FY%2021%20NDAA%20Summary.pdf.
Snyder, Alison, & Allen-Ebrahimian, Bethany. 2022. Exclusive: Congress urged to ease immigration for foreign science talent. Axios. https://www.axios.com/2022/05/09/national-security-china-international-science-tech-talent.
Snyder, Alison, & Cai, Sophia. 2023. Experts push Congress for more high skilled immigrants to compete with China. Axios: Politics and Policy. https://www.axios.com/2023/05/15/science-tech-stem-china-immigration.
Taiwan Semiconductor Manufacturing Company. 2020. Annual Report. https://investor.tsmc.com/static/annualReports/2020/english/index.html.
The National Security Commission on Artificial Intelligence. 2021. Final Report. https://cybercemetery.unt.edu/nscai/20211005220330/https://www.nscai.gov/.
US Census Bureau. 2024. Research Data Centers. Federal Statistical Research Data Centers. https://www.census.gov/about/adrm/fsrdc/locations.html.
US Citizenship and Immigration Services. 2022a. Policy Manual, Volume 2, Chapter 4 – O-1 Beneficiaries. https://www.uscis.gov/policy-manual/volume-2-part-m-chapter-4.
US Citizenship and Immigration Services. 2022b. Policy Manual, Volume 6, Chapter 5 – Advanced Degree or Exceptional Ability. https://www.uscis.gov/policy-manual/volume-6-part-f-chapter-5.
US Citizenship and Immigration Services. 2023. Students and Exchange Visitors. https://www.uscis.gov/working-in-the-united-states/students-and-exchange-visitors/.
US Citizenship and Immigration Services. 2024a. Policy Manual, Volume 6, Chapter 7 – Schedule A Designation Petitions. https://www.uscis.gov/policy-manual/volume-6-part-e-chapter-7.
US Citizenship and Immigration Services. 2024b. STEM-Related Petition Trends: EB-2 and O-1A Categories FY 2018 – FY 2023. https://www.uscis.gov/sites/default/files/document/reports/stem_related_petition_trends_eb2_and_o1a_categories_factsheet_fy23.pdf.
US Department of Defense. 2022. 2022 National Defense Strategy of the United States. https://media.defense.gov/2022/Oct/27/2003103845/-1/-1/1/2022-NATIONAL-DEFENSE-STRATEGY-NPR-MDR.PDF.
US Department of Homeland Security. 2023a. DHS STEM Designated Degree Program List. https://www.ice.gov/doclib/sevis/pdf/stemList2023.pdf.
US Department of Homeland Security. 2023b. Modernizing H–1B Requirements, Providing Flexibility in the F–1 Program, and Program Improvements Affecting Other Nonimmigrant Workers. Federal Register, 88(203). https://www.govinfo.gov/content/pkg/FR-2023-10-23/pdf/2023-23381.pdf.
US Department of Labor. 2023. Labor Certification for Permanent Employment of Foreign Workers in the United States; Modernizing Schedule A To Include Consideration of Additional Occupations in Science, Technology, Engineering, and Mathematics (STEM) and Non-STEM Occupations. Federal Register, 88(244). https://www.govinfo.gov/content/pkg/FR-2023-12-21/pdf/2023-27938.pdf.
US Department of State. 2009. 2009 Revised Exchange Visitor Skills List. Federal Register. https://www.federalregister.gov/documents/2009/04/30/E9-9657/2009-revised-exchange-visitor-skills-list.
US Department of State. 2022. Early Career STEM Research Initiative: FAQs. Bridge USA Programs. https://j1visa.state.gov/programs/early-career-stem-research-initiative/#faqs.
US House of Representatives. 2023. Reset, Prevent, Build: A Strategy to Win America’s Economic Competition with the Chinese Communist Party. House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party. https://selectcommitteeontheccp.house.gov/sites/evo-subsites/selectcommitteeontheccp.house.gov/files/evo-media-document/reset-prevent-build-scc-report.pdf.
White House. 2021. Remarks by National Security Advisor Jake Sullivan at the National Security Commission on Artificial Intelligence Global Emerging Technology Summit. Briefing Room: Statements and Releases. https://www.whitehouse.gov/nsc/briefing-room/2021/07/13/remarks-by-national-security-advisor-jake-sullivan-at-the-national-security-commission-on-artificial
White House. 2022. FACT SHEET: Biden-Harris Administration Actions to Attract STEM Talent and Strengthen our Economy and Competitiveness. Briefing Room: Statements and Releases. https://www.whitehouse.gov/briefing-room/statements-releases/2022/01/21/fact-sheet-biden-harris-administration-actions-to-attract-stem-talent-and-strengthen-our-economy-and
Zavodny, Madeline. 2022. Title: H-4 Visa Holders: An Underutilized Source of Skilled Workers. National Foundation for American Policy. https://nfap.com/wp-content/uploads/2022/11/ H-4-Visa-Holders.NFAP-Policy-Brief.2022-2.pdf.
Zwetsloot, Remco, Feldgoise, Jacob, & Dunham, James. 2020. Trends in U.S. Intention-to-Stay Rates of International Ph.D. Graduates Across Nationality and STEM Fields. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/trends-in-u-s-intention-to-stay-rates-of-international-ph-d-graduates-across-nationality-and-stem-fi
Zwetsloot, Remco, Corrigan, Jack, Weinstein, Emily, Peterson, Dahlia, Gehlhaus, Diana, & Fedasiuk, Ryan. 2021. China is Fast Outpacing U.S. STEM PhD Growth. Center for Security and Emerging Technology. https://cset.georgetown.edu/wp-content/uploads/China-is-Fast-Outpacing-U.S.-STEM-PhD-Growth.pdf.
Tables
Table 1: Selected actions by departments and agencies targeting foreign national advanced STEM degree holders
Agency Policy | Action | Benefits | Prospects & Limitations |
---|---|---|---|
STEM OPT DHS guidance STEM OPT Degree List Update 2022 STEM OPT Degree List Update 2023 | Update the Designated Degree Program List for post-completion STEM Optional Practical Training adding 22 fields in January 2022 and 8 fields in July 2023, to reflect new, largely multidisciplinary fields of study, expanding the STEM fields in which international students may remain in the US and work after earning a U.S. STEM degree. | Optional Practical Training (OPT) for STEM grads allows up to three years of employment in the US after graduation. The annual nominations process will allow DHS to keep the degree list current for STEM OPT. | DHS Student and Exchange Visitor Program (SEVP) is fast approaching a modernized degree list for STEM OPT, absent future changes by the National Center for Education Statistics (NCES) adding new fields to or otherwise revising the CIP (Classification of Instructional Programs). |
O-1A DHS guidance O-1A Policy Manual Guidance and Appendix, 2022 | January 2022 USCIS Policy Manual update that, for the first time since the O-1A category was created by Congress in 1990, provides written guidance as to how STEM PhDs may qualify, by updating the USCIS Policy Manual, including an Appendix table, to clarify for both agency adjudicators and stakeholders how USCIS evaluates evidence to determine eligibility for O-1A nonimmigrants of extraordinary ability. | The O-1A nonimmigrant visa category for extraordinary ability is uncapped, without any per country limits, with no maximum period of stay. | Even after new policy guidance, O-1A uptake for STEM activities represents only about 10% of foreign-born STEM PhDs in the US earning doctorates and completing post-doctoral fellowships, which suggests it remains underused. (Each year in the US there are just under 14,000 international students earning a PhD and around 35,000 international STEM PhD holders participating in a postdoc, while FY23 data show 4,560 O-1A petitions approved for STEM activities.) |
NIW DHS guidance NIW Policy Manual Guidance, 2022 | January 2022 USCIS Policy Manual update that, for the first time since the National Interest Waiver category for green card eligibility was created by Congress in 1990, provides written guidance on how STEM Masters or PhDs may qualify for green card eligibility if their work is of substantial merit and in the national interest, by updating the USCIS Policy Manual to address requests for national interest waivers for advanced STEM degree professionals, providing some objective criteria for when work is typically in the national interest, such as when a noncitizen is working in a critical and emerging technology field or an endeavor tied to the annual R&D priorities identified by the OSTP and OMB. | Individuals approved for NIW classification for Employment-Based Second Preference advanced degree immigrants are largely self-petitioned and not tied to a sponsoring employer for their permanent residency process, and are the beneficiaries of a more certain and timely process to secure eligibility confirmation from DHS. | Only Congress can create more immigrant visa numbers for green card status. Thus, even with NIW approval as an individual making contributions to an endeavor in the national interest, such as critical and emerging technologies, one cannot obtain final lawful permanent resident status any faster than congressionally mandated worldwide limits and per country caps provide. |
J-1 Researcher DOS guidance STEM Research Initiative, 2022 | Utilize existing State Department regulations governing exchange programs for researchers and scholars, to allow entities designated by State, including universities as well as nonprofits, to sponsor foreign researchers to be employed in private industry STEM R&D, including technology ventures spun off by universities to commercialize technology. The STEM Initiative explains that foreign-born STEM experts, at all academic levels, may be in the US to conduct and participate in STEM R&D efforts, hosted by industry on J-1 visas, including STEM post-docs who do not need to be solely on campus. | J-1 visas for researchers carry a 5-year validity period, without a congressionally established numerical limit or per country caps. Significant numbers of foreign-born STEM Masters and PhDs could be hosted by companies, adding a global perspective to R&D teams at US firms. This is relevant given that about 90% of experimental STEM development in the US, and approaching 60% of US applied STEM research, is funded and performed by companies. | While the goals of the J-1 exchange visitor program to promote the exchange of ideas fit nicely with the nature of scientific inquiry, exchange visitors are required to intend to return home and many individual J-1 visa holders are subject to a 2-year home residency requirement based on the Skills List, including almost all scientists, technologists, and engineers from India and China. |
H-1B research cap exemptions DHS regulation H-1B Modernization Notice of Proposed Rule Making, 2023 (at p. 72883-86 and 72962-63 of NPRM) – in process | EO 14110, at Section 5.1(d), requires the Department of Homeland Security to continue its rulemaking process to modernize the H-1B program and enhance its integrity and usage. | The NPRM includes a proposal to clarify that whenever research is a fundamental activity of a nonprofit, that organization might qualify as an H-1B cap-exempt entity, and that whenever industry partners with nonprofit or university research and an H-1B professional employee of a company spends at least 50% of her time on that collaborative effort, that individual might be cap exempt. | Final rule expected later in calendar year 2024. |
J-1 Exchange Visitor Skills List DOS regulation Final rule at OIRA for review, 2024 – in process | EO 14110, at Section 5.1(b), requires the Department of State to consider rulemaking establishing new criteria to designate countries and skills on the Exchange Visitor Skills List as it relates to the 2-year foreign residence requirement, including those skills that are critical to the US, and consider publishing updates to the 2009 Skills List. | The Skills List applies when DOS finds that skills being developed in the US by a J-1 visa holder are “clearly required” for the development of the J-1 visa holder’s home country. Currently 82 countries have chosen to participate in the Skills List. A revised Skills List methodology might allow more STEM experts from more countries to follow the science, technology, or engineering wherever it takes them. | Final rule on Skills List methodology expected spring 2024, with updated Skills List expected to follow. |
Schedule A DOL regulation Request for Information, 2023 – in process | EO 14110, at Section 5.1(e), requires the Department of Labor, for purposes of considering updates to the so-called Schedule A list of occupations, to publish a Request for Information to identify AI and other STEM-related occupations for which there is an insufficient number of ready, willing, able, and qualified US workers. | A modernized Schedule A utilizing a self-executing, data-based methodology to identify types of employment for which there is relative scarcity in the US would allow a streamlined permanent residency process for those noncitizens working in those occupations, and would help the US understand educational or skills gaps to improve training and education for the domestic workforce. | RFI closes May 2024; unclear what DOL will do next. |
Table 2: Comparison of recent legislative efforts in the 116th, 117th, 118th congresses targeting foreign national advanced STEM degree holders
Leg Proposal | Which STEM Experts | Guardrails | Numbers * | Results |
---|---|---|---|---|
Standalone bill needs vehicle H.R. 7256 116th National Security Innovation Pathway Act | Employed in US industry or academia in research that would promote and protect the national security innovation base, or in basic or applied DoD-funded research in academia; or possesses expertise that will advance critical industries as identified pursuant to National Defense Strategy or the National Defense Science and Technology Strategy (NDAA19). | Knowing that NDAA21 was going to take numerous steps to reshape the defense industrial base as a national security innovation base, the National Security Innovation Pathway Act was a bipartisan effort by HASC to acknowledge that such a shift required top talent, including international STEM experts, making contributions to industrial capacity or critical industry innovation. | 101(a)(27) special immigrants starting with a cap of 100 principals annually and rising by 100 each year to 500 for 5th FY and beyond, with separate exemption from per country caps | Bipartisan bill by Reps. Langevin and Stefanik, Chair and Ranking Member on HASC subcommittee. No action on bill as introduced. |
NDAA amendment H.R. 4350 117th – Sec. 6446, H.R.6395 116th – Sec. 281, National Security Innovation Pathway Act for essential scientists and technologists was revised to become Langevin-Stefanik 2020 Amendment to NDAA21 and Langevin 2021 Amendment to NDAA22 | Contributing to the national security innovation base by working on DoD-funded basic or applied research projects at universities or possessing expertise that will advance development of critical technologies identified by DoD. | DoD to develop competitive process to identify qualifying individuals who are “essential” to advancing critical technologies or otherwise serve national security interests, and DHS to develop petitioning process. | 101(a)(27) special immigrants with cap of 10 principals annually and rising to 100 after 10th FY, with separate exemption from per country caps | Bipartisan amendment in HASC for FY21 NDAA (had to be limited to 10 principals to be budget neutral), passed the House September 2020, dropped in Conference before NDAA21 enactment. Amendment for FY22 NDAA (also limited to 10 principals), passed the House July 2021, dropped in Conference before NDAA22 enactment. |
Chips Act H.R. 4521 117th – Sec. 80303, Lofgren-driven provision in the House version of the tech/semiconductor competition legislation to modify immigration law concerning international advanced STEM degree holders | STEM Masters or PhD awarded by research universities in U.S. or abroad, in specified areas of study to also include medical residencies and fellowships (by CIP code). | Must have approved EB1 or EB2 petition under current law, reserved for advanced degree professionals including outstanding researchers or professors or those working in endeavors with substantial merit in the national interest. Issuing institution must offer research-intensive education as evidenced by at least $25M annual R&D investment with special provisions for MSIs or HBCUs. If STEM Masters, must work in critical industry. | Exempt from worldwide limits and per country caps by revision to 201(b)(1) | Passed the House February 2022 as part of America COMPETES Act, dropped in Conference before Chips and Science Act enactment August 2022. |
NDAA administration ask DoD Scientists and Experts 117th DoD’s ask in 2022 for NDAA23, to secure the admission of essential scientists and other experts to enhance the technological superiority of the United States | Masters, PhD, professional degree, or graduate fellowship from U.S. university that entailed research in a field important to national security, or employed or offered job in such a field, or founded a U.S. company contributing to such a field. | DoD or other national security agencies confirm which fields, research, or contributions would advance national security. | Exempt from worldwide limits and per country caps by revision to 201(b)(1), up to cap of 200 principals annually | Administration ask from OMB to Congress May 2022 on NDAA for FY23 after review by the interagency of DoD’s proposal. Never voted on in either House or Senate. |
NDAA amendment Advanced STEM degrees 117th Lofgren amendment to include revised version of Section 80303 from America COMPETES | STEM PhD awarded by research universities in U.S. or abroad, in field relevant to critical industry or a critical and emerging technology, with fields list as identified by the interagency in developing general provisions for Russian scientists in the President’s Emergency Supplemental Assistance to Ukraine package sent to the Hill April 2022 (see p. 33) | Must have approved EB1 or EB2 petition under current law, reserved for advanced degree professionals including outstanding researchers or professors or those working in endeavors with substantial merit in the national interest. Issuing institution must offer research-intensive education as evidenced by at least $25M annual R&D investment with special provisions for MSIs or HBCUs. Field limitation for degree and work tied to national security. | Exempt from worldwide limits and per country caps by revision to 201(b)(1) | Bipartisan amendment found not in order by House Rules Committee for floor action on NDAA for FY23 because not budget neutral (and Ways & Means rejected filing fees to cover costs). Never voted on. |
Standalone bill needs vehicle, S.2384 118th Keep STEM Talent Act | STEM Masters or PhD from any U.S. university, in traditional STEM disciplines (by CIP code). | Must have approved Permanent Employment Certification from DOL (excludes EB2 working in the national interest and all EB1). Must receive salary in excess of occupational median (excludes many early career STEM experts). | Exempt from worldwide limits and per country caps by revision to 201(b)(1) | Bill has been introduced in the 116th, 117th, and 118th by Senator Durbin, with companion House bills, bipartisan in 118th with Sen. Rounds. Never voted on in either House or Senate. |
NDAA amendment Defense researchers 118th Amendment adding the “Fortifying Our Research Through Rigorous Evaluation and Scholar Screening Act” or the “FORTRESS Act”. | STEM PhD or six years of employment related to a field “critical to national security,” with fields listed similar to the proposal on Russian scientists in the President’s Emergency Supplemental Assistance to Ukraine package sent to the Hill April 2022 (see p. 33). | Must satisfy new screening and vetting requirements. Must be citizen of a FVEY, QUAD, or NATO country. Must have certification from DoD, Commerce, Energy, DNI, or NASA that employment is on a project funded or overseen by the agency, or show individual’s work in academia or industry is in a field critical to national security. | Exempt from worldwide limits and per country caps up to a limit of 5,000 principals annually. | Amendment for armed services committee markup of the FY25 NDAA. |
How NEPA Will Tax Clean Energy
Introduction
Will the National Environmental Policy Act (NEPA) hinder the clean energy revolution? Data on this question is sparse, which has led to significant disagreements about the answer. To understand how NEPA affects clean energy, we must first uncover NEPA’s so-called “dark matter” effects — the law’s downstream costs that distort markets, create uncertainty for developers, and undermine state capacity. By investigating these costs, this paper adds important missing context to the limited data on NEPA.[ref 1] [ref 2] A holistic evaluation of the available data supports a concrete conclusion: As we transition our energy system, NEPA is likely to create larger drags on clean energy than on fossil fuels. Therefore, reforming NEPA to accelerate the development of energy infrastructure will disproportionately benefit clean energy.
This piece is organized into three sections. The first takes stock of the true costs of NEPA by identifying the dark matter of NEPA. Specifically, it assesses how NEPA creates uncertainty for developers and weakens agency capacity. The second section looks at the available evidence for how the law affects clean vs. fossil energy production. The final section considers the potential for reforming NEPA, and makes the case that clean energy stands to benefit more than fossil fuels.
The real costs of NEPA
The National Environmental Policy Act (NEPA) draws headlines for the delays to projects it causes and for the lengthy page counts of its documents. But NEPA’s true costs are more opaque: The environmental review process creates downstream burdens that distort markets, overburden state capacity, and leave an invisible graveyard of infrastructure projects. These effects are difficult to quantify. Indeed, some have referred to them as the dark matter of environmental law — effects we know exist, but have little data for.[ref 3] Accounting for these effects is crucial for informing future policy reforms. Defenses of NEPA often rely on glossing over the law’s downstream costs. But if we take a closer look, we can see how these “dark matter” effects harm clean energy compared to fossil fuels.
NEPA imposes two main forms of “dark matter” costs: (1) the process creates uncertainty for developers, leading to an invisible graveyard of projects that were never built; and (2) the process burdens federal agencies, wasting agency capacity, undermining agency authority, and preventing needed government action.
NEPA requires the government to conduct a detailed review of the environmental impacts of all major federal actions.[ref 4] However, what counts as sufficiently detailed is highly ambiguous and left up to case law to define.[ref 5] This creates an enormous vulnerability — the Administrative Procedure Act allows plaintiffs to challenge any NEPA review on the grounds that its documents were not sufficiently detailed.[ref 6] If plaintiffs succeed in showing the review was arbitrary or capricious in any way, the entire approval is upended. Project opponents have exploited this vulnerability, wielding procedural rules to challenge agencies and delay projects.
Uncertainty for developers
The NEPA process ties a project’s future, and its developer’s financial success, to a process that is vulnerable bureaucratically, politically, and legally.
These uncertainties create significant tail risk — the potential for extremely bad outcomes. Project developers must consider the possibility they might end up in a nightmare scenario, where permitting delays drag on for 10+ years, or where their project suffers through endless rounds of litigation before being finally canceled. Companies can rack up enormous costs and potentially lose investments in these scenarios. For developers, this tail risk makes it far more difficult to commit to developing projects with federal support that would trigger NEPA. Because any given project may hit a perfect storm of litigation and delays, the number of projects that can be developed is far lower than it would be otherwise. The result is an “invisible graveyard” of projects which are never developed in the first place.
Timeline uncertainty
Developers can’t know ahead of time how long a NEPA review will take. The median environmental impact statement, or EIS — the longest type of NEPA review — takes 3.5 years, but the average is 4.5 years. A quarter of EISs take more than 6 years.[ref 7] This long tail of NEPA delays affects developer and investor calculations: Developers cannot confidently predict when they will need to raise financing, and investors cannot know when their investments will begin making a return. Longer wait times also make it harder to account for the uncertainty associated with important market factors like demand, the cost of capital, or supply chains.[ref 8]
Political uncertainty
NEPA reviews create an avenue for political interference. While there is no official authority to deny projects under NEPA, political appointees can cause years of delay by shelving the review, requesting further study, or otherwise leaving the process in bureaucratic limbo.[ref 9] For example, New York City’s congestion pricing plan was held up by the Trump administration under the pretense that the Federal Highway Administration hadn’t decided whether the project needed an environmental assessment (EA) or EIS.[ref 10]
Litigation uncertainty
Litigation is a major pain point for developers. Even frivolous lawsuits are costly to defend, and NEPA lawsuits often happen during construction, halting projects for months or years. To make matters worse, lawsuits can happen at any time and there is no limit on the number of lawsuits that can be brought. Lawsuits can interrupt construction via preliminary injunctions — stays on construction while a court date is set — or by getting a project’s NEPA approval vacated. If this happens, the review is sent back to the federal agency to conduct supplemental review. This process can take several more years, creating another round of uncertainty where agencies and political appointees can drag their feet.[ref 11]
A perfect storm of tail risk
In the worst cases, all three forms of uncertainty come together in one project. For instance, the offshore wind project Cape Wind took eight years (2001-2009) to permit.[ref 12] Political opposition to the project exacerbated the timeline uncertainty of the NEPA process. For example, the late Senator Ted Kennedy, whose beachfront property overlooked the project site, declared that the project had done “insufficient environmental review.” He urged the project to wait while further reviews were completed — a classic obstruction tactic that likely contributed to the project’s exceptionally long review timeline.[ref 13] Then, once Cape Wind was finally permitted, project opponents, backed by wealthy NIMBY beachfront property owners, abused environmental permitting through litigation in a cynical strategy to further “delay, delay, delay.” Even though the developer won 31 out of 32 lawsuits, the prolonged litigation accomplished its true goal of inflicting a financially painful delay.
After six years of legal obstruction (2009-2015), Cape Wind was canceled when investors burned out and regional utilities pulled their power purchase agreements.[ref 14] The project developer lost $100 million of his own money and was left with nothing to show for 16 years of work. NEPA’s defenders have been quick to point out that outcomes like Cape Wind are not the most common result of the environmental review process. But they miss that developers and investors can’t know in advance whether they’ll experience a Cape Wind-style nightmare. Their calculations are affected accordingly.
Headlines focus on projects stuck in long reviews and litigation. But the bigger cost is the invisible graveyard of projects that never get built. Paying close attention to the cost of the invisible graveyard is especially important for the clean energy transition; if we move forward without reform, the invisible graveyard may consume many important projects that are needed to meet our decarbonization targets.
The largest effect of the invisible graveyard is that developers never even apply for a permit for projects, because they don’t see a path through the NEPA process or wish to avoid it entirely. Similarly, projects can be watered down by NEPA — either directly, by the need to duck litigation, or indirectly, by forcing developers to avoid triggers for NEPA review, like building on federal lands.[ref 15] But since these decisions are made behind closed doors, it is difficult to measure how many projects are prevented, and to what degree the NEPA process caused their cancellation.
What we do have are examples of projects that died or were heavily delayed in the NEPA process.
Agency costs
The second major cost of NEPA is its downstream effects on federal agencies. NEPA puts federal agencies on the hook for producing environmental reviews. But the legal standards for reviews are a moving target. Court rulings and regulatory accretion have continually expanded the standards for NEPA documents. Agencies are highly averse to the possibility of being sued in court and told by a judge their review was insufficient. Consequently, federal agencies have practiced “litigation proofing” on their documents — attempting to preempt lawsuits by going above and beyond the requirements set by case law.[ref 16] The result has been a massive expansion in page numbers and detail, from a handful of pages in 1970 to 1,703 pages in 2018. One former EPA general counsel speculated that as many as 90% of the details in environmental reviews are only included to ward off litigation.[ref 17]
Lost state capacity
Lost agency capacity isn’t just about extra pages and added delay. Staff time spent writing hundreds of extra pages to fend off lawsuits could certainly be better spent on other goals, like project planning or community engagement. But the biggest cost is in agency actions that get watered down or never happen because the prospect of completing a NEPA review is too daunting. Far from empowering civil servants, the NEPA process often multiplies the costs of other complex bureaucratic and political processes. When legal and bureaucratic costs rise too high, agency authorities are undermined, actions are watered down, and decision-making shifts to avoiding legal risks.
Lost agency authority
NEPA’s tax on building new things extends to agency authorities. Following widespread blackouts in 2003, Congress acted to improve grid reliability by amending the Federal Power Act and giving the Department of Energy authority to designate National Interest Electric Transmission Corridors (NIETCs). Once designated, NIETCs would create regions with streamlined processes for siting and permitting transmission lines. Moreover, the corridors would grant the Federal Energy Regulatory Commission (FERC) backstop authority — if states refused to approve lines on their own, then FERC could step in to issue building permits for projects in the national interest. The Department of Energy (DOE) acted by proposing two large corridors in 2007.
But these efforts were undermined by lawsuits from environmental groups.[ref 18] In 2011, a court ruled that the action of designating a corridor was itself a “federal action” under NEPA, meaning DOE would have to prepare a NEPA EIS just to designate the corridor.[ref 19] At the direction of Congress, NIETCs were supposed to cover vast regions of the United States, thereby providing developers and states with options for building new lines. But a NEPA review for a massive corridor with innumerable alternatives would require an infeasible level of detail.[ref 20] The practical impossibility of conducting such a NEPA review compounded the challenges created by a 2009 ruling, which held that FERC’s backstop authority was limited to cases where states “withheld” approval, meaning FERC could not backstop lines that states actively “denied.”[ref 21]
The double blow of burdensome NEPA reviews and watered-down backstop authority led to DOE dropping NIETC designation altogether. In other words, NEPA’s overburdensome process is a key reason FERC doesn’t have federal backstop authority for transmission lines.
In 2021, Congress finally fixed the backstop authority loophole, including state “denials” alongside “withheld approval” as cases where FERC maintained backstop authority, but left the NEPA problem unfixed. To get around the infeasibility of a NEPA review for a massive corridor, DOE has decided to instead designate a handful of significantly smaller corridors.[ref 22] Shrinking the corridors — and with them, FERC’s authority — is the only way DOE stands a chance of getting the corridors approved under NEPA.
In the past, we’ve described NEPA as a tax on building new things. But the NIETC case study highlights another form of NEPA’s costs: its deleterious effect on state capacity. Even an ambitious act of Congress granting sweeping authority to DOE and FERC was undercut by environmental review. NEPA creates an invisible graveyard of unbuilt projects, but it also quietly destroys agency authority and capacity.
Litigation undermines decision-making
Litigation’s primacy in the NEPA process damages the law’s original purpose: promoting sound, environmentally conscious decision-making. As the NEPA process has become oriented around avoiding litigation, the decision-making process that NEPA reviews are supposed to inform has become less about environmental tradeoffs and more about legal risk management. A survey of U.S. Forest Service personnel showed that “likelihood of litigation,” “degree of public controversy,” and “degree of political attention” factored into NEPA decisions more than environmental impacts.[ref 23] The need for legal risk management takes project decisions out of the hands of officials with the greatest knowledge of a project’s details. Moreover, the obsession with avoiding litigation makes NEPA documents longer and more technocratic — and thereby less readable and less informative to the public.[ref 24]
Worst of all, litigation aversion undermines the original point of NEPA, which was to use environmental reviews to inform government decisions. Instead, it is an open secret that agencies and developers enter the NEPA process with a small range of pre-chosen project designs.[ref 25] Pre-planning reduces the number of project alternatives that must be considered in the review, shortening review delays and reducing the litigation attack surface.[ref 26] This is a pragmatic strategy from agencies, but it creates an absurd irony: Litigation makes the details of environmental impacts — and the years and pages needed to document them — virtually irrelevant to the actual decision-making process.
Making planning worse
NEPA also warps the political incentives for infrastructure projects, worsening planning. Megaprojects in dense cities are naturally controversial and extraordinarily expensive. To secure funding and negotiate policy design with stakeholders and city residents, politicians need flexibility. But the NEPA process locks in a strict, regimented process that forecloses political negotiations and alternatives. Planners are disincentivized to change plans once the NEPA review has begun, even if communities want the changes; changing course outside the prepared alternatives could result in having to restart the review, adding years of additional delay. Additionally, the lag time of environmental review opens opportunities for project opponents to mobilize opposition, file obstructionist lawsuits, and wait out elected officials who champion the project.
The near-40-year saga of New York’s Tappan Zee Bridge replacement highlights how NEPA gums up the already challenging process of delivering major transportation projects. By 1980, New York State’s Department of Transportation (NYS DOT) had begun considering how to alleviate congestion on I-287 and renovate the existing Tappan Zee Bridge. But construction would not begin for 33 years, decades late and billions of dollars over budget. The NEPA review was by no means the only reason the project’s delivery was such a mess.[ref 27] But it did create lengthy delays and locked-in plans, creating time for the project’s opponents to mobilize. Tellingly, what finally got the project through was Governor Andrew Cuomo’s strategy to ignore NEPA’s goals, minimize public input, pre-select a preferred design, and aggressively head off opposition.
Environmental review began undermining plans in 1989, when NYS DOT reviewed a proposal to alleviate congestion by adding high occupancy vehicle (HOV) lanes to the existing Tappan Zee Bridge. The review took much longer than expected because of poor interagency coordination and because federal officials were unfamiliar with suburb-to-suburb HOV lanes.[ref 28] This is another challenge that NEPA reviews add: multiple agencies need to coordinate, and if one drops the ball, the entire process can be delayed. Perversely, NYS DOT’s attempts to save money and minimize the need for property-taking by narrowing the width of the HOV lanes meant the design proposed non-standard lane widths, frustrating federal officials and creating novel details to review.[ref 29] By 1995, after five years of considering alternatives and adding details to the project’s scope, the cost of the project had risen dramatically from an estimated $208 million in 1989 to $365 million.[ref 30] But here, again, the NEPA process undermined good decision-making. Because the NYS DOT commissioner had signed off on the plans back in 1989, canceling them with nothing but a long review to show for it would be politically embarrassing. And the EIS review, which was almost complete, locked in plans for an expensive HOV lane with no cheaper alternative.[ref 31]
The long review had done even more damage by giving project opponents time to mobilize. By 1995, environmentalists had set up a series of efforts to undermine public support for the project.[ref 32] The opposition launched a sustained public engagement campaign, which NYS DOT planners were unequipped to counter.[ref 33] Notably, public complaints came from transit advocates as well as highway opponents. Through political spin, much of the public came to believe that NYS DOT could accomplish much better projects at a lower cost.
The Tappan Zee saga highlights a key problem with public engagement in the NEPA process: intense, drawn-out scrutiny of a project’s costs inevitably sours public opinion. The HOV lanes alternative was ultimately scrapped in 1997, after 15 years of planning and 6 years of NEPA review. Instead of executing a simple upgrade for a reasonable price, the NEPA process — along with fragmented planning and adversarial politics — dragged down delivery until the project was both unpopular and financially inefficient. New York State ultimately lost millions on review costs and $200 million in federal funding allocated to the project.
In 2002, plans to renovate the I-287 corridor and replace the Tappan Zee Bridge entirely began to move through environmental review. But planners again fell into the trap of trying to earnestly plan during the NEPA review process.[ref 34] Political disagreements over project scope combined with NEPA requirements to perpetually increase the alternatives and details needed for review.[ref 35] Perversely, sensitivity to environmental considerations meant planners had to do more review to consider further alternatives.[ref 36] Stakeholders used the threat of future litigation to force consideration of their preferred alternatives.[ref 37] By the time the review was nearly complete, some of the earlier analysis was out of date and had to be redone.[ref 38] The decade-long, 10,000-page Draft Environmental Impact Statement was completed in 2011, prompting one MTA planner to call for reforming environmental review: “It takes so freakin’ long to do anything… There must be a better and easier way for a thirty-mile corridor.”[ref 39]
Governor Andrew Cuomo’s success in 2011 shows NEPA’s costs from a different angle. To get the bridge through NEPA, Cuomo embraced the opposite of the law’s intended goals. Instead of considering public input, Cuomo made decisions behind closed doors, cut out most stakeholders, and expedited a pre-chosen alternative through the NEPA process with minimal public engagement.[ref 40] Cuomo headed off opposition from frustrated transit proponents with political spin designed to suggest the bridge would still include support for transit, and hired a local news anchor to serve as his liaison. Cuomo also directed the state to find a contractor before the EIS was even completed — further mocking the idea that reviews are supposed to inform decision-makers.[ref 41] Cuomo’s strategy may have been brash, but it was also pragmatic. Cuomo was responding to the perverse incentives created by the NEPA process. NEPA was supposed to encourage decision-makers to consider environmental impacts and community input. But to get through the legal gauntlet that has grown around NEPA, decision-makers are now incentivized to pre-choose preferred alternatives and treat public input as a legal risk to be mitigated.[ref 42]
In many cases, the NEPA process unwittingly makes real community engagement harder. Environmental review mandates a long drawn-out process of review and engagement, airing out all the downsides of a project, inviting stakeholders to give inevitably conflicting input, and leaving plenty of time for opposition to mobilize. But at the same time, the NEPA process prevents changes in project design outside the scope of addressed alternatives. If projects want to incorporate stakeholder input, planners either have to pick an already reviewed alternative or start the entire review process over again.
An invisible graveyard of agency actions
Invisible graveyard effects are, by definition, hard to quantify. It’s difficult to visualize unbuilt infrastructure, or actions left untaken by agencies. But one tangible example is wildfire prevention: while inaction on infrastructure permitting leads to an absence of building, inaction on wildfire prevention actively causes bigger, more costly wildfires.
Decades of mismanagement have resulted in a backlog of American forests that need to be treated to prevent wildfires.[ref 43] Risks of wildfire are exacerbated by the effects of climate change.[ref 44] Research estimates that as many as 80 million acres of forest need treatment. In 2022, the U.S. Forest Service (USFS) committed to treating 50 million acres over 10 years. But increasing treatment from roughly 2 million to 5 million acres per year will require completing significantly more NEPA reviews, which delay forest treatment.[ref 45]
Prescribed burns are small, intentional fires used to burn underbrush and remove built-up fuels. Despite a scientific consensus that these fires benefit forests and protect against wildfire, the activity remains controversial and is subject to regular litigation from environmental groups.[ref 46] These misguided lawsuits are often brought by regional conservation groups intent on preserving a particular forest or endangered species from disruption.[ref 47] Litigation causes further court delays and increases review times by incentivizing the USFS to prefer EISs in order to prevent lawsuits.[ref 48] Litigation also eats up valuable staff time that would be better used planning and implementing forest treatment.[ref 49] Having to wait 7.2 years to begin a major prescribed burn treatment simply won’t get us to 50 million treated acres in 10 years.[ref 50]
The cost of untaken actions and unsolved problems rarely grabs headlines. But the consequences of inaction are just as real as the side effects of human activity: if we fail to hit our goals for preventing forest fires or transitioning to a clean energy economy, the human and environmental costs will be severe. As Senator James Lankford points out: “wildfires don’t wait on NEPA approval.”[ref 51] For that matter, neither will climate change.
So far, we’ve discussed the harms caused by NEPA. But it’s not clear from those costs alone that NEPA will harm the clean energy transition. Perhaps, as some observers allege, NEPA restricts fossil fuels more than clean energy, thereby benefiting the clean energy transition. We believe the evidence does not show that. Instead, there is persuasive evidence for the opposite conclusion: NEPA will harm clean energy more than fossil fuels, and will be a significant drag on the clean energy transition.
Clean vs. fossil energy
Clean energy and fossil fuels are significantly different industries, so it’s no surprise that NEPA affects the two differently. Fossil companies produce commodities that are sold in a global market and transportable via pipelines, ships, trucks, and trains. By contrast, clean energy companies produce electricity that must be moved through transmission lines and sold in heavily regulated markets. And even though renewable energy is clean, it often requires more land, meaning it triggers more stringent NEPA reviews. Most fossil production is done through small projects, usually no more than a few acres per well pad.[ref 52] Utility-scale solar and wind, however, require far more land disturbance, both per megawatt produced and per project.[ref 53] Wind and solar have a practical need to build large projects to reach economies of scale and justify the cost of grid interconnection. Given these differences, it would be surprising if the two industries were treated equally by the same statute.
As newcomers to a complex regulatory environment, clean energy companies have to start at the beginning of the “regulatory cost curve” — the process of learning how to reduce the costs of complying with regulation. Over time, industries and regulators learn how to work with each other; companies learn what regulators need, regulators learn about the technical specifics of industry, and both sides learn the limits of statute and regulation. Industries also manage to negotiate streamlining by lobbying policymakers. The fossil fuel lobby has achieved this: FERC sites fossil fuel pipelines, and the Energy Policy Act of 2005 created a categorical exclusion (CE) for oil and gas exploration and some types of production wells.[ref 54]
Besides the major statutory differences, the oil and gas industries have achieved significant soft streamlining through years of pressure and repetition. Take the treatment of fossil fuel exploration compared to geothermal: the 2005 Energy Policy Act categorically excludes oil and gas exploration from NEPA, whereas geothermal energy can require up to seven reviews, including for basic exploration.[ref 55] Geothermal leases are processed less frequently, both because the oil and gas industry persistently pressures policymakers to ensure the Bureau of Land Management (BLM) prioritizes oil and gas leases, and because BLM field staff have gone through the oil and gas leasing process many times while geothermal remains unfamiliar to them.[ref 56] This is part of the regulatory cost of being a new industry: regulators have to learn new technical specifics and iron out questions of legal responsibility. In practice, that means delays while agencies hire new specialists, and uncertainty while agencies determine how to comply with unclear legal requirements. The threat of litigation in the NEPA process exacerbates these difficulties.
Data shows NEPA disproportionately harms clean energy
Common-sense conceptual arguments suggest NEPA is a bigger problem for clean energy than fossil fuels. But what does the data tell us? Recent data from Michael Bennon and Devon Wilson at Stanford University shows that, between 2010 and 2018, far more EISs were completed for clean energy projects than for fossil fuel projects.[ref 57] 60% of energy EISs were for clean energy projects, while only 24% were for fossil fuels.[ref 58] That data is backed up by a count IFP conducted of current federal permitting trackers: 62% of ongoing energy EISs are for clean energy projects, while only 16% are for fossil fuel projects.[ref 59] Despite being better for the environment, clean energy faces higher scrutiny in the environmental review process.
This data also shows the regulatory cost curve in action. The reason there are so few fossil projects in the EIS data is that the overwhelming majority of fossil projects cleared through NEPA go through streamlined EAs and CEs. For ongoing fossil projects, the BLM register shows 5 EISs compared to 211 EAs and 76 CEs. For clean energy projects, the register shows 19 EISs and 9 EAs.[ref 60] Clean energy has not had decades to streamline the regulatory process and faces significant challenges given the size of utility-scale clean energy projects.
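To make those register counts concrete, here is a minimal back-of-envelope sketch (using only the figures cited above; the variable names, and the assumption of zero categorical exclusions for clean energy, are ours) of what share of each portfolio's ongoing reviews requires a full EIS:

```python
# Back-of-envelope from the BLM permitting-register counts cited above.
# Fossil: 5 EISs, 211 EAs, 76 CEs. Clean energy: 19 EISs, 9 EAs
# (no CEs are listed for clean energy, so we assume zero).

fossil = {"EIS": 5, "EA": 211, "CE": 76}
clean = {"EIS": 19, "EA": 9, "CE": 0}

def eis_share(counts: dict) -> float:
    """Fraction of ongoing reviews that are full EISs."""
    return counts["EIS"] / sum(counts.values())

print(f"Fossil reviews requiring an EIS:       {eis_share(fossil):.1%}")  # ~1.7%
print(f"Clean-energy reviews requiring an EIS: {eis_share(clean):.1%}")   # ~67.9%
```

On these counts, only about 2% of ongoing fossil reviews are full EISs, compared with roughly two-thirds of clean energy reviews.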
Clean energy projects also face higher rates of litigation and cancellation under NEPA compared to fossil fuels. 64% of solar EISs faced litigation, even greater than pipelines at 56%.[ref 61] Both solar and wind projects faced higher rates of litigation than fossil production projects and were canceled at a higher rate than either pipelines or fossil production.[ref 62]
How will NEPA affect the clean energy revolution?
Whether NEPA will harm clean energy more than fossil fuels is distinct from the question of NEPA’s total costs to clean energy. After all, it might be that even though NEPA harms clean energy more than fossil fuels, it harms both relatively little and, therefore, reform is not pressing. This worry is legitimate and deserves consideration. A recent dataset shows that only 5% of utility-scale wind and solar projects require an EIS under NEPA, suggesting that NEPA may have small total effects on the clean energy industry.[ref 63] However, this statistic misses how the dark matter effects of NEPA create distortions and strong selection effects via “jurisdiction shopping.” It also misses that the relatively low rates of NEPA review cannot continue as the clean energy transition ramps up.
The 5% statistic can be explained as a selection effect. Similar to “venue shopping” in the legal system, “jurisdiction shopping” is the practice of choosing a project location to find a friendly regulatory environment. The high costs of federal permitting, for which NEPA review is the largest burden, incentivize developers to find locations that avoid triggers for NEPA review. This leads to a strong selection effect, where the costs of NEPA cause developers to avoid developments that would trigger the law.
Take utility-scale solar developments as an example. These projects consistently avoid federal lands and the associated NEPA reviews, despite federally managed lands covering much of America’s best solar resources. For example, Nevada, Arizona, and New Mexico have some of the best solar resources in the country, provide friendly tax incentives, and proactively prevent local prohibition. Both New Mexico and Arizona scored highly for ease of grid interconnection.[ref 64] But all three states have relatively little solar development and solar developers seem to dodge federal lands as much as possible.[ref 65] While many factors contribute to selecting project locations, the evidence suggests that federal permitting — for which NEPA is the tip of the spear — plays a significant role.[ref 66]
Going forward, jurisdiction shopping to avoid NEPA will likely be more difficult for several reasons. First, as wind and solar move from marginal industries to core sources of generation, projects will have to be built across a variety of regions, including in jurisdictions with natural NEPA triggers. Moreover, developers and utilities may need to prioritize other concerns, like proximity to transmission infrastructure or proximity to consumers, further reducing flexibility to shop for favorable regulatory jurisdictions. Second, new technologies and new policies will create new NEPA triggers. This has already started as offshore wind and next-gen geothermal — which have a natural nexus for federal review — have become viable.[ref 67] Additionally, federal policy to support clean energy will create new triggers. For example, loans from the Loan Programs Office (LPO) — one of the IRA’s largest initiatives — create a nexus for federal permitting.[ref 68] Federal backstop authority for transmission lines, whether from future transmission reform or from the new NIETC corridors, likewise will trigger NEPA.
Finally, solar farms, wind farms, and transmission lines will get larger, making them more likely to overlap with federal lands and other trigger points for NEPA review.[ref 69] As projects get larger and requirements increase — for proximity to power markets, proximity to existing infrastructure, and so on — the number of viable locations decreases rapidly. On top of this, the combination of state, local, and federal regulations can make it impossible for projects to avoid every burdensome permit.[ref 70] This reflects the simple fact that large infrastructure projects are naturally controversial; it is unrealistic to expect developers to receive universal support.
Take the Cardinal-Hickory Creek transmission line — a proposed Wisconsin-Iowa line that would connect 161 clean energy projects (~25 GW) to the grid. The final few miles of the transmission line have to go through a wildlife conservation area and, despite developing a robust mitigation plan, the project has been sued and halted multiple times. Environmentalists claim the project could have avoided the conservation area with an alternative route. But in actuality, the project chose the selected route to avoid historic preservation issues; the route was the least controversial of all the financially viable options. Financial and regulatory constraints often make it totally infeasible to avoid federal permitting triggers, especially for large projects. The data backs this up. While only 3.5% of transmission line projects require an EIS, they account for 26% of total miles of new transmission.[ref 71]
Most importantly, the scale of new energy projects needed to meet our clean energy goals is inconsistent with the current pace of permitting. Between 2010 and 2018, the period that Bennon and Wilson’s research covers, clean energy projects already made up 60% of energy-related EISs, but the U.S. only added 112 GW of renewable energy (14 GW per year). Solar only increased from 0.2% of U.S. electricity production to 4.3%, while wind increased from 3.5% to 7.3%.[ref 72] This pales in comparison to our 2050 targets of ~3,273 GW of clean energy generation and storage, which will require adding roughly 94 GW of new capacity per year.[ref 73] Our permitting process, as currently constituted, will not allow us to hit these goals. Even with an administration friendly to clean energy, it took more than three years to permit 25 GW of clean energy.[ref 74] Clean energy is already being harmed, and, as the pace of deployment accelerates, these problems will become even more acute.
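As a rough sketch of the arithmetic behind that comparison (using only the figures cited in this paragraph), the historical pace implies deployment would need to accelerate roughly seven-fold:

```python
# Historical deployment pace vs. the pace implied by 2050 targets,
# using only the figures cited in this paragraph.

historical_gw_added = 112          # renewable capacity added, 2010-2018
historical_years = 2018 - 2010     # 8 years
required_gw_per_year = 94          # rough annual additions needed for ~3,273 GW by 2050

historical_rate = historical_gw_added / historical_years  # ~14 GW/year
speedup = required_gw_per_year / historical_rate

print(f"Historical pace: {historical_rate:.0f} GW/year")
print(f"Required pace:   {required_gw_per_year} GW/year")
print(f"Deployment would need to accelerate roughly {speedup:.1f}x")  # ~6.7x
```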
In a complex, adversarial system, it is difficult to thoroughly model NEPA’s costs. But here’s what we do know: When clean energy projects go through NEPA, they are disproportionately harmed. Large project footprint requirements make it unlikely that clean energy will escape this problem in the future. We know that solar projects have largely avoided developing on federal land, but the clean energy industry will struggle to “jurisdiction shop” with offshore wind, large transmission lines, geothermal energy, and increasingly large wind and solar farms. We know that federal triggers are likely to increase as the federal government gets more involved with energy policy. We know that the pace of permitting is nowhere near where it needs to be to hit our clean energy goals. And we know the core driver of NEPA delays is litigation.
Permitting reform
In the wake of the Inflation Reduction Act (IRA), Congress is considering reforming the federal permitting process. NEPA reforms are front and center in these discussions. Given the need to strike a bipartisan deal, policymakers will need to find technology-neutral solutions to reform NEPA across the board. Additionally, potential NEPA reforms will need to strike a pragmatic balance between preserving community input and streamlining NEPA to remove obstructionist litigation.
Technology-neutral reforms to NEPA would benefit clean energy more than fossil fuels for a few simple reasons. First, lowering NEPA’s tax on building new things will naturally benefit the developers that need to build more. The fossil fuel industry has already built much of the infrastructure it needs. But the clean energy transition is just getting started, and the U.S. has to build a staggering amount of new infrastructure.[ref 75] Second, the fossil fuel industry has come down the regulatory cost curve, while clean energy is still at the beginning. Reforming NEPA to add more certainty and speed up reviews will benefit the industries that have more regulatory uncertainty to overcome. Third, clean energy projects are naturally disadvantaged in the NEPA process due to their large land footprints and need for transmission lines. Clean energy projects are not likely to be as successful as the fossil fuel industry at gaining specific carve-outs and reducing NEPA’s costs.
Policymakers should reconceptualize NEPA litigation from a veto point to a check against lax reviews. Judicial reforms should aim to reduce timeline uncertainty, litigation uncertainty, and tail risk. Reform should go far enough to provide space for federal agencies to conduct shorter reviews and make decisions in the public interest. Two promising options are for Congress either to set a time limit on the use of judicial injunctions or to give agencies more discretion over which details to include in environmental review documents. Time-limiting injunctions would allow for a period of litigation but would set an end date for delays to project construction. To increase agency discretion, Congress could set a requirement that plaintiffs challenging NEPA documents must affirmatively prove that the error in question would have changed the agency’s final decision. This would create space for agencies and set a fair policy standard: NEPA decisions should not be upended for small mistakes, but agencies should be held accountable if they make major errors.
Reforms should also protect engagement with communities. Congress should proactively support the community engagement process by increasing the time available for public comment during the draft phase of the NEPA documents, thereby giving planners a chance to see input early in the process.
NEPA’s defenders have done their best to downplay the harms of environmental review. But the law’s costs will only increase as efforts to spur the clean energy transition accelerate. Permitting reform to fix NEPA litigation is one of the most practical and highest-leverage solutions to accelerate clean energy deployment in the United States.
Bolstering STEM Talent with the National Interest Waiver
Updates to the National Interest Waiver process
For decades, it has not been clear how immigrant experts with advanced STEM degrees could qualify for a National Interest Waiver (NIW) in the permanent residency process. Guidance from U.S. Citizenship and Immigration Services (USCIS) published in January 2022 has helped to resolve this uncertainty.[ref 1] For an eligible immigrant, the NIW can now serve as the most efficient way to secure petition approval, allowing a future application for green card status.
This article:
- Explains the new 2022 guidance concerning the NIW for immigrants holding advanced STEM degrees, including the policy imperative for the guidance;
- Lays out an updated approach to NIW petitions that immigration lawyers have found successful; and
- Assesses outcomes of the NIW guidance so far.
Technology competition and the National Interest Waiver
The NIW has its origin in the Immigration Act of 1990. The Act created a new Employment-Based Second Preference (EB-2) for green cards and the EB-2 sub-category of advanced-degree professionals, or individuals with exceptional ability, who might be granted a waiver of the normal requirement of a job offer and a Labor Department-approved Application for Permanent Employment Certification (PERM) when “in the national interest.”[ref 2] As appropriate for departments and agencies implementing such legislation, legacy INS (Immigration and Naturalization Service, part of the Department of Justice) and, since 2003, USCIS (which is part of the Department of Homeland Security) have identified the contours of the NIW provision by promulgating notice and comment regulations and by issuing precedent decisions through agency adjudications.[ref 3]
The 1990 Act was passed when the U.S. population was three-quarters of its current size, when the real GDP of the U.S. economy was half of what it is today, and before the “STEM” acronym became a standard reference at the National Science Foundation in the early 2000s. When the NIW concept was created, policymakers were not focused on clarifying the circumstances in which scientists, technologists, and engineers with advanced degrees were working in the national interest. The 2022 policy guidance resolves that blind spot by clarifying existing binding regulations and controlling precedent implementing the NIW statute. The new guidance creates increased consistency in adjudicating immigration petitions on behalf of international STEM experts who are now critical players in our economic and technology competition.
The 2022 Policy Manual update did not expand the scope of the NIW. Instead, it simply explained to agency adjudicators, STEM experts, and their employers how the NIW applies to advanced STEM degree holders in critical and emerging technology fields. These degree holders are among the risk-takers the United States has long welcomed as immigrants[ref 4] looking for the best and fastest returns on their own human capital investment.[ref 5] There is no workforce where these dynamics are more important than in the sciences and engineering. Since World War II, it has been clear that this workforce is fundamental to invention and technological adoption, and therefore critical to a nation’s security as well as to growth in opportunity and productivity.[ref 6]
The 2022 clarifying guidance may be viewed as an attempt to address this reality: American technological leadership relies on STEM experts making contributions to research and development (R&D) across government, academia, and industry. R&D benefits from collaboration across sectors and within clusters, where organizations can cross-pollinate one another by exchanging ideas, applications, and talent. The United States excels in this cross-pollination. On a per-capita basis, the United States still leads the world in the number of high-intensity science and technology clusters, those regions with high science and technology employment.[ref 7] Moreover, the U.S. STEM R&D ecosystem is constantly evolving — for instance, industry’s contribution to STEM R&D has grown enormously, from 44% in 1953[ref 8] to over 73% in 2024.[ref 9] America’s innovation ecosystem is healthiest when STEM experts have the flexibility to follow science wherever it takes them. For a foreign-born STEM expert, that flexibility can be afforded by an approved NIW EB-2 petition.
In some ways, the 2022 NIW clarifications in the USCIS Policy Manual reflect the national interests at play in the growing strategic competition between the United States and China. Many national security experts believe the United States can only win this competition by securing and maintaining a technological lead, and that this can only be done by effectively tapping into the global supply of STEM talent.[ref 10] China already produces twice as many STEM master’s graduates as the United States, and will soon produce twice as many STEM PhD graduates. China has recently surpassed the United States in total science and technology activity according to the Global Innovation Index.[ref 11] With a bigger population and more STEM experts, China could outcompete the U.S. STEM ecosystem.[ref 12]
The new guidance was intended to provide new transparency as to how USCIS officers adjudicate NIW requests related to U.S. science, technology, and engineering. As per Volume 6, Part F, Chapter 5, Part D, Section 2 of the Policy Manual — appropriately titled “Specific Evidentiary Considerations for Persons with Advanced Degrees in Science, Technology, Engineering, or Mathematics (STEM) Fields” — possession of an advanced degree related to a critical and emerging technology holds particular weight when reviewing each of the three prongs of an NIW analysis:
- whether the proposed endeavor has both substantial merit and national importance;
- whether the beneficiary is well positioned to advance the proposed endeavor; and
- whether, on balance, it would be beneficial to the United States to waive the job offer and labor certification requirements.
Specifically, the Policy Manual update clarifies evidentiary considerations that inform whether an NIW is appropriate for certain advanced STEM degree holders whose degrees relate to a “critical and emerging technology field,” as defined by the Executive Office of the President via either the National Science and Technology Council or the National Security Council.
The list of “Critical and Emerging Technologies” periodically updated by the Executive Office of the President[ref 14] includes 18 fields of particular importance (and more than 100 subfields), as of February 2024:
Fields of Particular Interest | |
---|---|
Advanced Computing | Advanced Engineering Materials |
Advanced Gas Turbine Engine Technologies | Advanced and Networked Sensing and Signature Management |
Advanced Manufacturing | Artificial Intelligence |
Biotechnologies | Clean Energy Generation and Storage |
Data Privacy, Data Security, and Cybersecurity Technologies | Directed Energy |
Highly Automated, Autonomous, and Uncrewed Systems (UxS), and Robotics | Human-Machine Interfaces |
Hypersonics | Integrated Communication and Networking Technologies |
Positioning, Navigation, and Timing (PNT) Technologies | Quantum Information and Enabling Technologies |
Semiconductors and Microelectronics | Space Technologies and Systems |
Furthermore, USCIS expanded Premium Processing in January 2023 to include NIW petitions, allowing petitioners to secure a decision within 45 business days for an extra fee, versus 10-12 months under normal adjudication times.
With this new NIW Policy Manual update, many highly-educated and accomplished foreign nationals in STEM fields now recognize they have a relatively hassle-free option to secure their futures in the United States. USCIS data released in January 2024 show a more than two-fold increase in the use of the NIW by employers that have typically sponsored EB-2 immigrants in STEM,[ref 15] suggesting that the NIW policy update has informed businesses, if not educational institutions, about green card strategies for highly valued employees. In fiscal year 2019, the last pre-pandemic year, before DHS began developing the new guidance in 2021, there were 59,100 EB-2 petitions filed for beneficiaries in STEM, with 9,260 as NIW. In fiscal year 2023, the first full year after DHS announced the new guidance in January 2022, there were 53,960 EB-2 petition filings for beneficiaries in STEM, with 20,950 as NIW. As such, the proportion of NIW filings among overall STEM EB-2 receipts shifted from around 16% to 39%.
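Those shares follow directly from the USCIS figures quoted above; here is a minimal sketch of the arithmetic (the variable names are ours):

```python
# NIW share of STEM EB-2 receipts, from the USCIS figures cited above.

receipts = {
    "FY2019": {"stem_eb2": 59_100, "niw": 9_260},
    "FY2023": {"stem_eb2": 53_960, "niw": 20_950},
}

for year, counts in receipts.items():
    share = counts["niw"] / counts["stem_eb2"]
    print(f"{year}: {share:.0%} of STEM EB-2 receipts were NIW petitions")
# FY2019: 16%, FY2023: 39%
```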
A new approach to NIW petitions
Prior to the January 2022 Policy Manual update, most experienced immigration lawyers perceived little distinction between a petition for NIW EB-2 and a petition for Employment-Based First Preference Extraordinary Ability (EB‑1A). Even though the EB-1A category is meant to recognize individuals who have risen to the very top of their field — which has never been the standard for NIWs — practitioners most often inferred that USCIS adjudicators treated both petition categories interchangeably. As a result, most lawyers prepared both NIW and EB-1A filings with similar levels of supporting detail, using extensive evidence, lengthy letters of support from experts, and complex explanations about the foreign national’s work and significant accomplishments to convey the prominence of the foreign national in their field. The new guidance now clarifies the distinctions between the two categories. A beneficiary does not need to have already risen to the very top of their field to qualify for an NIW. Instead, beneficiaries must be in a position to advance an endeavor of substantial merit and national importance.
The new guidance suggests a new approach to constructing NIW petitions. This article’s co-authors in private practice — Jonathan Grode and Joshua Rolf — set out to test an updated approach to preparing and submitting NIW petitions for STEM masters and PhD graduates.
The attorneys prepared a test group of cases for candidates all holding STEM PhDs. Through filing a batch of cases with similar academic backgrounds, the co-authors hoped to understand whether a STEM PhD in a critical and emerging technology field would carry enough evidentiary weight, with a brief description of the beneficiary’s research area, contributions, and employment, to result in a positive outcome in most cases. Ultimately, each of these test cases for STEM PhDs was approved promptly (within 60 days before premium processing became available, and within two weeks after it did), and without Requests for Evidence (RFEs).
Although this first group of cases prioritized NIW candidates with PhDs, this approach has been applied in the last year to master’s-level STEM experts working in areas vital to the national interest, especially areas where engineers play a critical role and the terminal degree is at the master’s level. Many individuals leading innovation and efforts to solve complex problems in critical and emerging technologies are engineers, who, when pursuing graduate study, opt for an engineering master’s degree instead of a graduate research degree that terminates with a doctorate, and then work in industry. Foreign nationals with impressive contributions who hold master’s degrees have succeeded in satisfying the three-prong[ref 16] NIW analysis due to their education and area of expertise.
Taken together, these results point to the enormous promise of NIW petitions for immigrants as a self-sponsored[ref 17] employment-based green card that is relatively speedy, inexpensive, and efficient. Another advantage of this increased efficiency is that the petition may be prepared in a matter of days or weeks and then adjudicated within weeks, securing an earlier priority date. The resulting time saved is significant, considering that an NIW could be an alternative to counting on the H-1B lottery[ref 18] for a bridge status to permanent residency. Moreover, PERM can take longer than 18 months to be completed, can present challenges when major industries have layoffs,[ref 19] and features delays in securing a priority date. Given the enormous green card backlog, this significantly affects application wait times for individual immigrants and their families.
NIW petitions present opportunities for employers as well. In a competitive job market, employers are increasingly looking for ways to attract and retain top-level STEM talent. They may look to the NIW as an attractive route for workers seeking a long-term solution to their immigration situation in the United States. In a STEM job market where a significant percentage of job applicants and employees are STEM-educated immigrants, immigration benefits can make the difference in attracting and retaining sought-after talent. The efficiency of the NIW process and the relative ease with which it can be used by applicants in future positions in their field make it an attractive part of an employer’s immigration program, especially if the employer builds a record of success for NIW workers who can reliably qualify for this benefit based on the merit, scope, and importance of their work with the employer, as well as the credentials that qualified them for the role in the first place.
Moreover, an employer offering NIW-qualifying work can often benefit from economies of scale: an employer can identify pockets of scientific work within its company or institution that are connected to the national interest. In this context, the employer is well-positioned to explain the work the employee (and their similarly-situated colleagues) has done within the industry to contribute to the advancement of its critical technological work, thereby enabling the employer to systematically prepare and apply for NIW petitions for people within that sector of the work. A comprehensive explanation of the work being performed and why it is in the national interest, as well as supporting documentation from company leaders and experts, can be used as supporting evidence in the petitions of all similarly situated applicants. While every petition must contain employee-specific information connecting the foreign national’s qualifications to the endeavor, the cross-applicability of information related to specific areas of technology can create efficiencies not typically available in other employment-based petitions. These efficiencies may include saving the significant monetary costs associated with the individualized PERM process for attorney’s fees, recruitment, and applicant review.
When considering a new approach to NIW petitions, practitioners should note that the NIW Policy Manual update specifically references other objective measures of when an endeavor is in the national interest, beyond the government’s “Critical and Emerging Technologies List,” such as when the endeavor is in an R&D-intensive industry, when it is a priority identified annually by the OMB and OSTP Directors in the President’s budget, or when an interested federal agency confirms its national importance.[ref 20]
Although the NIW Policy Manual update specifically reiterates the value of a PhD in the context of a national interest waiver, the guidance also recognizes that STEM master’s degree holders can qualify in certain situations. The authors are optimistic that NIW petitions for STEM master’s graduates will be predictably adjudicated favorably where the individual’s experience before petition filing is in a critical and emerging technology field, and strong evidence shows she is well-positioned to contribute to a specified area in such a field.
In this light, any such NIW program building should also avoid over-extending the classification to individuals with little beyond a STEM master’s degree or PhD on their resume, even if those degrees are in a critical and emerging field. Given the tendency of some beneficiaries, employers, and immigration practitioners to test the limits of new policy guidance, we may see more EB-2 denials in the NIW category, as a proportion of overall EB-2 denials, than in the past. For comparison, in 2019, USCIS denied 990 STEM EB-2 petitions, with 320 (32%) of those denials seeking NIW EB-2 classification. However, by fiscal year 2023, after the new guidance was issued and NIW classification petitions swelled, nearly 90% of STEM EB-2 denials were for those seeking NIW classification (2,120 out of 2,400).[ref 21] These results indicate the new approach can yield significant benefits for the right beneficiaries conducting the right kind of work. Nevertheless, the authors encourage caution. The NIW remains just one tool in the toolbox for advanced STEM degree talent. It is not appropriate for all cases, and should not be treated as a universal remedy.
Whether working in a STEM field or otherwise, the NIW is only open to professionals with graduate degrees (or individuals who can document exceptional ability, or a bachelor’s degree plus five years of progressive experience in the field) who are also poised to advance particularized endeavors that can be characterized as in the national interest. In addition, as with all immigration adjudications, there is some level of subjectivity and discretion involved. There is therefore always the possibility of baffling or inconsistent decisions, even when new Policy Manual guidance attempts to flesh out details and examples. Of course, misapplication of this approach can backfire if USCIS over-corrects through increased pushback via Requests for Evidence (RFEs) and denials.
Lessons for immigration law practitioners
The NIW Policy Manual update is an integral part of recent international STEM talent initiatives designed specifically to support U.S. economic and national security interests,[ref 22] which will continue to rely on emerging technologies. In light of the growing need for STEM talent, it provides STEM experts and their employers and counsel with clear, citable guidance. More employers and their counsel should offer the NIW approach to qualified advanced degree holders, while monitoring USCIS backlogs and the Visa Bulletin[ref 23] to continue making informed and strategic decisions for each case. More should also test the boundaries of the NIW approach. But, like all good experimentation, it must be done with careful consideration to ensure consistent and scalable results.
Will We Ever Get Fusion Power?
Today all nuclear power reactors are driven by fission reactions, which release energy by splitting atoms apart. But there’s another nuclear reaction that’s potentially even more promising as an energy source: nuclear fusion. Unlike fission, fusion releases energy by combining atoms together. Fusion is what powers the sun and other stars, as well as the incredibly destructive hydrogen bomb.
It’s not hard to understand the appeal of using nuclear fusion as a source of energy. Unlike coal or gas, which rely on exhaustible sources of fuel extracted from the earth, fusion fuel is effectively limitless. A fusion reactor could theoretically be powered entirely by deuterium (an isotope of hydrogen with an extra neutron), and there’s enough deuterium in seawater to power the entire world at current rates of consumption for 26 billion years.
Fusion has many of the advantages of nuclear fission with many fewer drawbacks. Like fission, fusion only requires tiny amounts of fuel: Fusion fuel has an energy density (the amount of energy per unit mass) a million times higher than fossil fuels, and four times higher than nuclear fission. Like fission, fusion can produce carbon-free “baseload” electricity without the intermittency issues of wind or solar. But the waste produced by fusion is far less radioactive than fission, and the sort of “runaway” reactions that can result in a core meltdown in a fission-based reactor can’t happen in fusion. Because of its potential to provide effectively unlimited, clean energy, countries around the world have spent billions of dollars in the pursuit of fusion power. Designs for fusion reactors appeared as early as 1939, and were patented as early as 1946. The U.S. government began funding fusion power research in 1951, and has continued ever since.
But despite decades of research, fusion power today remains out of reach. In the 1970s, physicists began to describe fusion as “a very reliable science…a reactor was always just 20 years away.” While significant progress has been made — modern fusion reactors burn far hotter, for far longer, and produce much more power than early attempts — a net power-producing reactor has still not been built, much less one that can produce power economically. Due to the difficulty of creating the extreme conditions fusion reactions require, and the need to simultaneously solve scientific and engineering problems, advances in fusion have been slow. Building a fusion reactor has been compared to the Apollo Program, if NASA had needed to work out Newton’s laws of motion while it was building rockets.
But there’s a good chance a working fusion reactor is near. Dozens of private companies are using decades of government-funded fusion research in their attempts to build practical fusion reactors, and it’s likely that at least one of them will be successful. If one is, the challenge for fusion will be whether it can compete on cost with other sources of low-carbon electricity.
Fusion basics
A fusion reaction is conceptually simple. When two atomic nuclei collide with sufficient force, they can fuse together to form a new, heavier nucleus, releasing particles (such as neutrons or neutrinos) and energy in the process. The sun, for instance, is chiefly powered by the fusion of hydrogen nuclei (protons) into helium in a series of reactions called the proton-proton chain.
Nuclear fusion is possible because when nucleons (protons and neutrons) get close enough, they are attracted to each other by the strong nuclear force. However, until they get extremely close, the positive charge from the protons in the nuclei will cause the nuclei to repel. This repulsion can be overcome if the nuclei are energetic enough (i.e., moving fast enough) when they collide, but even at high energies most nuclei will simply bounce off each other rather than fuse. To get self-sustaining nuclear fusion, you thus need nuclei that are at very high energy (i.e., at high temperature) and that get enough opportunities to collide for fusion reactions to occur (achieved through high density and long confinement time). The temperatures required to achieve fusion are so high (tens or hundreds of millions of degrees) that electrons are stripped from the atoms, forming a cloud of negatively charged electrons and positively charged nuclei called a plasma. If the density and confinement time of the plasma are above a critical value (known as the Lawson Criterion), and the temperature is high enough, the plasma will achieve ignition: that is, heat from fusion reactions will be sufficient to keep the plasma at the required temperature, and the reaction will be self-sustaining. The product of temperature, density, and confinement time is known as the triple product, and it’s a common measure of the performance of fusion reactors.
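In symbols, a commonly quoted form of this condition for deuterium-tritium fuel is the following; the exact threshold depends on assumptions about temperature and loss mechanisms, so treat the number as an order-of-magnitude guide rather than a precise constant:

```latex
% Triple product (Lawson-type) ignition condition for D-T fuel:
%   n = plasma density, T = temperature, \tau_E = energy confinement time
\[
  n \, T \, \tau_E \;\gtrsim\; 3 \times 10^{21} \ \mathrm{keV \cdot s \, / \, m^3}
\]
% At ignition, heating from the fusion products themselves balances losses,
% so the plasma stays hot without external power.
```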
Because more protons means more electrostatic repulsion, the bigger the nucleus the harder it is to get it to fuse and the higher the Lawson Criterion (though actual interactions are more complex than this). Most proposed fusion reactors thus use very light elements with few protons as fuel. One combination in particular — the fusion of deuterium with tritium (an isotope of hydrogen with two extra neutrons) — is substantially easier than any other reaction, and it’s this reaction that powers most proposed fusion reactors.
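For reference, the deuterium-tritium reaction and its energy split are well established:

```latex
% Deuterium-tritium fusion, the reaction behind most proposed power reactors:
\[
  {}^{2}\mathrm{H} \;+\; {}^{3}\mathrm{H} \;\rightarrow\;
  {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) \;+\; n\ (14.1\ \mathrm{MeV})
\]
% About 80% of the energy leaves as a fast neutron, which is why neutron
% damage to reactor walls matters so much for D-T designs (discussed later).
```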
The challenge of fusion is that it’s very difficult to keep the fuel at a high temperature and packed densely together for a long period of time. Plasma that touches the walls of a physical container would be cooled below the temperatures needed for fusion, and would melt the container in the process, as the temperature needed for fusion is above the melting point of any known material. How do you keep the plasma contained long enough to achieve fusion?
There are, broadly, three possible strategies. The first is to use gravity: Pile enough nuclear fuel together, and it will be heavy enough to confine itself. Gravitational force will keep the plasma condensed into a ball, allowing nuclear fusion to occur. Gravity is what keeps the plasma contained in the sun. But this requires enormous amounts of mass to work: Gravity is far too weak a force to confine the plasma in anything smaller than a star. It’s not something that can work on earth.
The second option is to apply force (such as from an explosion), to physically squeeze the fuel close enough together to allow fusion to occur. Any particles trying to escape will be blocked by other particles being squeezed inward. This is known as inertial confinement, and it’s the method of confinement used by hydrogen bombs, as well as by laser-based fusion methods.
The last method is to use a magnetic field. Because the individual particles in the plasma (positive nuclei and negative electrons) carry an electric charge, their motion can be influenced by the presence of a magnetic field. A properly shaped field can create a “bottle” that contains the plasma for long enough for fusion to occur. This method is known as magnetic confinement fusion, and is the strategy that most attempts to build a fusion reactor have used.
The spark of fusion power
Almost as soon as fusion was discovered, people began to think of ways to use it as a power source. The first fusion reaction was produced in a laboratory in 1933 by using a particle accelerator to fire deuterium nuclei at each other, though it fused such a tiny number of them (one out of every 100 million accelerated) that experimenter Ernest Rutherford declared that “anyone who expects a source of power from the transformation of these atoms is talking moonshine.” But just six years later, Oxford physicist Peter Thonemann created a design for a fusion reactor, and in 1947 began performing fusion experiments by creating superheated plasmas contained within a magnetic field. Around this same time another British physicist, G.P. Thomson, had similar ideas for a fusion reactor, and in 1947 two doctoral students, Stan Cousins and Alan Ware, built their own apparatus to study fusion in superheated plasmas based on Thomson’s work. This early British fusion work may have been passed to the Soviets through the work of spies Klaus Fuchs and Bruno Pontecorvo, and by the early 1950s, the Soviets had their own fusion power program.
In the U.S., interest in fusion was spurred by an announcement from the Argentinian dictator Juan Peron that his country had successfully achieved nuclear fusion in 1951. This was quickly shown to be false, and within a matter of months the physicist behind the work, Ronald Richter, had been jailed for misleading the president. But it triggered interest from U.S. physicists in potential approaches to building such a reactor. Lyman Spitzer, a physicist working on the hydrogen bomb, was intrigued, and within a matter of months had secured approval from the Atomic Energy Commission to pursue fusion reactor work at Princeton as part of the hydrogen bomb project, in what became known as Project Sherwood. Two more fusion reactor programs were added to Sherwood the following year, at Los Alamos (led by James Tuck) and Lawrence Livermore Lab (led by Richard Post).
Early fusion attempts all used magnetic fields to confine a superhot plasma, but they did so in different ways. In Britain, efforts were centered on using an “electromagnetic pinch.” By running an electric current through a plasma, a magnetic field would be created perpendicular to it, which would compress the plasma inward. With a strong enough current, the field would confine the plasma tightly enough to allow for nuclear fusion. Both Thomson and Thonemann’s fusion reactor concepts were pinch machines.
Pinches were pursued in the U.S. as well: James Tuck was aware of the British work, and built his own pinch machine. Skeptical as to whether such a machine would really be able to achieve fusion, Tuck called his machine the Perhapsatron. But other researchers tried different concepts. Spitzer’s idea was to wind magnets around the outside of a cylinder, creating a magnetic field to confine the plasma within. To prevent particles from leaking out either end, Spitzer originally planned to connect the ends together in a donut-shaped torus, but in such a configuration the magnetic coils would be closer together on the inside radius than on the outside, varying the field strength and causing particles to drift. To correct this, Spitzer instead wrapped his tube into a figure eight, which would cancel out the effect of the drift. Because it aimed to reproduce the type of reaction that occurred in the sun, Spitzer called his machine the stellarator.
Richard Post’s concept at Livermore likewise started as a cylinder with magnets wrapped around it to contain the plasma. But instead of wrapping the cylinder into a torus or figure eight, it was left straight: To prevent plasma from leaking out, a stronger magnetic field was created at each end, which would, it was hoped, reflect any particles that tried to escape. This became known as the magnetic mirror machine.
In the Soviet Union, work on fusion was also being pursued by hydrogen bomb researchers, notably Igor Tamm and Andrei Sakharov. The Soviet concept combined Spitzer’s stellarator and the British pinch machines. Plasma would be created in a torus, which would be confined both by external magnetic fields (as in the stellarator) and a self-generated magnetic field created by running a current through the plasma (as in the pinch machines). The Soviets called this a toroidal magnetic chamber, which they abbreviated “tokamak.”
Fusion proves difficult
Early on, it was hoped that fusion power might be a relatively easy problem to crack. After all, it had taken only four years after the discovery of nuclear fission to produce the world’s first nuclear reactor, and less than three years of development to produce the fusion-driven hydrogen bomb. Spitzer’s research plan called for a small “Model A” stellarator designed to heat a plasma to one million degrees and determine whether plasma could be created and confined. If successful, this would be followed by a larger “Model B” stellarator, and then an even larger “Model C” that would reach temperatures of 100 million degrees and in essence be a prototype power reactor. Within four years it would be known whether controlled fusion was possible. If it was, the Model C stellarator would be running within a decade.
But fusion proved to be a far more formidable problem than anticipated. The field of plasma physics was so new that the word “plasma” wasn’t even in common use in the physical sciences (physicists often received requests for papers from medical journals that assumed they studied blood plasma), and the behavior of plasmas was poorly understood. Spitzer and others’ initial work assumed that the plasma could be treated as a collection of independent particles, but when experiments began on the Model A stellarator it became clear that this theory was incorrect. The Model A could produce plasma, but it only reached about half its anticipated temperature, and the plasma dissipated far more quickly than predicted. The larger Model B managed to reach the one million degree mark, but the problem of rapid dissipation remained. Theory predicted that particles would flow along magnetic field lines smoothly, but instead the plasma was a chaotic, churning mass, with “shimmies and wiggles and turbulences like the flow of water past a ship” according to Spitzer. Plasmas in the Perhapsatron likewise developed instabilities rapidly.
Researchers pressed forward, in part because of concerns that the Soviets might be racing ahead with their own fusion reactors. Larger test machines were built, and new theories of magnetohydrodynamics (the dynamics of electrically conductive fluids) were developed to predict and control the behavior of the plasma. Funding for fusion research in the U.S. rose from $1.1 million in 1953 to nearly $30 million in 1958. But progress remained slow. Impurities within the reactor vessels interfered with the reactions and proved difficult to deal with. Plasmas continued to have serious instabilities and higher levels of particle drift that existing theories failed to predict, preventing reactors from achieving the densities and confinement times required. Cases of seemingly impressive progress often turned out to be an illusion, in part because even measuring what was going on inside a reactor was difficult. The British announced they had achieved fusion in their ZETA pinch machine in 1957, only to retract the announcement a few months later. What were thought to be fusion temperatures of the plasma were simply a few rogue particles that had managed to reach high temperatures. Similar effects had provided temporary optimism in American pinch machines. When the U.S., Britain, and the Soviet Union all declassified their fusion research in 1958, it became clear that no one was “racing ahead” — no one had managed to overcome the problems of plasma instability and achieve fusion within a reactor.
In response to these failures, researchers shifted their strategy. They had rushed ahead of the physics in the hopes of building a successful reactor through sheer engineering and empiricism, and now turned back towards building up a scaffolding of physical theory to try to understand plasma behavior. But success still remained elusive. Researchers continued to discover new types of plasma instabilities, and struggled to predict the behavior of the plasma and prevent it from slipping out of its magnetic confinement. Ominously, the hotter the plasma got, the faster it seemed to escape, a bad sign for a reactor that needed to achieve temperatures of hundreds of millions of degrees. Theories that did successfully predict plasma instabilities, such as finite resistivity, were no help when it came to correcting them. “People were calculating these wonderful machines, and they turned them on and they didn’t work worth a damn,” noted plasma physicist Harold Furth. By the mid-1960s, it was still not clear whether a plasma could be confined long enough to produce useful amounts of power.
But in the second half of the decade, a breakthrough occurred. Most researchers had initially considered the Soviet tokamak, which required both enormous magnets and a large electrical current through the plasma, a complex and cumbersome device, unlikely to make for a successful power reactor. For the first decade of their existence, no country other than the Soviet Union pursued tokamak designs.
But the Soviets continued to work on their tokamaks, and by the late 1960s were achieving impressive results. In 1968, they announced that in their T-3 and TM-3 tokamaks, they had achieved temperatures of 10 million degrees and confinement times of up to 20 thousandths of a second. While this was far below the plasma conditions needed for a power reactor, it represented substantial progress: The troubled Model C stellarator had only managed to achieve temperatures of one million degrees and confinement times of one-thousandth of a second. The Soviets also announced plans for even larger tokamaks that could confine the plasma for up to tenths of a second, long enough to demonstrate that controlled fusion was possible.
Outside the Soviet Union, researchers assumed this was simply another case of stray high-temperature particles confounding measurements. Soviet apparatuses for determining reactor conditions were infamously poor. To confirm their results, the Soviets invited British scientists, who had recently developed a laser-based high-temperature thermometer that would allow measuring the interior of a reactor with unprecedented accuracy. The British obliged, and in 1969 the results were confirmed. After a decade of limited progress, the future of fusion suddenly looked bright. Not only was a reactor concept delivering extremely promising results, but the new British measurement technology would enable further progress by giving far more accurate information on reactor temperatures.
The rush for tokamaks
In response to the tokamak breakthrough, countries around the world quickly began to build their own. In the U.S., Spitzer’s Model C stellarator at Princeton was torn apart and converted into a tokamak in 1970, and a second Princeton tokamak, the ATC, was built in 1972. MIT, Oak Ridge, and General Atomics all built their own tokamaks as well. Around the world, tokamaks were built in Britain, France, Germany, Italy, and Japan. By 1972 there were 17 tokamaks under construction outside the Soviet Union.
The breakthrough of the tokamak wasn’t the only thing driving increased fusion efforts in the 1970s. Thanks to the environmental movement, people were increasingly aware of the damage inflicted by pollution from fossil-fuel plants, and skeptical that fission-based nuclear power was a reasonable alternative. Energy availability, already a looming issue prior to 1973, suddenly became a crisis following OPEC’s oil embargo. Fusion’s promise of clean, abundant energy looked increasingly attractive. In 1967, U.S. annual fusion funding was just under $24 million. Ten years later, it had ballooned to $316 million.
The burst of enthusiasm for tokamaks was coupled with another pivot away from theory and back towards engineering-based, “build it and see what you learn”-style research. No one really knew why tokamaks were able to achieve such impressive results. The Soviets didn’t progress by building out detailed theory, but by simply following what seemed to work without understanding why. Rather than a detailed model of the underlying behavior of the plasma, progress on fusion began to take place by the application of “scaling laws,” empirical relationships between the size and shape of a tokamak and various measures of performance. Larger tokamaks performed better: the larger the tokamak, the larger the cloud of plasma, and the longer it would take a particle within that cloud to diffuse outside of containment. Double the radius of the tokamak, and confinement time might increase by a factor of four. With so many tokamaks of different configurations under construction, the contours of these scaling laws could be explored in depth: how they varied with shape, or magnetic field strength, or any number of other variables.
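As a toy illustration of how such a scaling law gets used (keyed to the factor-of-four example above; real published scalings fit many more variables and different exponents):

```python
# Toy empirical scaling law: confinement time grows roughly with the square of
# the machine's radius, so doubling the radius quadruples confinement time.
# Real scaling laws also fold in magnetic field strength, plasma current,
# density, heating power, and more.
def scaled_confinement_time(tau_ref_s: float, r_ref_m: float, r_new_m: float,
                            exponent: float = 2.0) -> float:
    """Extrapolate confinement time from a reference machine to a new size."""
    return tau_ref_s * (r_new_m / r_ref_m) ** exponent

# Example: a machine confining plasma for 20 ms at a 1 m radius, scaled to 2 m.
tau_new = scaled_confinement_time(tau_ref_s=0.020, r_ref_m=1.0, r_new_m=2.0)
print(f"{tau_new:.3f} s")  # 0.080 s
```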
In the U.S., this pivot towards practicality came with the appointment of Bob Hirsch as director of the Atomic Energy Commission’s fusion branch in 1972. Hirsch stated that “I came into the program with an attitude that I wanted to build a fusion reactor…I didn’t want to do plasma physics for plasma physics’ sake.” Hirsch believed that U.S. researchers had been too timid, too busy exploring theory instead of trying to make progress towards actual power reactors, and that progress would ultimately require building new, bigger machines before theory had been completely worked out. He felt efforts should be focused on the most promising concepts, with research aimed at the issues of actual power production. For instance, Hirsch favored running experiments with actual deuterium and tritium fuel instead of simply hydrogen; while hydrogen was far easier and less expensive to work with, it would not duplicate power reactor conditions. Hirsch canceled several research projects that were not making progress, and made plans for an extremely large tokamak at Princeton, the Tokamak Fusion Test Reactor (TFTR), designed to achieve scientific “breakeven”: getting more energy out of the plasma than was required to maintain it.1 Hirsch also funded a larger mirror machine experiment designed to achieve breakeven, in the hopes that either mirror machines or tokamaks would make the necessary progress and achieve breakeven within 10 years.
And the U.S. wasn’t the only country racing towards breakeven. Japan had responded to the Soviets’ tokamak announcement by building its own tokamak, the JFT-2, and from there moved on to an even larger tokamak, the JT-60. The JT-60 was even larger than the TFTR and designed to achieve “equivalent breakeven” (not true breakeven, because it was not designed to use radioactive tritium fuel). And the Europeans had banded together to build the enormous Joint European Torus (JET) tokamak, also designed to achieve breakeven. By contrast, the Soviets would effectively leave the fusion race. The follow-up to the T-3, the larger T-10, was less capable than similar machines elsewhere in the world, and their planned machine to achieve breakeven, the T-20, would never be built.
The TFTR produced its first plasma in 1982, followed closely by JET six months later. The JT-60 came online a few years later in 1985. And beyond these large machines, numerous smaller reactors were being built. By the early 1980s there were nearly 300 other research fusion devices around the world, including 70 tokamaks, and worldwide annual fusion funding was nearly $1.3 billion.
These research efforts were yielding results. In 1982, West German researchers stumbled upon conditions that created a plasma with superior density and confinement properties, which they dubbed high-confinement mode, or H-mode. In 1983, MIT researchers achieved sufficient density and confinement time (though not temperature) to achieve breakeven on their small Alcator-C tokamak. In 1986, researchers on the TFTR accidentally doubled their confinement time after an especially thorough process of reactor vessel cleaning, and soon other tokamaks could also produce these “supershots.” The TFTR was ultimately able to achieve the density, temperature, and confinement time needed for fusion, though not simultaneously.
But this progress was hard-won, and came frustratingly slowly. None of the TFTR, JET, or the JT-60 managed to achieve breakeven in the 1980s. The behavior of plasma was still largely confusing, as was the success of certain reactor configurations. Advances had been made in pinning down reactor parameters that seemed to produce good results, but there had been little progress on why they did so. At a 1984 fusion conference, physicist Harold Furth noted that despite the advances, tokamaks were still poorly understood, as was the physics behind H-mode. As a case in point, the JT-60 was unable to enter H-mode for opaque reasons, and ultimately needed to be rebuilt into the JT-60U to do so.
Despite decades of research, in the 1980s a working, practical fusion power reactor still seemed very far away. In 1985, Bob Hirsch, the man responsible for pushing forward U.S. tokamak research, stated that the tokamak was likely an impractical reactor design, and that years of research efforts had not addressed the “fatal flaws” of its complex geometry.
Governments had been spending hundreds of millions of dollars on fusion research with what seemed like little to show for it. And while a new source of energy had been an urgent priority in the 1970s, by the 1980s that was much less true. Between 1981 and 1986, oil prices fell almost 70% in real terms. Fusion budgets had risen precipitously in the 1970s, but now they began to be cut.
In the U.S., Congress passed the Magnetic Fusion Engineering Act in 1980, which called for a demonstration fusion reactor to be built by the year 2000, but the funds authorized by the act would never be allocated. The incoming Reagan administration felt that the government should be in the business of funding basic research, not designing and building new reactor technology that was better handled by the private sector. Fusion research funding in the U.S. peaked in 1984 at $468.5 million, but slowly and steadily declined afterward.
One after another, American fusion programs were scaled back or canceled. First to go was a large mirror machine at Livermore, the Mirror Fusion Test Facility. It was mothballed the day after construction was completed, and then canceled completely in 1986 without running a single experiment. Other mirror fusion research efforts were similarly gutted. The follow-up to the TFTR, the Compact Ignition Tokamak or CIT, which aimed to achieve plasma ignition, was canceled in 1990. An alternative to the CIT, the smaller and cheaper Burning Plasma Experiment or BPX, was canceled in 1991. Researchers responded by proposing an even smaller-scale reactor, the Tokamak Physics Experiment or TPX, but when the fusion budget was slashed by $100 million in 1996 this too was scrapped. The 1996 cuts were so severe that the TFTR itself was shut down after 15 years of operation.
In spite of these setbacks, progress in fusion continued to be made. In 1991 researchers at General Atomics discovered an even higher plasma confinement mode than H-mode, which they dubbed “very high mode” or VH mode. In 1992 researchers at JET achieved an extrapolated scientific breakeven, with a “Q” (the ratio of power out to power in) of 1.14. This was quickly followed by Japan’s JT-60U achieving an extrapolated Q of 1.2. (In both cases, this was a projected Q value based on what would have been achieved had deuterium-tritium fuel been used.) In 1994, the TFTR set a new record for fusion power output by producing 10 megawatts of power for a brief period of time, though only at a Q of 0.3. Theoretical understanding of plasma behavior also advanced, and plasma instabilities and turbulence became increasingly understood and predictable.
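Expressed as a formula, the fusion gain Q cited throughout this section is simply the ratio defined above:

```latex
% Fusion gain: ratio of fusion power produced to heating power supplied.
\[
  Q \;=\; \frac{P_{\text{fusion}}}{P_{\text{heating}}}, \qquad
  Q = 1 \ \text{(scientific breakeven)}, \qquad
  Q \to \infty \ \text{(ignition: no external heating needed)}
\]
% "Extrapolated" Q values rescale a result to what deuterium-tritium fuel
% would have produced, since many experiments run on plain deuterium.
```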
But without new fusion facilities to push the boundaries of what could be achieved within a reactor, such progress could only continue for so long. Specifically, after the TFTR, JET, and JT-60, what was needed was a machine large and powerful enough to create a burning plasma: one in which more than 50% of the plasma’s heating comes from the fusion reactions themselves.
In 1976, after the TFTR began construction, U.S. fusion researchers laid out several different possible fusion research programs, the machines and experiments they would need to run, and the dates they could achieve a demonstration power reactor. TFTR would be followed by an ignition test reactor that would be large enough to achieve burning plasma (as well as ignition). This would then be followed by an experimental power reactor, and then a demonstration commercial reactor. The more funding fusion received, the sooner these milestones could be hit. The most aggressive timetable, Logic V, had a demo reactor built in 1990, while the least aggressive Logic II pushed it back to 2005. Also included was a plan that simply continued then-current levels of fusion funding, advancing various basic research programs but with no specific plan to build the necessary test reactors. This plan is often referred to as “fusion never.”
Since its peak in the 1980s, American fusion funding has consistently remained below the “fusion never” level, and none of the follow-up reactors to the TFTR needed to advance the state of the art have been built. As of today, no magnetic confinement fusion reactor has achieved either ignition or a burning plasma.
NIF and ITER
Amidst the wreckage of U.S. fusion programs, progress managed to continue along two fronts. The first was ITER (pronounced “eater”), the International Thermonuclear Experimental Reactor. ITER is a fusion test reactor being built in the south of France as an international collaboration between more than two dozen countries. When completed, it will be the largest magnetic confinement fusion reactor in the world, large enough to achieve a burning plasma. ITER is designed to achieve a Q of 10, which would be the highest Q ever achieved in a fusion reactor (the current record is 1.5), though it is not expected to achieve ignition.
ITER began in 1979 as INTOR, the International Tokamak Reactor, an international collaboration between Japan, the U.S., the Soviet Union, and the European energy organization Euratom. The INTOR program resulted in several studies and reactor designs, but no actual plans to build a reactor. Further progress came in 1985, after Mikhail Gorbachev’s ascent to leader of the Soviet Union. Gorbachev was convinced by Evgeny Velikhov that a fusion collaboration could help defuse Cold War tensions, and after a series of discussions INTOR was reconstituted as ITER in 1986.
The initial design for the ITER reactor was completed in 1997, and was far larger than any fusion reactor that had yet been attempted. ITER would output 1,500 megawatts of power, 100 times what JET could output, and also achieve plasma ignition. The project would cost $10 billion, and be completed in 2008.
But, unsurprisingly for an international megaproject, ITER stumbled. Japan asked for a three-year delay before construction began, and the U.S. pulled out of the project in 1998. It seemed as if ITER were on the verge of failing. But the reactor was redesigned to be smaller and cheaper, costing an estimated $5 billion but with a lower power output (500 megawatts) and without plans to achieve ignition. The U.S. rejoined the project in 2003, followed by China and South Korea.
Construction of the redesigned ITER kicked off in 2006, and the reactor was expected to come online in 2016. But since then the project has struggled. The completion date has been repeatedly pushed back, with current projections suggesting it will not be completed until 2035. “Official” costs have risen to $22 billion, with actual costs likely far higher.
The second avenue of progress since the 1990s has been on inertial confinement fusion. As discussed earlier, inertial confinement fusion can be achieved by using an explosion or other energy source to greatly compress a lump of nuclear fuel. Inertial confinement is what powers hydrogen bombs, but using it as a power source can be traced back to an early concept for a nuclear power plant proposed by Edward Teller in 1955. Teller proposed filling a huge underground cavern with steam, and then detonating a hydrogen bomb within it to drive the steam through a turbine.
The physicist tasked with investigating Teller’s concept, John Nuckolls, was intrigued by the idea, but it seemed impractical. But what if instead of an underground cavern, you used a much smaller cavity just a few feet wide, and detonated a tiny H-bomb within it? Nuckolls eventually calculated that with the proper driver to trigger the reaction, a microscopic droplet of deuterium-tritium fuel could be compressed to 100 times the density of lead and reach temperatures of tens of millions of degrees: enough to trigger nuclear fusion.
This seemed to Nuckolls to be far more workable, but it required a driver to trigger the reaction: H-bombs used fission-based atom bombs to trigger nuclear fusion, but this wouldn’t be feasible for the tiny explosions Nuckolls envisioned. At the time no such driver existed, but one would appear just a few years later, in the form of the laser.
Most sources of light contain a mix of wavelengths, which limits how tightly they can be focused: The different wavelengths will focus in different places and the light will get spread out. But lasers generate light of a uniform wavelength, which allows it to be focused tightly and deliver a large amount of power to an extremely small area. By focusing a series of lasers on a droplet of nuclear fuel, a thin outer layer would explode outward, driving the rest of the fuel inward and achieving nuclear fusion. In 1961, Nuckolls presented his idea for a “thermonuclear engine” driven by lasers triggering a series of tiny fusion explosions.
From there, laser fusion progressed on a somewhat parallel path to magnetic confinement fusion. The government funded laser fusion research alongside magnetic confinement fusion, and laser-triggered fusion was first successfully achieved by the private company KMS Fusion in 1974. In 1978 a series of classified nuclear tests suggested that it was possible to achieve ignition with laser fusion. And in 1980, while Princeton was building its enormous TFTR tokamak, Lawrence Livermore Lab was building the most powerful laser in the world, Nova, in the hopes of achieving ignition in laser fusion (ignition would not be achieved).
Part of the interest in laser fusion stemmed from its potential military applications. Because the process triggered a microscopic nuclear explosion, it could duplicate some of the conditions inside an H-bomb explosion, making it useful for nuclear weapons design. This gave laser fusion another constituency that helped keep funding flowing, but it also meant that laser fusion work was often classified, making it hard to share progress or collaborate on research. Laser fusion was often considered several years behind magnetic fusion.
Laser fusion research had many of the same setbacks as magnetic fusion: Plasma instabilities and other complications made progress slower than initially expected, and funding that had risen sharply in the 1970s began to be cut in the 1980s when usable power reactors still seemed very far away. But laser fusion’s fortunes turned in the 1990s. The U.S. ceased nuclear testing in 1992, and in 1994 Congress created the Stockpile Stewardship Program to study and manage aging nuclear weapons and to keep nuclear weapons designers employed should they be needed in the future. As part of this program, Livermore proposed an enormous laser fusion project, the National Ignition Facility, that would achieve ignition in laser fusion and validate computer models of nuclear weapons. In 1995 the budget for laser fusion was $177 million; by 1999 it had nearly tripled to $508 million.
The National Ignition Facility (NIF) was originally planned to be completed in 2002 at a cost of $2.1 billion. But like ITER, initial projections proved to be optimistic. The NIF wasn’t completed until 2009 at a cost of $4 billion. And like so many fusion projects before it, achieving its goals proved elusive: In 2011 Department of Energy (DOE) Under Secretary for Science Steven Koonin stated that “ignition is proving more elusive than hoped” and that “some science discovery may be required.” The NIF didn’t produce a burning plasma until 2021, and ignition until 2022.
And even though the NIF has made some advances in fusion power research, it’s ultimately primarily a facility for nuclear weapons research. Energy-based research makes up only a small fraction of its budget, and this work is often opposed by the DOE (for many years, Congress added laser energy research funding back into the laser fusion budget over the objections of the DOE). Despite the occasional breathless press release, we should not expect it to significantly advance the state of fusion power.
The rise of fusion startups
This remained the state of fusion power into the 2010s. Small amounts of fusion energy research were done at the NIF and other similar facilities, and researchers awaited the completion of ITER to study burning plasma. Advances in fusion performance metrics, which had increased steadily during the second half of the 20th century, had tapered off by the early 2000s.
But over the past several years, we’ve seen a huge rise in fusion efforts in the private sector. As of 2023 there are 43 companies developing fusion power, most of which were founded in the last few years. Fusion companies have raised a collective $6.2 billion in funding, $4.1 billion of which has been in the last several years.
These investments have been highly skewed, with a handful of companies receiving the lion’s share of the funding. The 800-pound gorilla in the fusion industry is Commonwealth Fusion Systems, which has raised more than $2 billion in funding since it was founded in 2018. Other major players are TAE Technologies ($1.2 billion in funding), Helion Energy ($577 million in funding with another $1.7 billion of performance-based commitments), and Zap Energy ($200 million).
These companies are pursuing a variety of strategies for fusion, ranging from tried-and-true tokamaks (Commonwealth), to pinch machines (Zap), to laser-based inertial confinement, to more exotic approaches. Helion’s fusion reactor, for instance, launches two rings of plasma at each other at more than one million miles an hour; when they collide, the resulting fusion reaction perturbs a surrounding magnetic field to generate electricity directly (most other fusion reactors generate electricity by using the high-energy particles released from the reactor to heat water into steam to drive a turbine). The Fusion Industry Association’s 2023 report documents ten major strategies companies are pursuing for fusion, using several different fuel sources (though most companies are using the “conventional” methods of magnetic confinement and deuterium-tritium fuel).
Many of these strategies are being pursued with an eye towards building a reactor cheap enough to be practical. One drawback of deuterium-tritium reactions, for instance, is that much of their energy output is in the form of high-energy neutrons, which gradually degrade the interior of the reactor and make it radioactive. Some startups are pursuing alternative reactions that produce fewer high-energy neutrons. Likewise, several startups are using what’s known as a “field reversed configuration,” or FRC, which creates a compact plasma torus using magnetic fields rather than a torus-shaped reactor vessel. This theoretically will allow for a smaller, simpler reactor that requires fewer high-powered magnets. And Zap’s pinch machine uses a relatively simple reactor shape that doesn’t require high-powered magnets: Their strategy is based on solving the “physics risk” of keeping the plasma stable (a perennial problem for pinch machines) via a phenomenon known as shear-flow stabilization.
After so many years of research efforts funded almost exclusively by governments, why does fusion suddenly look so appealing to the private sector? In part, steady advances in science and technology have chipped away at the difficulties of building a reactor. Moore’s Law and advancing microprocessor technology have made it possible to model plasma within reactors with increasing accuracy using gyrokinetics, and have enabled increasingly precise control over reactor conditions. Helion’s reactor, for instance, wouldn’t be possible without advanced microprocessors that can trigger the magnets rapidly and precisely. Better lasers have improved the prospects for laser-based inertial confinement fusion. Advances in high-temperature superconductors have made it possible to build extremely powerful magnets that are far less bulky and expensive than was previously possible. Commonwealth Fusion’s approach, for instance, is based entirely on using novel magnet technology to build a much smaller and cheaper tokamak: Their enormous $1.8 billion funding round in 2021 came after they successfully demonstrated one of these magnets, built with high-temperature superconducting tape.
But there are also non-technical, social reasons for the sudden blossoming of fusion startups. In particular, Commonwealth Fusion’s huge funding round in 2021 created a bandwagon effect: It showed it was possible to raise significant amounts of money for fusion, and investors began to enter the space wanting to back their own Commonwealth.
None of these startups has yet achieved breakeven or a burning plasma, but experts consider many of their approaches promising, and many in the field believe that at least some of them will successfully build a power reactor.
How to think about fusion progress
After more than 70 years of research and development, we still don’t have a working fusion power reactor, much less a practical one. How should we evaluate the progress that’s occurred?
It’s common to hear from fusion boosters that despite the lack of a working power reactor, the rate of progress in fusion has nevertheless been impressive. You will often see graphs comparing improvement in the fusion triple product against Moore’s Law, suggesting progress in fusion has in some sense been as fast as or faster than progress in semiconductors. You can create similar graphs for other measures of fusion performance, such as reactor power output.
I think these sorts of comparisons are misleading. The proper comparison for fusion isn’t to a working technology that is steadily being improved, but to a very early-stage technology still being developed. When we make this comparison, fusion’s progress looks much less impressive.
Consider, for instance, the incandescent light bulb. An important measure of light bulb performance is how long it lasts before burning out: Edison’s experiment in 1879 with a carbon thread filament lasted just 14.5 hours (though this was a vast improvement over previous filaments, which often failed in an hour or so). By 1880 some of Edison’s bulbs were lasting nearly 1,400 hours. In other words, performance increased in a single year by a factor of 100 to 1,000.
Similarly, consider radio. An important performance metric is how far a signal can be transmitted and still be received. Early in the development of radio, this distance was extremely short. When Marconi first began his radio experiments in his father’s attic in 1894, he could only achieve transmission distances of about 30 feet, and even this he couldn’t do reliably. But in 1901, Marconi successfully transmitted a signal across the Atlantic between Cornwall and Newfoundland, a distance of about 2,100 miles. In just seven years Marconi increased radio transmission distance by a factor of 370,000.
Likewise, the world’s first artificial nuclear reactor, the Chicago Pile-1, produced just half a watt of power in 1942. In 1951, the Experimental Breeder Reactor-I, the first nuclear reactor to generate electricity, generated 100,000 watts of power. And in 1954, the first submarine nuclear reactors generated 10,000,000 watts of power, a factor-of-20-million improvement in 12 years.
In the very early stages of a technology, when it barely works at all, a Moore’s Law rate of improvement — doubling in performance every two years — is in fact extremely poor. If a machine that barely works gets twice as good, it still only barely works. A jet engine that fails catastrophically after a few seconds of operation (as the early jet engines did) is still completely useless if you double the time it can run before it fails. Technology with near-zero performance must often get thousands or millions of times better, and many, perhaps most, technologies exhibit these rates of improvement early in their development. If they didn’t, they wouldn’t get developed: Few backers are willing to pour money into a technology for decades in the hopes that eventually it will get good enough to be actually useful.
From this perspective, fusion is notable not for its rapid pace of progress, but for the fact that it’s been continuously funded for decades despite a comparatively slow pace of development. If initial fusion efforts had been privately funded, development would have ceased as soon as it was clear that Spitzer’s simplified model of an easy-to-confine plasma was incorrect. And in fact, Spitzer had assumed that development would cease if the plasma wasn’t easy to confine: The fact that development continued anyway is somewhat surprising.
Fusion’s comparatively slow pace of development can be blamed on the fact that the conditions needed to achieve fusion are monumentally difficult to create. For most technology, the phenomena at work are comparatively simple to manipulate, to the point where many important inventions were created by lone, self-funded individuals. Even something like a nuclear fission reaction is comparatively simple to create, to the point where reactors can occur naturally on earth if there’s a sufficient quantity of fuel in the right conditions. But creating the conditions for nuclear fusion on earth — temperatures in the millions or billions of degrees — is far harder, and progress is necessarily attenuated.
Fusion progress looks better if we compare it to other technologies not from when development on them began, but from when the phenomena they leverage were first discovered and understood. Fusion requires not just building a reactor, but exploring and understanding an entirely new phenomenon: the behavior of high-temperature plasmas. Building the scientific understanding of a phenomenon inevitably takes time: Marconi needed just a few years to build a working radio, but his efforts were built on decades of scientific exploration of the behavior of electrical phenomena. Ditto with Edison and the light bulb. And the behavior of high-temperature plasmas, which undergo highly turbulent flow, is an especially knotty scientific problem. Fusion shouldn’t be thought of as a technology that has made rapid progress (it hasn’t), but as a combination of both technological development and scientific investigation of a previously unexplored and particularly difficult-to-predict state of matter, where progress on one is often needed to make progress on the other.
We see this dynamic at work in the history of fusion, where researchers are constantly pulled back and forth between engineering-driven development (building machines to see how they work) and science-driven development (developing theory to the point where it can guide machine development), between trying to build a power reactor and figuring out if it’s possible to build a power reactor at all. Efforts constantly flipped back and forth between these two strategies (going from engineering-driven in the 50s, to science-driven in the 60s, to engineering-driven in the 70s), and researchers were constantly second-guessing whether they were focusing on the right one.
The bull and bear cases for fusion
Despite decades of progress, it’s still not clear, even to experts within the field, whether a practical and cost-competitive fusion reactor is possible. A strong case can be made either way.
The bull case for fusion is that for the last several decades there’s been very little serious effort at fusion power, and now that serious effort is being devoted to the problem, a working power reactor appears very close. The science of plasmas and our ability to model, understand, and predict them has enormously improved, as have the supporting technologies (such as superconducting magnets) needed to make a practical reactor. ITER, for instance, was designed and planned in the early 2000s and will cost tens of billions. Commonwealth’s SPARC reactor will be able to approach its performance in many ways (it aims for a Q > 10 and a triple product around 80% of ITER’s), but at a fraction of the projected cost. With so many well-funded companies entering the space, we’re on the path towards a virtuous cycle of improvement: More fusion companies means it becomes worthwhile for others to build more robust fusion supply chains, and develop supporting technology like mass-produced reactor materials, cheap high-capacity magnets, working tritium breeding blankets, and so on. This allows for even more advances and better reactor performance, which in turn attracts further entrants. With so many reactors coming online, it will be possible to do even more fusion research (historically, machine availability has been a major research bottleneck), allowing even further progress. Many industries require initial government support before they can be economically self-sustaining, and though fusion has required an especially long period of government support, it’s on the cusp of becoming commercially viable and self-sustaining. At least one of the many fusion approaches, the argument goes, will prove highly scalable, making it possible to build reasonably sized reactors at low cost, and fusion will come to supply a substantial fraction of overall energy demand.
The bear case for fusion is that, outside of unusual approaches like Helion’s (which may not pan out), fusion is just another in a long line of energy technologies that boil water to drive a turbine. And the conditions needed to achieve fusion (plasma at hundreds of millions or even billions of degrees) will inevitably make fusion fundamentally more expensive than other electricity-generating technologies. Even if we could produce a power-producing reactor, fusion will never be anywhere near as cheap as simpler technology like the combined-cycle gas turbine, much less future technologies like next-generation solar panels or advanced geothermal. By the time a reactor is ready, if it ever is, no one will even want it.
Perhaps the strongest case for fusion is that fusion isn’t alone in this uncertainty about its future. The next generation of low-carbon electricity generation will inevitably make use of technology that doesn’t yet exist, be that even cheaper, more efficient solar panels, better batteries, improved fission reactors, or advanced geothermal. All of these technologies are somewhat speculative, and may not pan out — solar and battery prices may plateau, advanced geothermal may prove unworkable, etc. In the face of this risk, fusion is a reasonable bet to add to the mix.
How to Build an AI Data Center
This piece is the first in a new series called Compute in America: Building the Next Generation of AI Infrastructure at Home. In this series, we examine the challenges of accelerating the American AI data center buildout. Future pieces will be shared here.
We often think of software as having an entirely digital existence, a world of “bits” that’s entirely separate from the world of “atoms.” We can download endless amounts of data onto our phones without them getting the least bit heavier; we can watch hundreds of movies without once touching a physical disk; we can collect hundreds of books without owning a single scrap of paper.
But digital infrastructure ultimately requires physical infrastructure. All that software requires some sort of computer to run it. The more computing that is needed, the more physical infrastructure is required. We saw that a few weeks ago when we looked at the enormous $20 billion facilities required to manufacture modern semiconductors. And we also see it with state-of-the-art AI software. Creating a cutting-edge Large Language Model requires a vast amount of computation, both to train the models and to run them once they’re complete. Training OpenAI’s GPT-4 required an estimated 21 billion petaFLOP (a petaFLOP is 10^15 floating-point operations).[ref 1] For comparison, an iPhone 12 is capable of roughly 11 trillion floating-point operations per second (0.01 petaFLOP per second), which means that if you were somehow able to train GPT-4 on an iPhone 12, it would take you more than 60,000 years to finish. On a 100 MHz Pentium processor from 1997, capable of a mere 9.2 million floating-point operations per second, training would theoretically take more than 66 billion years. And GPT-4 wasn’t an outlier, but part of a long trend of AI models getting ever larger and requiring more computation to create.
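The comparison is just division; a quick back-of-envelope check using the figures quoted above:

```python
# Back-of-envelope training-time comparison, using the estimates cited above.
TRAIN_FLOP = 21e9 * 1e15          # ~21 billion petaFLOP to train GPT-4
IPHONE_12_FLOPS = 11e12           # ~11 trillion floating-point ops per second
PENTIUM_100_FLOPS = 9.2e6         # ~9.2 million floating-point ops per second
SECONDS_PER_YEAR = 3600 * 24 * 365

for name, flops in [("iPhone 12", IPHONE_12_FLOPS),
                    ("100 MHz Pentium", PENTIUM_100_FLOPS)]:
    years = TRAIN_FLOP / flops / SECONDS_PER_YEAR
    print(f"{name}: ~{years:,.0f} years")
# iPhone 12: ~60,000 years; Pentium: on the order of 70 billion years,
# consistent with the "more than 66 billion years" figure above.
```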
But, of course, GPT-4 wasn’t trained on an iPhone. It was trained in a data center: tens of thousands of computers and their supporting infrastructure in a specially designed building. As companies race to create their own AI models, they are building enormous compute capacity to train and run them. Amazon plans to spend $150 billion on data centers over the next 15 years in anticipation of increased demand from AI. Meta plans to spend $37 billion on infrastructure and data centers, largely AI-related, in 2024 alone. CoreWeave, a startup that provides cloud and computing services for AI companies, has raised billions of dollars in funding to build out its infrastructure and is building 28 data centers in 2024. The so-called “hyperscalers,” technology companies like Meta, Amazon, and Google with massive computing needs, are estimated to have enough data centers planned or under development to double their existing capacity. In cities around the country, data center construction is skyrocketing.
But even as demand for capacity skyrockets, building more data centers is likely to become increasingly difficult. In particular, operating a data center requires large amounts of electricity, and available power is fast becoming the binding constraint on data center construction. Nine of the top ten utilities in the U.S. have named data centers as their main source of customer growth, and a survey of data center professionals ranked availability and price of power as the top two factors driving data center site selection. With record levels of data centers in the pipeline to be built, the problem is only likely to get worse.
The downstream effects of losing the race to lead AI are worth considering. If the rapid progress seen over the last few years continues, advanced AI systems could massively accelerate scientific and technological progress and economic growth. Powerful AI systems could also be highly important to national security, enabling new kinds of offensive and defensive technologies. Losing the bleeding edge on AI progress would seriously weaken our national security capabilities, and our ability to shape the future more broadly. And another transformative technology largely invented and developed in America would be lost to foreign competitors.
AI relies on the availability of firm power. American leadership in innovating new sources of clean, firm power can and should be leveraged to ensure the AI data center buildout of the future happens here.
How data centers work
A data center is a fundamentally simple structure: a space that contains computers or other IT equipment. It can range from a small closet with a server in it, to a few rooms in an office building, to a large, stand-alone structure built specifically to house computers.
Large-scale computing equipment has always required designing a dedicated space to accommodate it. When IBM came out with its System/360 in 1964, it provided a 200-page physical planning manual that gave information on space and power needs, operating temperature ranges, air filtration recommendations, and everything else needed for the computers to operate properly. But historically, even large computing operations could be done within a building mostly devoted to other uses. Even today, most “data centers” are just rooms or floors in multi-use buildings. According to the EIA, there were data centers in 97,000 buildings around the country as of 2012, including offices, schools, labs, and warehouses. These data centers, typically about 2,000 square feet in size, occupy just 2% of the building they’re in, on average.
What we think of as modern data centers, specially-built massive buildings that house tens of thousands of computers, are largely an artifact of the post-internet era. Google’s first “data center” was 30 servers in a 28 square-foot cage, in a space shared by AltaVista, eBay, and Inktomi. Today, Google operates millions of servers in 37 purpose-built data centers around the world, some of them nearly one million square feet in size. These, along with thousands of other data centers around the world, are what power internet services like web apps, streaming video, cloud storage, and AI tools.
A large, modern data center contains tens of thousands of individual computers, specially designed to be stacked vertically in large racks. Racks hold several dozen computers at a time, along with other equipment needed to operate them, like network switches, power supplies, and backup batteries. Inside the data center are corridors containing dozens or hundreds of racks.
The amount of computer equipment they house means that data centers consume large amounts of power. A single computer isn’t particularly power hungry: A rack-mounted server might use a few hundred watts, or about 1/5th the power of a hair dryer. But tens of thousands of them together create substantial demand. Today, large data centers can require 100 megawatts (100 million watts) of power or more. That’s roughly the power required by 75,000 homes, or needed to melt 150 tons of steel in an electric arc furnace.[ref 2] Power demand is so central, in fact, that data centers are typically measured by how much power they consume rather than by square feet (this CBRE report estimates that there are 3,077.8 megawatts of data center capacity under construction in the US, though exact numbers are unknown). Their power demand means that data centers require large transformers, high-capacity electrical equipment like switchgears, and in some cases even a new substation to connect them to transmission lines.
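Backing the per-home figure out of that comparison is a useful sanity check (the typical-consumption range in the comment is an approximation; actual household usage varies widely by region):

```python
# What does "100 MW is roughly 75,000 homes" imply about the average home?
DATA_CENTER_W = 100e6   # 100 megawatts
HOMES = 75_000
HOURS_PER_YEAR = 8_760

watts_per_home = DATA_CENTER_W / HOMES                  # ~1,333 W average draw
kwh_per_year = watts_per_home * HOURS_PER_YEAR / 1000   # ~11,700 kWh per year
print(f"~{watts_per_home:,.0f} W average draw, ~{kwh_per_year:,.0f} kWh/year")
# That is in the same ballpark as typical U.S. household electricity use
# (roughly 10,000-11,000 kWh per year).
```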
All that power eventually gets turned into heat inside the data center, which means a data center requires similarly robust equipment to move that heat out as fast as the power comes in. Racks sit on raised floors, and are kept cool by large volumes of air pulled up from below and through the equipment. Racks are typically arranged to have alternating “hot aisles” (where hot air is exhausted) and “cold aisles” (where cool air is pulled in). The hot exhaust is removed by the data center’s cooling systems, chilled, and then recirculated. These cooling systems might be complex, with multiple “cooling loops” of heat exchange fluids, though nearly all data centers use air to cool the IT equipment itself.
Unsurprisingly, these cooling systems are large. The minimum amount of air needed to remove a kilowatt of heat is roughly 120 cubic feet per minute; for 100 megawatts, that means 12 million cubic feet per minute. Data center chillers can have thousands of times the capacity of a typical home air conditioner. Even relatively small data centers will have enormous air ducts, high-capacity chilling equipment, and large cooling towers. This video shows a data center with a one million gallon “cold battery” water tank: Water is chilled during the night, when power is cheaper, and used to reduce the burden on the cooling systems during the day.
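The airflow figure can be checked directly; a minimal sketch using the 120 CFM-per-kilowatt rule of thumb quoted above (a rule of thumb, not a design specification):

```python
# Airflow needed to carry away heat, using the ~120 CFM-per-kilowatt rule of thumb above.
cfm_per_kw = 120          # cubic feet per minute of air per kilowatt of heat
it_load_kw = 100_000      # a 100 MW facility

required_cfm = cfm_per_kw * it_load_kw
print(f"Required airflow: {required_cfm / 1e6:.0f} million CFM")   # 12 million CFM
```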
Because of the amount of power they consume, substantial effort has gone into making data centers more energy efficient. A common data center performance metric is power usage effectiveness (PUE), the ratio of the total power consumed by a data center to the amount of power consumed by its IT equipment. The lower the ratio, the less power is used on things other than running computers, and the more efficient the data center.
Data center PUE has steadily fallen over time. In 2007, the average PUE for large data centers was around 2.5: For every watt used to power a computer, 1.5 watts were used on cooling systems, backup power, or other equipment. Today, the average PUE has fallen to a little over 1.5. And the hyperscalers do even better: Meta’s average data center PUE is just 1.09, and Google’s is 1.1. These improvements have come from things like more efficient components (such as uninterruptible power supply systems with lower conversion losses), better data center architecture (changing to a hot-aisle, cold-aisle arrangement), and operating the data center at a higher temperature so that less cooling is required.
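A short sketch of how PUE translates into overhead power, using the averages cited above (the 100 MW IT load is an illustrative assumption):

```python
def overhead_mw(it_load_mw: float, pue: float) -> float:
    """Power spent on cooling, power conversion, and other non-IT loads."""
    return it_load_mw * pue - it_load_mw

# PUE values cited above; an IT load of 100 MW is illustrative.
for label, pue in [("2007 average", 2.5), ("today's average", 1.5), ("hyperscaler", 1.1)]:
    print(f"{label} (PUE {pue}): {overhead_mw(100, pue):.0f} MW of overhead per 100 MW of IT load")
```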
There have also been efficiency improvements after the power reaches the computers. Computers must convert AC power from the grid into DC power; on older computers, this conversion was only 60-70% efficient, but modern components can achieve conversion efficiencies of up to 95%. Older computers would also use almost the same amount of power whether they were doing useful work or not. But modern computers are more capable of ramping their power usage down when they’re idle, reducing electricity consumption. And the energy efficiency of computation itself has improved over time due to Moore’s Law: Smaller and smaller transistors mean less electricity is required to run them, which means less power is required for a given amount of computation. From 1970 to 2020, the energy efficiency of computation has doubled roughly once every 1.5 years.
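Compounded over decades, that doubling rate is enormous; a quick extrapolation, taking the 1.5-year doubling period cited above at face value:

```python
# Cumulative effect of computational energy efficiency doubling every ~1.5 years.
years = 2020 - 1970
doubling_period_years = 1.5

doublings = years / doubling_period_years
improvement = 2 ** doublings
print(f"~{doublings:.0f} doublings -> roughly {improvement:.0e}x more computation per unit of energy")
```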
Because of these steady increases in data center efficiency, while individual data centers have grown larger and more power-intensive, power consumption in data centers overall has been surprisingly flat. In the U.S., data center energy consumption doubled between 2000 and 2007 but was then flat for the next 10 years, even as worldwide internet traffic increased by more than a factor of 20. Between 2015 and 2022, worldwide data center energy consumption rose an estimated 20 to 70%, but data center workloads rose by 340%, and internet traffic increased by 600%.
Beyond power consumption, reliability is another critical factor in data center design. A data center may serve millions of customers, and service interruptions can easily cost tens of thousands of dollars per minute. Data centers are therefore designed to minimize the risk of downtime. Data center reliability is graded on a tiered system, ranging from Tier I to Tier IV, with higher tiers more reliable than lower tiers.[ref 3]
Most large data centers in the U.S. fall somewhere between Tier III and Tier IV. They have backup diesel generators, redundant components to prevent single points of failure, multiple independent paths for power and cooling, and so on. A Tier IV data center will theoretically achieve 99.995% uptime, though in practice human error tends to reduce this level of reliability.
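Uptime percentages are easier to interpret as downtime budgets; a minimal conversion (only the 99.995% figure comes from the text above, the other values are for comparison):

```python
# Convert an uptime percentage into the downtime it allows per year.
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(uptime_pct: float) -> float:
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

# 99.995% is the theoretical Tier IV figure; the others are for comparison only.
for uptime in (99.9, 99.99, 99.995):
    print(f"{uptime}% uptime -> {downtime_minutes_per_year(uptime):.0f} minutes of downtime per year")
```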
Data center trends
Over time, the trend has been for data centers to grow larger and consume greater amounts of power. In the early 2000s, a single rack in a data center might use one kilowatt of power. Today, typical racks in an enterprise data center use 10 kilowatts or less, and in a hyperscaler data center, that might reach 20 kilowatts or more. Similarly, 10 years ago, nearly all data centers used less than 10 megawatts, but a large data center today will use 100 megawatts or more. And companies are building large campuses with multiple individual data centers, pushing total power demand into the gigawatt range. Amazon’s much-reported purchase of a nuclear-powered data center was one such campus; it included an existing 48 MW data center and enough room for expansion to reach 960 MW in total capacity. As hyperscalers occupy a larger fraction of total data center capacity, large data centers and campuses will only become more common.
Today data centers are still a small fraction of overall electricity demand. The IEA estimates that worldwide data centers consume 1 to 1.3% of electricity as of 2022 (with another 0.4% of electricity devoted to crypto mining). But this is expected to grow over time. SemiAnalysis predicts that data center electricity consumption could triple by 2030, reaching 3 to 4.5% of global electricity consumption. And because data center construction tends to be highly concentrated, data centers are already some of the largest consumers of electricity in some markets. In Ireland, for example, data centers use almost 18% of electricity, which could increase to 30% by 2028. In Virginia, the largest market for data centers in the world, 24% of the power sold by Virginia Power goes to data centers.
Power availability has already become a key bottleneck to building new data centers. Some jurisdictions, including ones where data centers have historically been a major business, are curtailing construction. Singapore is one of the largest data center hubs in the world, but paused construction of them between 2019 and 2022, and instituted strict efficiency requirements after the pause was lifted. In Ireland, a moratorium has been placed on new data centers in the Dublin area until 2028. Northern Virginia is the largest data center market in the world, but one county recently rejected a data center application for the first time in the county’s history due to power availability concerns.
In the U.S., the problem is made worse by difficulties in building new electrical infrastructure. Utilities are building historically small amounts of new transmission, and long interconnection queues are delaying new sources of generation. Data centers can be especially challenging from a utility perspective because their demand is more or less constant, providing fewer opportunities for load shifting and creating more demand for firm power. One data center company owner claimed that the U.S. was nearly “out of power” available for new data centers, primarily due to insufficient transmission capacity. Meta CEO Mark Zuckerberg has made similar claims, noting that “we would probably build out bigger clusters than we currently can if we could get the energy to do it.” One energy consultant pithily summed up the problem as “data centers are on a one to two-year build cycle, but energy availability is three years to none.”
Part of the electrical infrastructure problem is a timing mismatch. Utility companies see major electrical infrastructure as a long-term investment to be built in response to sustained demand growth. Any new piece of electrical infrastructure will likely be used far longer than a data center might be around, and utilities can be reluctant to build new infrastructure purely to accommodate them. In some cases, long-term agreements between data centers and utilities have been required to get new infrastructure built. An Ohio power company recently filed a proposal that would require data centers to buy 90% of the electricity they request from the utility, regardless of how much they use. Duke Energy has introduced similar minimum take requirements, obligating data centers to purchase a set amount of power regardless of their actual usage.
Data center builders are responding to limited power availability by exploring alternative locations and energy sources. Historically, data centers were built near major sources of demand (such as large metro areas) or major internet infrastructure to reduce latency.[ref 4] But lack of power and rising NIMBYism in these jurisdictions may shift their construction to smaller cities, where power is more easily available. Builders are also experimenting with alternatives to utility power, such as local solar and wind generation connected to microgrids, natural gas-powered fuel cells, and small modular reactors.
The influence of AI
What impact will AI have on data center construction? Some have projected that AI models will become so large, and training them so computationally intensive, that within a few years data centers might be using 20% of all electricity. Skeptics point out that, historically, growth in data center demand has been almost entirely offset by gains in data center efficiency. They point to things like Nvidia’s new, more efficient AI supercomputer (the GB200 NVL72), more computationally efficient AI models, and potential future ultra-efficient chip technologies like photonics or superconducting chips as evidence that this trend will continue.
We can divide the likely impact of AI on data centers into two separate questions: the impact on individual data centers (and the regions where they’re built), and the impact on data centers’ aggregate power consumption.
For individual data centers, AI will likely continue driving them to be larger and more power-intensive. As we noted earlier, training and running AI models requires an enormous amount of computation, and the specialized computers designed for AI consume enormous amounts of power. While a rack in a typical data center will consume on the order of 5 to 10 kilowatts of power, a rack in an Nvidia superPOD data center containing 32 H100s (special graphics processing units, or GPUs, designed for AI workloads that Nvidia is selling by the millions) can consume more than 40 kilowatts. And while Nvidia’s new GB200 NVL72 can train and run AI models more efficiently, it consumes much more power in an absolute sense, using an astonishing 120 kilowatts per rack. Future AI-specific chips may have even higher power consumption. Even if future chips are more computationally efficient (and they likely will be), they will still consume much larger amounts of power.
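One way to see what these rack densities imply is to ask how many racks of each type fit inside a fixed facility power budget; the following sketch assumes a 100 MW facility and a PUE of 1.2, both illustrative:

```python
# Roughly how many racks of each type a fixed power budget can support.
facility_mw = 100      # total facility power budget (illustrative)
pue = 1.2              # assumed overhead factor (illustrative)

it_budget_kw = facility_mw * 1_000 / pue   # power left over for IT equipment

# Per-rack power figures drawn from the ranges described above.
rack_types_kw = {
    "typical enterprise rack": 7.5,
    "H100 superPOD rack": 40,
    "GB200 NVL72 rack": 120,
}
for name, kw in rack_types_kw.items():
    print(f"{name} ({kw} kW): ~{it_budget_kw / kw:,.0f} racks in a {facility_mw} MW facility")
```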
Not only is this amount of power far more than what most existing data centers were designed to deliver, but the amount of exhaust heat begins to bump against the boundaries of what traditional, air-based cooling systems can effectively remove. Conventional air cooling is likely limited to around 20 to 30 kilowatt racks, perhaps 50 kilowatts if rear heat exchangers are used. One data center design guide notes that AI demands might require such large amounts of airflow that equipment will need to be spaced out, with such large airflow corridors that IT equipment occupies just 10% of the floor space of the data center. For its H100 superPOD, Nvidia suggests either using fewer computers per rack, or spacing out the racks to spread out power demand and cooling requirements.
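Applying the earlier ~120 CFM-per-kilowatt rule of thumb to these rack densities shows why air cooling runs out of headroom; the 30 kW ceiling used below is the rough limit cited above:

```python
# Airflow implied by high-density racks, versus a rough ceiling for conventional air cooling.
cfm_per_kw = 120                 # rule of thumb used earlier in this section
air_cooling_ceiling_kw = 30      # approximate upper limit for conventional air cooling

for rack_kw in (10, 30, 40, 120):
    airflow_cfm = rack_kw * cfm_per_kw
    verdict = "within air cooling limits" if rack_kw <= air_cooling_ceiling_kw else "beyond conventional air cooling"
    print(f"{rack_kw:>4} kW rack: ~{airflow_cfm:,} CFM of airflow ({verdict})")
```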
Because current data centers aren’t necessarily well-suited for AI workloads, AI demand will likely result in data centers designed specifically for AI. SemiAnalysis projects that by 2028, more than half of data centers will be devoted to AI. Meta recently canceled several data center projects so they could be redesigned to handle AI workloads. AI data centers will need to be capable of supplying larger amounts of power to individual racks, and of removing that power when it turns into waste heat. This will likely mean a shift from air cooling to liquid cooling, which uses water or another heat-conducting fluid to remove heat from computers and IT equipment. In the immediate future, this probably means direct-to-chip cooling, where fluid is piped directly around a computer chip. This strategy is already used for Google’s tensor processing units (TPUs) designed for AI work and for Nvidia’s GB200 NVL72. In the long term, we may see immersion cooling, where the entire computer is immersed in a heat-conducting fluid.
Regardless of the cooling technology used, the enormous power consumption of these AI-specific data centers will require constructing large amounts of new electrical infrastructure: transmission lines, substations, and, if tech companies are to meet their climate goals, firm sources of low-carbon power. Unblocking the construction of this infrastructure will be critical for the U.S. to keep up in the AI race.
Our second question is what AI’s impact will be on the aggregate power consumption of data centers. Will AI drive data centers to consume an increasingly large fraction of electricity in the US, imperiling climate goals? Or will increasing efficiency mean a minimal increase in data center power consumption in aggregate, even as individual AI data centers grow monstrous?
This is more difficult to predict, but the outcome is likely somewhere in between. Skeptics are correct to note that, historically, data center power consumption rose far less than demand, that chips and AI models will likely get more efficient, and that naive extrapolation of current power requirements is likely to be inaccurate. But there’s also reason to believe that data center power consumption will nevertheless rise substantially. In some cases, efficiency improvements are being exaggerated: The efficiency improvement of Nvidia’s NVL72 is likely to be far less in practice than the 25x number used by Nvidia for marketing purposes. Many projections of power demand, such as those used internally by hyperscalers, already take future efficiency improvements into account. And while novel, ultra-low-power chip technologies like superconducting chips or photonics might be plausible options in the future, these are far-off technologies that will do nothing to address power concerns over the next several years.
In some ways, there are far fewer opportunities for data center energy reductions than there used to be. Historically, data center electricity consumption stayed flat largely because PUE kept falling (less electricity spent on cooling, UPS systems, and other overhead). But many of these gains have already been achieved: The best data centers already use just 10% of their electricity for cooling and other non-IT equipment.
Skeptics also fail to appreciate how enormous AI models are likely to become, and how easily increased chip efficiency might get eaten by demands for more computation. Internet traffic took roughly 10 years to increase by a factor of 20, but cutting-edge AI models are getting four to seven times as computationally intensive every year. Data center projections by SemiAnalysis, which take into account factors such as current and projected AI chip orders, tech company capital expenditure plans, and existing data center power consumption and PUE, suggest that global data center power consumption will more than triple by 2030, reaching 4.5% of global electricity demand. Regardless of aggregate trends, rising power demands for individual data centers will still create infrastructure and siting challenges that will need to be addressed.
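The difference between those growth rates is easier to see annualized; a quick comparison using the figures above:

```python
# Compare annualized growth rates: internet traffic vs. frontier AI training compute.
traffic_growth_per_year = 20 ** (1 / 10)       # ~20x over roughly 10 years
ai_growth_low, ai_growth_high = 4, 7           # 4-7x per year, per the estimate above

print(f"Internet traffic: ~{traffic_growth_per_year:.2f}x per year")
print(f"Frontier AI training compute: {ai_growth_low}-{ai_growth_high}x per year")

# Cumulative growth over five years at these rates:
years = 5
print(f"After {years} years: traffic ~{traffic_growth_per_year ** years:.0f}x, "
      f"AI compute ~{ai_growth_low ** years:,}x to {ai_growth_high ** years:,}x")
```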
Where we’re headed
The rise of the internet and its digital infrastructure has required the construction of vast amounts of physical infrastructure to support it: data centers that hold tens of thousands of computers and other IT equipment. And as demands on this infrastructure rose, data centers became ever larger and more power-intensive. Modern data centers demand as much power as a small city, and campuses of multiple data centers can use as much power as a large nuclear reactor.
The rise of AI will accelerate this trend, requiring even more data centers that are increasingly power-intensive. Finding enough power for them will become increasingly challenging. This is already starting to push data center construction to areas with available power, and as demand continues to increase from data center construction and broader electrification, the constraint is only likely to get more binding.
The Case for a NIST Foundation
Executive summary
The National Institute of Standards and Technology’s (NIST) mission is to promote American competitiveness through advancing measurement science, standards, and emerging technologies. Despite its size — it receives just half a percent of federal R&D funding — the agency has an impressive track record. Thanks to its strong history of performance, today NIST is a key agency underpinning the government’s plan to maintain American leadership in emerging technologies. In recent years its responsibilities have expanded to include:
- Distributing $39 billion in incentives and $11 billion in R&D investments to develop semiconductor manufacturing in the United States.[ref 1]
- Developing the foundational science of AI measurement and safety.[ref 2]
- Building cybersecurity guidelines to help protect the nation against increasingly sophisticated cyber attacks.[ref 3]
However, NIST may not be adequately equipped to deliver on its mission. Resource gaps have left the agency with crumbling buildings and leaking ceilings, damaging cutting-edge scientific equipment and forcing workers out of their offices.[ref 4] The rapid growth of industry salaries for emerging technology jobs means that NIST can no longer compete with the private sector for top talent. China’s manipulation of international standards organizations means that U.S. experts risk losing influence over which technical standards succeed on the world stage. And during an era of rapid progress in emerging technologies like artificial intelligence, the agency is constrained by the slow pace of new funding and hiring. Appropriations can solve some of these problems, but not all. Federal hiring regulations, the slow pace of the appropriations cycle, and restrictions on how agencies can engage with the private sector and foreign entities mean that NIST’s ability to deliver on its mission faces limits.
Many agencies have faced these problems in the past, and there exists an underrated mechanism for addressing them. Congress has long used “agency foundations” as a flexible vehicle to complement agencies’ missions by deploying philanthropic investment.[ref 5] The Foundation for the National Institutes of Health (FNIH) supports NIH by funding fellowships to attract top scientists to the agency.[ref 6] The Centers for Disease Control and Prevention’s (CDC) foundation hosts an emergency response fund, which raised nearly $600 million in the early stages of the COVID-19 pandemic to distribute 8.5 million pieces of PPE and hire more than 3,000 surge health workers.[ref 7] The Foundation for Food and Agriculture Research (FFAR) supports the Department of Agriculture by hosting ambitious prize competitions in service of the agency’s mission.[ref 8] These foundations have been an efficient mechanism for amplifying their agency’s work, with funds raised greatly exceeding their annual appropriations for administration, averaging a return of $67 for every $1 in federal contributions.[ref 9]
We believe it’s time for a foundation for NIST, to give the agency the flexibility that other R&D agencies already enjoy. Much like existing foundations, a NIST foundation should not substitute for appropriations. Instead, it should provide support for activities that NIST is not well-suited to do itself. In this report, we propose four ways a NIST foundation could supercharge the agency’s work on emerging technologies:
- Strengthen the United States’ influence over global standards by supporting increased private sector participation in standard-setting, especially from startups and small- and medium-sized enterprises (SMEs).
- Attract top scientists and engineers to NIST to work on research missions in the national interest, by hosting an ambitious technical fellowship and comprehensive benefits program.
- Give NIST the ability to respond rapidly to new technological developments by quickly spinning up new public-private partnerships.
- Accelerate the adoption of emerging technologies by incubating new emerging technology consortia and hosting ambitious prize competitions.
Recently, the bipartisan Expanding Partnerships for Innovation and Competitiveness (EPIC) Act passed the House Science Committee,[ref 10] garnering endorsements from more than forty high-profile science and innovation organizations and leading voices, including four former NIST directors.[ref 11] The bill seeks to establish a foundation for NIST, and lays out a sensible framework for its design. The House and Senate should prioritize its full passage. In doing so, they should pay close attention to two key challenges: ensuring effective safeguards against conflicts of interest between the foundation and its donors, and preventing needless delays in getting the foundation to full operational capacity during a critical period.
Why NIST?
NIST’s role among federal agencies is unique. It is not a regulator, and it doesn’t focus on a particular set of scientific or technical domains. Rather, its focus is at a higher level of abstraction: the science of measurement (“metrology”), and using that science to help create standards for technologies.
Why is the science of measurement useful for technological innovation? Once it’s possible to measure something, it then becomes possible to test it and make it better. Technical standards provide industries with a common language to facilitate global trade, and enable scientists and engineers to work on common goals that cut across technical disciplines. NIST’s mission is thus tightly linked with American innovation and technological competitiveness.[ref 12]
A conference of transportation officials funded by the Rockefeller Foundation led to the development of “National School Bus Chrome,” or Color 13432 in NIST’s Federal Standard No. 595a. The color was selected based on careful deliberation to maximize conspicuousness and the legibility of black writing in semi-darkness.
The NIST deadweight machine, the largest machine of its kind in the world, is a three-story, million-pound stack of steel disks used to measure the thrust of powerful engines, providing the means to calibrate new jet and rocket technologies. Image credit: Jennifer Lauren Lee/NIST.
In pursuit of its mission, NIST has punched far above its weight. Of the 19 Nobel Prizes in Physics awarded to American scientists since 2000, three (~15%) were for work done at NIST.[ref 13] A further two Nobel Prizes in Physics were directly enabled by measurement work done at NIST.[ref 14] These achievements have come despite NIST receiving less than half a percent of federal R&D funding, far less than other R&D-focused agencies.
NIST has played an especially important role in facilitating technological innovation in emerging technologies. A few examples:
- Biotechnology: In recent decades, a new class of drugs called “biologics” has emerged to supplement chemically synthesized drugs like aspirin. Biologics are built from proteins and DNA using living systems like microorganisms, plants, and animal cells, and are used to treat a wide range of conditions, from arthritis to cancer. But compared to their simpler chemical counterparts, biologics are much harder to manufacture — being composed of living matter, they’re almost impossible to replicate perfectly. These slight differences between batches create issues for quality control and safety. To address this, NIST developed “NISTmAb,” a stable reference molecule with extremely well-understood properties.[ref 15] Biologics manufacturers use NISTmAb to test that their measurement and manufacturing tools work as intended, and the molecule has been widely adopted by the biopharmaceutical industry (both in the U.S. and internationally) since its release in 2016.
- Quantum information science: In 1994, Peter Shor of Bell Labs showed that a hypothetical sufficiently powerful quantum computer could break RSA encryption, the technology used across the internet to keep data private and secure.[ref 16] In subsequent decades, quantum computing progressed from theory to rudimentary processors, and some scientists began to predict even odds that quantum computers powerful enough to break RSA encryption would emerge by 2031.[ref 17] In response to these developments, in 2016 NIST launched the Post-Quantum Cryptography Standardization program to discover and implement new “quantum-resistant” cryptography schemes, and in 2022 released an initial set of four algorithms.[ref 18]
- Artificial intelligence: For years before the current AI boom, NIST worked behind the scenes on the science of AI measurement and the development of AI standards. The MNIST database, famous in the field of machine learning, is derived from handwriting data originally collected at NIST in 1995 and has served as one of the most important benchmarks for testing and comparing image recognition algorithms, one of the earliest applications for neural networks.[ref 19] Other AI-focused standards developed at NIST include the AI Risk Management Framework, the Face Recognition Vendor Test, and standards for high-performance computing security.
- Space flight: It costs around $1,000 to launch a pound of payload into low Earth orbit using SpaceX’s Falcon series rockets. Given the price, and the costs of space payloads themselves, it’s important to ensure that rocket launches work reliably. Success requires the force generated by the rocket to be known and controllable, but the amount of force generated by rockets makes measurement extremely difficult. The NIST “deadweight machine” (the largest machine of its kind in the world) solves this problem — a three-story, million-pound stack of steel disks used to calibrate the large-force sensors used throughout the U.S. aviation and space industries.[ref 20]
In recent years, NIST’s role in emerging technologies has dramatically expanded. NIST is the home of the CHIPS Program Office and the CHIPS R&D Office, responsible for distributing $39 billion in incentives and $11 billion in R&D to strengthen semiconductor manufacturing and innovation in America. It has also been given a prominent role in implementing three recent executive orders on cybersecurity, biotechnology, and artificial intelligence. In cybersecurity, NIST has been entrusted to develop guidelines to protect government data and critical infrastructure from increasingly sophisticated cyber attacks. In AI, NIST has been tasked with what is perhaps one of the most challenging scientific endeavors of our time: building the science of AI measurement and safety from its current pre-paradigmatic state to a point where we can accurately evaluate the capabilities and mitigate the potential risks of new systems.
Despite its critical role, we are not equipping NIST to meet these challenges. But there is an underrated mechanism that can address many of NIST’s most urgent problems. “Agency foundations,” which Congress has long used, can flexibly complement agencies’ missions by deploying philanthropic investment.
What are agency foundations, and why are they useful?
Since the 1980s, Congress has created seven R&D-focused nonprofit foundations to support the missions of different agencies.[ref 21] Each foundation solicits philanthropic investment to complement its agency’s work while guarding against potential conflicts of interest with donors. Agency foundations have generally been quite successful in amplifying their agency’s mission, with funds raised greatly exceeding their annual appropriations for administration, averaging a return of $67 for every $1 in federal contributions.[ref 22]
Because of differences in the missions and structures of different agencies, these foundations all play different roles. Generally, however, they focus on programs within one or more of the following areas:
- Attracting and retaining top talent
- Coordinating broad and/or urgent projects through public-private partnerships
- Supporting commercialization of federal R&D
Across these areas, many R&D-focused agencies have benefited heavily from the support of their affiliated foundations. However, one R&D agency conspicuously lacks the support of a foundation: NIST. Perhaps more than any other agency, NIST is deeply linked to the future of emerging technology in the United States, as it helps define the technical foundation for innovation in artificial intelligence, biotechnology, quantum information sciences, and next-generation communications. We believe it’s time for a new foundation to support NIST’s work.
What should a NIST foundation do?
Ultimately, the priorities of an agency foundation are determined by its board, which works to support the agency’s mission based on the strengths, capability gaps, and needs of the agency. The role of an agency foundation is not to substitute for Congressional appropriations, but instead to provide support for activities that the agency is not well-suited to do itself. Different R&D-focused agency foundations therefore have different focuses, but generally fall within areas they’re uniquely positioned to execute, such as attracting and retaining talent, coordinating broad or urgent projects, and aiding with technology transfer and commercialization.
Compared to other R&D-focused agencies, NIST’s key strength is in working with industry, serving as a trusted, neutral convener. It also has an unparalleled level of programmatic efficiency, delivering high-quality programs across a wide range of technical domains with a relatively small budget. NIST also fulfills a critical role in the U.S. emerging technology ecosystem, building the fundamental measurement techniques, tools, and frameworks that advance R&D, enabling the development of technical standards.
Like many other agencies, NIST often lacks the ability to attract top technical talent in emerging technology fields, is constrained in spinning up new projects due to slow funding allocations and procurement processes, and has limits on how it can engage with the private sector and foreign entities. At a time when leadership in emerging technologies has never been more important, a NIST foundation can help fill these gaps, serving as a force multiplier for NIST’s work. Here, we present four concrete ways a NIST foundation could do this.
1. Strengthen the United States’ role in global standard-setting
The competition for emerging technology leadership turns in part on how effectively a country’s firms and experts can influence the development of technical standards. Standards help shape which products are successful in global markets, and which value systems are embedded in technologies.
American success in shaping international standards comes from the fact that technical standard-setting in the international arena is led and conducted by private sector experts (often from competing firms) and academia in an open and consensus-based fashion, with governments playing an ancillary role. This inherently plays to the United States’ strengths as an innovation ecosystem.
Standards are also voluntary and nonbinding, meaning that if a bad faith actor manages to get a bad standard approved, they can’t force anyone to use it. This ensures that market forces can pick winners. For example, when international standards for shipping containers were first developed in the 1960s by the International Organization for Standardization (ISO), the Soviet Union pushed the ISO shipping container committee to adopt its alternative shipping container specification, instead of the version supported by U.S. firms.[ref 23] Rather than holding up the standards process to reach a consensus on a single standard, the committee decided to simply release multiple standards, and let the market decide. After demand for the Soviets’ alternative shipping container was non-existent, their preferred variant was removed from the standard several years later.[ref 24]
However, this good faith approach to standards development is not impervious to challenges. In contrast to the U.S. industry-led system, China puts the state at the core of its standards development activities:
- China has begun to develop and use China-specific (rather than international) standards domestically as a protectionist measure.[ref 25]
- China also uses development initiatives to promote Chinese standards abroad. China’s 2021 national standards strategy openly calls for use of the Belt and Road Initiative (BRI) to promote the use of Chinese standards outside China.[ref 26] In June 2019, China announced it had signed 85 agreements on technical standardization with 49 countries and regions as part of the BRI.[ref 27]
- Several reports have emerged of Chinese initiatives to fix the outcome of votes on standards by forcing Chinese participants to vote for the government-preferred option, rather than the option with the strongest technical merit.[ref 28] U.S. policymakers are also concerned about China embedding policy objectives (such as a lax approach to protecting user privacy from surveillance) into technical standards for artificial intelligence and facial recognition.
Despite debate over the nature of new threats to the international standards system,[ref 29] the solution to ensuring continued U.S. leadership is likely the same as it has always been: Give U.S. technical experts the time and resources to develop high-quality standards, and let the market decide.
Within this scope, a foundation should focus on areas that are beyond NIST’s ability or remit to deliver. One promising focus area might be helping industry actors who face structural barriers to engaging in international standards development, such as startups and smaller firms.[ref 30] One standards expert estimates it costs a firm about $300,000 a year for one engineer to engage fully in the standard-setting process, a price many startups and academic experts can’t afford.[ref 31] Hosting international standards meetings in the United States is also becoming rarer due to a lack of funding for meetings compared to other countries, increasing the cost of participation for U.S. companies and academic experts.[ref 32]
While it would be advantageous to incentivize engagement from startups and small businesses in the international standard-setting process, for NIST itself to favor smaller firms over others could compromise its position as a neutral arbiter for industry. NIST is also limited in the kinds of activities it can fund, especially when it comes to international travel. However, a foundation would not have these same restrictions, and could fund and/or manage a number of different programs aimed at removing structural barriers to standards engagement for smaller firms and academic experts.
Specifically, a foundation could:
- Subsidize travel and salary costs for U.S. technical experts from startups and small businesses, to help them engage in both domestic and international standard-setting for emerging technologies.[ref 33]
- Provide funding, event hosting, and logistics services to enable more international standards meetings to be hosted in the U.S.
- Provide support for public-private partnerships to help U.S. experts new to standards-setting processes learn how to be effective at a standards development organization (SDO).[ref 34]
2. Attract top scientists and engineers to NIST to work on key missions in the national interest
NIST employs around 3,400 personnel across the United States, and its workforce has included some of the world’s top scientists and engineers. David Wineland, awarded the Nobel Prize in Physics for developing new methods to measure and manipulate quantum systems, conducted his work during a 42-year tenure at NIST.[ref 35] Unbeknownst to many, NIST also hosts approximately 2,700 “associates” — guest researchers from academia, industry, and other government agencies. Despite their much shorter tenure relative to the average NIST staffer, NIST associates also have an impressive history of scientific achievement. Dan Shechtman, a visiting researcher in 1982, won the Nobel Prize in Chemistry for his discovery at NIST of crystal structures previously unknown to science.[ref 36]
Dan Shechtman (middle) at NIST in the 1980s during the research sabbatical that led to his Nobel Prize-winning discovery of “quasicrystals.” The Nobel Committee stated that his discovery “forced scientists to reconsider their conception of the very nature of matter.” Image credit: NIST.
But in today’s most important emerging technology fields, private sector salaries far outpace what NIST and other R&D-focused agencies can offer. As a result, NIST’s legacy of attracting and retaining Nobel-level technical talent is at risk. A Government Accountability Office report found that NIST faces stiff competition and declining applications for highly specialized candidates.[ref 37]
In artificial intelligence, top industry AI labs offer compensation many multiples higher than top scientists and engineers can expect working for the federal government.[ref 38] If NIST is to rapidly advance the scientific study of AI measurement, evaluation, and safety, it will need top AI science and engineering talent to keep up with the pace of AI progress.
Besides the field of artificial intelligence, many of NIST’s other laboratory programs work at the frontiers of measurement science in domains where top talent is sorely needed. For example, NIST’s quantum technology research and metrology programs need top physicists and mathematicians to help build standards for securing encryption techniques against attacks from powerful quantum computers, and to develop new measurement techniques that will allow quantum computers to operate in a much wider variety of temperature ranges than is currently possible.[ref 39] As with AI, salaries for these roles in industry far exceed what top scientists can get at NIST.[ref 40]
Much like analogous programs at the Department of Agriculture’s Foundation for Food and Agriculture Research,[ref 41] the Department of Defense’s Henry M. Jackson Foundation,[ref 42] and the Foundation for the NIH,[ref 43] a fellowship would focus on recruiting world-class scientists and engineers to work on specific challenge areas defined in coordination with the agency. Doing this through a foundation allows for compensation closer to what top talent can attract in industry.
In addition to providing funding for fellowship salaries, a NIST foundation could provide branding and marketing to boost the program’s prestige, and services to help manage and place fellows. Fellows could be placed within NIST using the Intergovernmental Personnel Act, which allows the temporary assignment of personnel between nonprofits/academia and federal government agencies.[ref 44]
Compared to other privately funded technical fellowships in government agencies, housing a fellowship through an agency foundation comes with two key benefits:
- Greater transparency and accountability. Agency foundations are established by an act of Congress, which determines their broad mission, selects the board, and provides funding for administration. This gives Congress greater oversight over how a fellowship program is run.
- Close ties to the agency’s needs. Thanks to a close relationship with the agency (agency leadership typically has a non-voting position on their foundation’s board), an agency foundation is uniquely positioned to meet NIST’s talent needs, identifying useful skill sets and placing fellows in appropriate programs.
Another useful talent-related role a NIST foundation could take on is helping to make NIST a more attractive place to work, especially for associates. At present, NIST cannot provide many forms of benefits or compensation to its guest researchers, such as health insurance, payment for work-related travel, hosting events, or even recognizing their work through awards or ceremonies. A foundation could fill these gaps, and make NIST a more attractive place for all staff through benefits like events, training, and conference travel. In addition, a foundation could donate the best equipment and tooling to support work in key challenge areas, like compute access for artificial intelligence research.
3. Give NIST the broad ability to respond rapidly to new technological developments
Throughout its history, NIST has been called upon for assistance in times of need. In the closing months of World War II, the U.S. 86th Infantry Division captured a Nazi official in the town of Mattsee, Austria, and made an incredible discovery:[ref 45] The official was in possession of the Crown Jewels of Hungary, treasures almost 1,000 years old.[ref 46] Fearing the jewels could be captured by the Soviets, the Hungarian Crown Guard asked the United States to keep the delicate treasures safe. The agency tasked with covertly securing them? NIST’s precursor, the National Bureau of Standards. The Bureau had already proven its treasure-preserving skills in 1940, when it was commissioned to build state-of-the-art encasements for the U.S. Constitution and Declaration of Independence. It was now enlisted to design protective containers for the jewels and to secretly accompany them across the Atlantic for storage in the United States until they could be returned to Hungary.
The Hungarian Crown Jewels, consisting of the Crown, Sword, and Globus Cruciger. Image credit: Qorilla Schopenhauer/Wikimedia Commons.
In subsequent decades, NIST has developed a track record of responding to less exciting, but just as important technology and standards-related emergencies. After NIST’s investigation of the World Trade Center collapse, the agency was authorized to establish similar teams to investigate the scientific lessons of building failures. The National Construction Safety Team (NCST) Act gives NIST the responsibility to dispatch teams of experts within 48 hours of major building disasters to understand their cause, and to recommend changes to relevant standards to help prevent future tragedies.[ref 47]
However, outside the domain of construction, NIST lacks a broad capability to rapidly respond across different scientific domains. This is not a unique gap. Due to slow funding velocity and constraints under the Federal Acquisition Regulation (FAR), the principal set of rules governing procurement by federal agencies, many agencies struggle to spin up new projects quickly when urgency is required, as in health crises. Foundations can help fill such gaps in two ways: first, by leveraging their flexible funding and connections with the private sector to coordinate a large number of technical partners; and second, by providing project management services and rapid fundraising to scale up projects quickly. Examples of where this has been successful include:
- In January 2020, two months before the World Health Organization officially declared COVID-19 a pandemic, the CDC Foundation activated its emergency response fund, soliciting nearly $600 million in donations to distribute more than 8.5 million pieces of PPE for frontline workers and hire more than 3,000 surge healthcare staff to support pandemic relief efforts.[ref 48] The Foundation for the NIH also established and managed the Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) project, coordinating across agencies, academia, and industry.[ref 49]
- NIH has historically struggled to work with a wide range of partners on broad projects to tackle ambitious research agendas at the frontier of health sciences. The Foundation for the NIH has been the home for a number of such projects, including the Biomarkers Consortium, the Accelerating Medicines Partnership, the Alzheimer’s Disease Neuroimaging Initiative, and the Grand Challenges in Global Health Initiative.[ref 50]
- The FDA operates the Sentinel Initiative, an active surveillance system that monitors the safety and effectiveness of drugs and other medical products by aggregating electronic healthcare data across the country. To make the system available outside of the FDA, the Reagan-Udall Foundation for the FDA launched the IMEDS project.[ref 51]
Areas where this would be useful might include:
- Slowing the spread of new pathogens during a pandemic: When developing tests and protective equipment to slow the spread of new pathogens, manufacturers urgently need standardized reference materials to measure the effectiveness of their products. During the COVID-19 pandemic, NIST helped develop safe synthetic fragments of RNA that manufacturers can use to calibrate their instruments and develop quality controls,[ref 52] and designed tests to measure the filtration performance of mask fabrics against the virus.[ref 53] However, it still took 6 months from the start of the pandemic for reference materials to be developed, after which they were freely distributed to laboratories across the globe.[ref 54] With access to rapid funding to proactively develop reference materials for new pathogens as they emerge, this time could be shortened, potentially saving millions of lives.
- Stopping the spread of dangerous new street drugs: Of the roughly 107,000 drug overdose deaths in the United States in 2021, around 70,000 involved fentanyl.[ref 55] Just a few years earlier, fentanyl deaths were almost non-existent. This reflects a growing problem: New drugs are emerging at a rapid pace, making it hard to track and combat them. NIST’s Rapid Drug Analysis and Research (RaDAR) conducted a trial with the Maryland Department of Health to analyze residues from illegally purchased drugs to investigate the prevalence of new substances. Much to its surprise, RaDAR found that the animal tranquilizer xylazine[ref 56] had become widespread, showing up in over 60% of samples.[ref 57] Like many dangerous drugs, fentanyl and xylazine are primarily manufactured in foreign labs and smuggled into the United States.[ref 58] If rapid detection techniques such as those developed by RaDAR could be scaled up and used at key locations across the country, it could be possible to build an early warning system to combat new dangerous drugs as they emerge.
- Rapid capability evaluations for the risks and benefits of new AI systems: Over the last 10 years, AI has surpassed human baselines for performance in domains such as reading comprehension and grade school math, and is rapidly approaching human performance in programming.[ref 59] The rate at which humans are being surpassed at new tasks also seems to be increasing. Due to the nature of deep learning, however, predicting these capabilities in advance is challenging. In June 2021, a panel of expert forecasters was asked to predict when AI systems would achieve 50% accuracy on a benchmark of challenging competition-level math problems.[ref 60] The forecasters predicted AI systems would reach this capability level by 2025. In reality, the 50% target was hit just 11 months later (as of today, it sits at around 90%).[ref 61] In the future, new kinds of AI systems will likely emerge that require the rapid development of evaluation techniques in new domains. New evaluation techniques are important to develop early, both to understand the risks of wide deployment of a new system and also to understand where new systems can be leveraged to help solve important problems. While NIST’s AI Safety Institute has been set up to focus on developing new ways of measuring the capabilities of AI systems,[ref 62] its ability to rapidly deploy resources to test new kinds of AI systems is limited by the yearly appropriations process, which is in turn limited by Congress’ ability to predict AI capabilities over a year-long time horizon. A foundation could help support NIST’s work on evaluating AI capabilities by providing emergency funding for evaluating new AI breakthroughs at a pace more in keeping with the rapid pace of new AI advances, rather than the pace dictated by the appropriations process.
Across these and other areas, a foundation would enable NIST to quickly respond to unexpected challenges on a timeline not suitable for the appropriations process, and in cases where rapid, proactive engagement of private sector and academic experts is crucial.
4. Accelerate the adoption of emerging technologies by supporting pre-standardization research
Before a standard for a new technology is possible, technical experts need good answers to research questions like:
- What are the possible applications of the technology?
- What are the potential downsides of the technology?
- What are the design paradigms for the technology?
- How can the technology’s performance be measured and tested?
The process of answering these questions is called “pre-standardization.” Timely and effective pre-standardization for a new technology helps experts in the field proactively address the technology’s potential downsides and speed up its adoption in the marketplace.
NIST is intimately involved in the measurement and testing part of the pre-standardization process. It also sometimes takes a more active role, convening technical experts and publishing guidelines. One such example is cloud computing. Initially, many firms and research groups were hesitant to adopt the technology due to concerns about vendor lock-in and security. Anticipating how such concerns could slow the growth of the U.S. cloud computing industry, in 2011, NIST released a standards roadmap for cloud computing, helping to plan out a set of required standards while the industry was in its infancy.[ref 63] It then helped create a vendor-neutral reference architecture to increase interoperability and portability,[ref 64] helping to address concerns about lock-in, as well as a security reference architecture to address security and privacy concerns.[ref 65]
However, pre-standardization is most useful when there is broad participation from experts in the field. By default, the kind of research done by industry and academia is typically focused on capabilities and applications. Research on areas like measurement, testing, interoperability, and risks is comparatively neglected, especially relative to its importance for technology adoption.
Within this scope, a NIST foundation could lean on its ability to be forward-looking and to flexibly deploy funding to a wide range of actors. Specifically, a foundation could:
(i) Incubate new consortia in emerging technology areas
Public-private consortia such as QED-C (for quantum computing),[ref 66] VAMAS (for materials science),[ref 67] and the Standards Coordinating Body for Regenerative Medicine[ref 68] have been incredibly useful for pre-standardization in their respective fields. A NIST foundation could take a forward-looking approach to anticipate which emerging technology fields might benefit from new consortia for pre-standardization, and provide funding to set them up and engage a wide range of technical experts.
(ii) Host ambitious prize competitions to incentivize pre-standardization research
NIST has a long history of issuing challenges to bring technical communities together to solve ambitious problems. In 1972, NIST’s precursor, the National Bureau of Standards, initiated a new project to improve computer security by developing a shared encryption standard, issuing two challenges to the public to develop new algorithms.[ref 69] The winning algorithm (submitted by IBM) was enshrined as the first Data Encryption Standard in 1977 and quickly adopted internationally.[ref 70] Over the last 15 years, we’ve seen an increased focus on a different kind of challenge to try to incentivize useful R&D: prize competitions. In a prize competition, a reward (usually cash) is offered to participants to achieve a specific goal. In principle, prize competitions have a range of benefits as a mechanism for promoting innovation:
- They transfer the downside risk to participants, allowing the party offering the prize to set an ambitious goal without risking significant financial loss.
- They can attract a wide range of participants taking different approaches; useful for building a technical community of interest around a problem.
- More broadly, they can boost the prestige of working on the problem; useful for field-building in a new technical domain.
These properties make prize competitions a promising method for incentivizing broad participation in pre-standardization research. The America COMPETES Act of 2010, which gave agencies broad authority to carry out prize competitions, increased the total amount of money offered by federal prize competitions from around $250,000 in 2011 to over $37 million in 2018.[ref 71] With this authority, NIST can offer cash prizes and appoint judges from the private sector to evaluate submissions.[ref 72]
However, as at many agencies, prize competitions form only a tiny part of NIST’s spending on R&D and innovation — about 0.1% of its total R&D budget since 2011.[ref 73] Prizes are still seen as an unconventional way to fund R&D, and, compared to more conventional funding mechanisms such as grants, project leads may see them as too risky to take large bets on. Additionally, under the America COMPETES Act, the head of the agency must personally approve any prize competition offering more than $1 million, and a competition over $50 million cannot be offered without written notice to Congress and a 30-day waiting period. Comparing the distribution of prize amounts offered by the private sector with those offered by U.S. government agencies, larger prizes (>$1 million) make up a much smaller proportion of government competitions than of private ones.
Distribution of prize pools in private vs. government prize competitions
Violin plots of the distribution of prize pools across private vs. U.S. government-sponsored prize competitions. All prizes above $100,000 USD are included, with monetary values inflation-adjusted using CPI data to reflect May 2024 values.[ref 74]
A foundation could work with emerging technology consortia and programs across NIST to identify unsolved technical problems for which a prize competition might be a promising path to a solution. The Foundation for Food and Agriculture Research (FFAR) has taken on a similar role for the Department of Agriculture, recently launching the $6 million Egg-Tech Prize to help develop technologies to determine an egg’s sex before it hatches.[ref 75] Running prize competitions through an agency foundation comes with a number of potential benefits:
- The foundation could offer larger prizes with less administrative friction and risk for NIST, and could stand up large prizes more quickly when urgency is useful.
- The foundation could expand NIST’s effective bandwidth to manage prize competitions across multiple areas, by taking on the administrative work of marketing and running the competition.
- The foundation could coordinate with its industry partners to ensure that promising entries have a path toward rapid adoption, scaling, and commercialization, including by helping to manage IP agreements.
Specific research agendas where a prize competition (and pre-standardization research in general) could be effective include:
- Enabling materials science breakthroughs for semiconductors: Continuing to push the state of the art in semiconductor manufacturing will likely require new high-performance materials with which to build chips. However, it is common for materials scientists to report impressive results for proposed new materials without providing reproducible results.[ref 76] NIST’s Multiscale Modeling and Validation of Semiconductor Materials and Devices program aims to address this by helping to build predictive tools that can take a description of a new material (e.g., its chemical formula or atomic structure) and predict its properties (e.g., the temperature at which a superconductor transitions from a normal conductive state to a superconductive state). [ref 77] To do this accurately, benchmarks are needed to evaluate the validity of the predictions. To this end, in February 2023 NIST released the JARVIS-Leaderboard: an open-source collection of community-submitted benchmarks against which predictive tools for materials science can be evaluated.[ref 78] To incentivize the development of high-quality benchmarks and predictive models, a NIST foundation could offer public prizes for top contributors to the leaderboard.
- Developing new high-accuracy techniques for DNA sequencing: DNA sequencers take long strings of a person’s DNA, and analyze them to determine their genetic code (represented as a sequence of letters). The code can then be compared to a well-defined “reference” sequence to identify differences in the two codes. Doing this process efficiently and accurately is highly useful for diagnosing genetic disorders, tailoring personalized medicines based on an individual’s genetic makeup, and advancing basic genetic research. However, many DNA sequencers have biases or blind spots for certain sequences (usually sequences from people of non-European ancestry, which are underrepresented in genetic databases) that contribute to uncertainties or errors — this can lead to hundreds of thousands of disagreements between different sequencing results for the same individual. To address this, NIST’s Genome in a Bottle Consortium is developing reference DNA material for a broader range of genetic backgrounds.[ref 79] Additionally, NIST’s Genome Editing Consortium seeks to build reference materials and benchmarks to measure the effectiveness of gene editing techniques.[ref 80] A NIST foundation could leverage this work by launching a prize competition to incentivize the development of breakthrough new gene sequencing and editing technologies. Submissions to the competition could then be evaluated using NIST’s suite of benchmarks and measurement tools.
- Complex capabilities benchmarks for artificial intelligence: In the context of AI, a “benchmark” is a standardized, systematic evaluation of a system against a particular set of questions or tasks. For example, SWE-bench contains a set of tasks that would normally be given to software engineers, to evaluate AI systems’ programming capabilities in a real-world-analogous context.[ref 81] While benchmarks are extremely useful, they are time-consuming to create, and most of today’s widely used benchmarks are lists of multiple-choice questions. To complement the NIST AI Safety Institute’s work on AI evaluation and measurement, a NIST foundation could launch a public competition to develop new benchmarks (or tasks within a broader benchmark) that are more useful for tracking and predicting AI capabilities with real-world consequences, like the ability to autonomously perform scientific research in different fields.
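Across these agenda items, the underlying mechanics are similar: a submitted model or technology is scored against a curated set of reference values or tasks. The sketch below is a minimal, hypothetical illustration of that scoring step for a property-prediction benchmark; the function, material names, and numbers are assumptions made for illustration, not NIST data or the actual JARVIS-Leaderboard interface.

```python
# Hypothetical benchmark scoring sketch (illustrative values only).

def score_submission(predictions, reference):
    """Mean absolute error of predicted vs. reference property values."""
    common = predictions.keys() & reference.keys()
    if not common:
        raise ValueError("Submission shares no entries with the benchmark.")
    return sum(abs(predictions[k] - reference[k]) for k in common) / len(common)

# Toy benchmark: superconducting transition temperatures (kelvin)
# for made-up materials, plus one submission's predictions.
reference_tc = {"material_A": 39.0, "material_B": 92.0, "material_C": 18.5}
submission_tc = {"material_A": 41.2, "material_B": 88.7, "material_C": 20.1}

print(f"MAE: {score_submission(submission_tc, reference_tc):.2f} K")
```

A prize tied to a leaderboard of this kind would simply reward the submissions (whether benchmarks or predictive models) that score best under agreed-upon, publicly documented metrics.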
Challenges & next steps for a NIST Foundation
Recently, the bipartisan Expanding Partnerships for Innovation and Competitiveness (EPIC) Act passed out of the House Science Committee, gaining endorsements from more than forty science and innovation organizations and leading voices, including four former NIST directors.[ref 82] The bill provides the basic framework for a NIST foundation, consisting of measures that:
- Establish the foundation, called the “Foundation for Standards and Metrology.”
- Lay out a broad mission (supporting NIST’s mission to advance measurement science and technical standards) and an indicative set of activities (international engagement, public-private partnerships, facility expansion/improvement, commercialization, education/outreach, NIST associate support).
- Set out the structure of the board. This consists of eleven voting members chosen by NIST’s Director from a list of candidates provided by the National Academies,[ref 83] and four non-voting members: NIST’s Director and the Associate Directors of each of NIST’s main divisions.
- Direct the NIST Director to designate a set of NIST employees to serve as liaisons to the foundation.
- Establish the position of a board-appointed Executive Director to run the foundation.
- Stipulate that foundation policies must hold all foundation employees and board members to conflict-of-interest standards, and prohibit employees or board members from participating in decision-making in areas where they have financial interests.
- Commit the foundation to submit a document to Congress describing its operational and financial goals, transparency processes, and plan for ensuring “maximum complementarity and minimum redundancy” with investments made by NIST.
- Commit the foundation to annual audits and yearly public reports, including full information on persons and organizations from which financial support is received.
- Authorize annual appropriations of $1.5 million for the foundation’s administration for five years (fiscal years 2025 through 2029), after which the foundation is intended to become financially self-sustaining.
These are sensible measures, and the House and Senate should prioritize the full passage of the bill. When doing so, we recommend they (together with NIST and other relevant stakeholders) keep in mind potential challenges and lessons learned from existing agency foundations:
(i) Conflicts of interest can manifest in subtle ways, and a NIST foundation’s policies and culture should be designed with this risk in mind
A large part of a foundation’s job is to build relationships with potential donors. These relationships must be coupled with clear rules and communication about the decision-making process, how donated funds will be used, and what kinds of engagement in the process the donor is entitled to. In practice, establishing clear firewalls between donors and the agency can be difficult for foundations to achieve:
- In 2015, the NFL backed out of a $16 million commitment to NIH research on traumatic brain injury after the NIH selected a grantee that wasn’t preferred by the NFL.[ref 84] While an investigation found that the integrity of the process was preserved, it also highlighted that FNIH (the NIH’s foundation) staff who managed the relationship with the NFL were too hesitant to chastise the NFL for inappropriate behavior and remind the league of the terms of their agreement.
- In 2018, the NIH canceled a $100 million study on the effects of moderate alcohol use, after finding that NIH personnel were engaged in inappropriate collusion with industry, while deliberately keeping FNIH in the dark about what was going on.[ref 85] While the FNIH was cleared of any explicit wrongdoing in an investigation, a report from NIH suggested that, with better screening, FNIH staff may have been able to uncover evidence of an ongoing conflict of interest.[ref 86]
A foundation for NIST should learn from these examples, and ensure that effective conflict-of-interest policies are developed and regularly revisited.[ref 87] In addition to written policies, it’s important that a NIST foundation creates a strong culture of integrity that sets clear boundaries with donors. This will require, at minimum, thoughtful hiring of its staff.
(ii) Delays are likely, but can to some extent be anticipated and mitigated
The coming months and years seem likely to be a critical period in emerging technology development. Firms are making massive investments in artificial intelligence and biotechnology, fields that seem likely to shape the global landscape of commercial competitiveness and national power in the 21st century. Many governments are pushing forward with new governance frameworks for these same technologies. At the same time, standards for these technologies are mostly either high-level or not yet conceived. This means that quickly establishing a NIST foundation could give NIST substantial flexibility and support during a period when success in its mission is particularly important.
Despite the benefits of moving quickly, delays are likely. The foundation for the Department of Energy (the Foundation for Energy and Security Innovation, or FESI) was authorized in the CHIPS and Science Act (passed in August 2022), which appropriated $1.5 million for its establishment in fiscal year 2023, and designated $30 million for its activities in fiscal year 2024.[ref 88] Last month, the Department of Energy announced the selection of FESI’s board, suggesting FESI is around a year behind schedule.[ref 89] Once a foundation has been established, it will also take time to reach its potential. In some cases, this can take years. The Foundation for the NIH was established in 1990, but it wasn’t launched until 1996, and it took another seven years for it to arrive at its current focus on public-private research partnerships.[ref 90]
Some potential challenges for establishing a NIST foundation can likely be anticipated and mitigated ahead of time. We suggest that:
- NIST should contact the National Academies early to establish a strategy and timeline for board selection, conditional on the passage of the EPIC Act.
- NIST should ensure it has a realistic plan (again, conditional on the passage of the Act) for allocating person-hours to move quickly to meet its part of the responsibilities in establishing the foundation, including funding to properly vet potential board members.
- Interested donors should proactively communicate to Congress, NIST, and the National Academies about the kinds of programs they would be excited to fund through a foundation.
Congress, NIST, and the National Academies are well-equipped to address these challenges. When doing so, it will be important to keep in mind the considerable benefits that an effective foundation would yield. With scalable and flexible support from a foundation, NIST will be much better positioned to support continued U.S. leadership in emerging technologies.
Acknowledgments
We thank Phil Singerman, Courtney Silverthorn, Nigel Cory, Walter Copan, David Hart, and Jason Matusow for their valuable feedback and input on this piece. Please note that participation does not necessarily imply endorsement of our conclusions.
Compute in America: Building the Next Generation of AI Infrastructure at Home
Broadly capable AI systems are poised to become an engine of economic growth and scientific progress. The fundamental insight behind this engine is “the bitter lesson”: AI methods that can independently learn by better taking advantage of massive amounts of computation (or “compute”) vastly outcompete methods that are hand-crafted to encode human knowledge.
Nowhere is this more evident than in today’s large language models. For all the massive capability gains in state-of-the-art language models over the last six years, the key difference between today’s best-performing models and GPT-1 (released in 2018) is scale. The foundational technical architecture remains largely the same, but the amount of computation used in training has increased by a factor of one million. More broadly, the amount of computation used to train the most capable AI models has doubled about every six months from 2010 to today.
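To make the scaling arithmetic concrete, the toy calculation below converts a growth factor into the number of doublings and years it implies under the roughly six-month doubling time described above; the figures are illustrative, not sourced estimates.

```python
import math

# Toy calculation: how long a given growth in training compute takes,
# assuming compute doubles roughly every six months (an approximation).

def years_to_reach(factor, doubling_time_years=0.5):
    """Years needed to grow by `factor` at the given doubling time."""
    return math.log2(factor) * doubling_time_years

for factor in (1_000, 1_000_000):
    print(f"{factor:>9,}x = {math.log2(factor):.1f} doublings "
          f"= about {years_to_reach(factor):.1f} years at a 6-month doubling time")
```

Under that assumption, a thousandfold increase takes about five years and a millionfold increase about ten.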
The physical manifestations of the bitter lesson are AI data centers: industrial facilities requiring tens of thousands of specialized computer chips, running 24 hours a day, and requiring enormous amounts of electricity. All told, a single AI data center can consume tens of megawatts of power, akin to the consumption of a city with 100,000 inhabitants. Such data centers are needed both to train new AI models and to deploy them at scale in applications like ChatGPT.
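The city comparison is easy to sanity-check with a back-of-the-envelope calculation. The sketch below assumes roughly 4,500 kWh of residential electricity per person per year (an approximation of typical U.S. residential consumption, used here only for illustration):

```python
# Back-of-the-envelope: average residential electricity draw of a city
# of 100,000 people, using an assumed per-capita consumption figure.
RESIDENTIAL_KWH_PER_PERSON_PER_YEAR = 4_500  # assumed approximation
HOURS_PER_YEAR = 8_760

population = 100_000
avg_draw_mw = population * RESIDENTIAL_KWH_PER_PERSON_PER_YEAR / HOURS_PER_YEAR / 1_000
print(f"~{avg_draw_mw:.0f} MW average residential draw for {population:,} people")
```

That works out to roughly 50 MW, in the same "tens of megawatts" range as a single large AI data center.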
Given these trends, we are likely on the cusp of a computing infrastructure build-out like no other. Technological, security, and feasibility challenges will follow. We will need to efficiently network millions of powerful chips together, design infrastructure to protect AI models from sophisticated cyberattacks, and generate massive amounts of energy necessary to run the centers.
These technical challenges create policy challenges: Are we creating conditions such that data centers for training future generations of the most powerful models are built here rather than overseas? Are incentives, regulations, and R&D support in place to ensure private actors develop technology with appropriate security? Is the supply of clean energy increasing at the pace necessary to fuel the grand projects of the next decade: the electrification of transportation, the manufacturing renaissance, and the AI infrastructure buildout?
Over the coming weeks, IFP will propose a plan for achieving these goals. In the first piece of the series, Brian Potter examines the technical challenges of building AI data centers. In the second piece, we will examine the key trends influencing the shape of AI development and what they will mean for building the data centers of the future. Lastly, we will analyze the policy levers available to the United States to ensure that future generations of data centers are built securely here.
Although the U.S. has been the innovation leader in artificial intelligence thus far, that status is not guaranteed. Across the value chain, other countries are investing heavily to capture a portion of the industry, whether it’s China investing in manufacturing new AI chips, or Saudi and Emirati sovereign wealth funds investing in new data centers.
We can hardly expect to control 100% of the AI market for years to come, nor should we. But only thoughtful, concerted policy action can ensure the future of AI is built responsibly in the United States. This series will offer a roadmap for maintaining American technological leadership.
Seven Frequently Asked Questions About NEPA
What’s the problem with NEPA?
Is the National Environmental Policy Act (NEPA) a problem, blocking clean energy and harming our ability to build infrastructure? Or is it a robust defense against environmental devastation?
The debate can be murky, and good data is hard to find. Here, we’ve answered some of the most common questions about the NEPA process and its impacts. A concise FAQ is followed by more detailed answers.
Are NEPA delays only caused by staff shortages?
Staffing shortages are one cause of delays, but they’re also a symptom of NEPA’s overly burdensome process. Agencies are understaffed largely because the standards for NEPA reviews have become increasingly unattainable. For environmental impact statements (EISs), page counts have increased from a handful of pages in 1970 to 1,703 pages in 2020. Former EPA General Counsel Donald Elliott estimated that 90% of the details in NEPA reviews are only included to ward off litigation.
The IRA has already set aside more than $1 billion in funding for federal permitting. But even with those additional resources, Secretary Granholm still cites permitting as the “biggest bottleneck” for clean energy deployment. Key permitting agencies like FERC have previously cited difficulty in finding experts who can navigate the permitting process.
Strengthening agency capacity to reduce delays has to come from both ends: Agencies need more staffing resources and expertise, but they also need increased discretion via legal deference to make decisions in the public interest. Efficiently processing permits requires both adequate staffing and a process that doesn’t impose impossible standards.
Is NEPA the problem, or are other permitting laws more onerous?
NEPA is much more than an umbrella statute. While some argue that NEPA is unfairly blamed for problems caused by all permitting laws, NEPA itself requires significant review beyond the other permits included in final NEPA documents (e.g., permits for Section 7 of the Endangered Species Act). NEPA requires an analysis of all significant environmental impacts, as well as of all reasonable alternatives to the proposed action. NEPA is also the most frequently litigated statute.
NEPA’s costs derive from the open-ended nature of environmental review requirements: How much review is enough? What counts as substantial environmental impacts? Without a substantive answer to these questions, NEPA creates an avenue for obstructionists to sue to block projects on procedural grounds.
Does NEPA limit clean energy deployment?
Yes, NEPA disproportionately harms clean energy and will increasingly be a drag on the clean energy transition. Historically, NEPA was a tool for slowing fossil projects. However, because of the enormous amount of infrastructure demanded by the clean energy transition, NEPA has become a much heavier burden on clean energy.
Perversely, NEPA reviews scrutinize clean projects more heavily than fossil fuel projects. Because of their large surface footprints, clean energy projects are more often required to complete Environmental Impact Statements (EISs) — the most burdensome reviews under NEPA. Current federal data trackers show 62% of energy-related projects undergoing EIS review are for clean energy, while only 16% are for fossil fuels. This disparity has led to solar projects being sued even more often than pipelines, and both solar and wind projects are canceled at higher rates than fossil projects.
Litigation of NEPA reviews is a huge burden on clean energy projects. In recent years, obstructionists have sued and delayed dozens of major projects.
On the flip side, the fossil industry has achieved significant streamlining relative to clean energy. Fossil projects are now overwhelmingly approved under shorter environmental assessments (EAs) rather than lengthy EISs. The fossil industry has also been given legislative categorical exclusion carve-outs that rapidly approve certain classes of fossil projects. For example, as of March 2024, the BLM NEPA register of ongoing fossil permits shows five EISs compared to 381 EAs and 77 categorical exclusions.
Can’t NEPA delays be solved with more community engagement?
The reduced delays associated with increased community engagement are a symptom of the flaws in the existing permitting process, not evidence that engagement can substitute for reducing veto points in the NEPA process.
Calls for mandating early engagement misunderstand why engagement reduces litigation: The current permitting process is a gauntlet of veto points, and developers and agencies have to placate potential obstructionists to avoid delays. Early engagement can help in a limited number of cases, because it helps avoid opposition on the back end of the process, but ultimately vocal minorities can still use existing veto points to block development. In other words, early engagement is the best way of navigating a flawed system, but it is not a solution for fixing the system itself.
Many important projects have been held up even after doing significant early community engagement:
- Vineyard Wind was sued four separate times, even after the project voluntarily created a community benefit agreement and won a local environmental group’s endorsement.
- The SunZia Transmission Line is being sued after 15 years of permitting and a voluntary community benefit agreement.
- The Cardinal-Hickory Creek transmission line has been repeatedly delayed by environmental lawsuits, even after the project went through eight years of permitting review and took significant steps to increase wildlife preservation on net.
- Cape Wind was killed by NIMBY lawsuits despite having almost 80% public support in Massachusetts.
Isn’t transmission permitting reform sufficient to reach U.S. decarbonization goals?
Transmission reform and permitting reform are both vital for a clean energy buildout. The two are necessary companions: clean energy will need NEPA streamlining alongside transmission reform. In fact, many transmission lines are held up by the NEPA process and ensuing litigation. Major transmission lines experience some of the longest NEPA delays, and can even be forced to complete multiple NEPA reviews. If reforms succeed and create federal backstop authority for large interregional transmission lines, virtually all large transmission projects will trigger NEPA. Furthermore, even if technologies like long-duration energy storage alleviate some of the demand on interregional transmission, permitting requirements like NEPA will still be triggered because many next-generation technologies rely on direct federal funding.
How big of a problem is NEPA, really?
NEPA has significant costs, both for developers and agencies. The federal government conducts substantive reviews (EAs or EISs) for virtually all actions affecting the built environment — roughly 12,000 substantive NEPA reviews per year. NEPA reviews create enormous uncertainty: Developers don’t know how long review preparation will take, whether political interference will hold up their review, or if litigation will repeatedly halt construction. NEPA also harms federal agencies by tying up staff time and forcing agencies to become risk-averse in order to avoid litigation.
Some have argued that NEPA isn’t a problem because 95% of NEPA reviews are categorical exclusions. But this argument is extremely misleading. The 95% figure is an artifact of regulatory accretion: although NEPA was meant to apply only to major projects, it now technically applies to every action the government takes, which massively inflates the total number of actions that get NEPA “reviews.” Categorically excluded actions include thousands of irrelevant or trivial actions like paying staff, collecting data, making arrests, hosting picnics, and executing financial transfers, yet all of these are counted when estimating how many actions are categorically excluded. The 95% statistic is thus misleadingly used to suggest that very few reviews are burdensome. In reality, virtually every federal project in the built environment requires a substantive NEPA review (i.e., an EA or EIS).
Can permitting reform accelerate projects without harming community input?
Yes, smart reforms are possible: Policymakers can reform NEPA to remove obstruction while simultaneously strengthening community input and environmental justice outcomes.
Despite being heralded as a tool for communities, NEPA litigation is most often conducted by public interest and business groups. A study from the Council on Environmental Quality shows that just 2.4% of NEPA lawsuits came from tribes, while nearly 50% came from public interest and business groups. Moreover, NEPA litigation requires plaintiffs to put up significant legal fees, tilting the process in favor of powerful stakeholders. For example, clean energy projects like Cape Wind have been blocked by wealthy landowners, despite broad local and regional support.
Environmental justice also requires spurring investments for marginalized communities. Streamlining NEPA reviews to remove legal obstruction can help accelerate the benefits of projects. Where fossil projects created costs for local communities, the clean energy buildout is an opportunity for win-win investment. Compared to polluting fossil fuels, solar and wind have comparatively few harms and can even help reduce pollution that disproportionately harms marginalized communities.
What is needed for environmental justice are policies like the energy communities tax credit bonus and the Justice40 Initiative, which incentivize better planning and steer projects towards just outcomes. Policymakers should build on these targeted policies and support opportunities for community benefit agreements that create win-win deals for communities and developers.
Policymakers can do two things simultaneously: Protect what NEPA does well by expanding opportunities for public engagement and incentives for community benefit agreements, while also removing legal obstruction and accelerating the delivery of important infrastructure.
Reform possibilities for NEPA
NEPA is ripe for reform. Instead of treating NEPA litigation as an indiscriminate veto point, policymakers should reconceptualize litigation as a check against lax reviews. Placing a time limit on injunctive relief would set a deadline for the judicial process but still offer plaintiffs an opportunity to be heard in court. Alternatively, increasing legal deference to agency decisions would maintain open challenges but ensure that only erroneous NEPA reviews get remanded.
Judicial reforms could be paired with reforms to strengthen community input. For example, policymakers could improve the public comment process by increasing opportunities for engagement during the planning and draft phases of the NEPA process. Policymakers could also consider reforms to encourage the use of community benefit agreements, to ensure affected communities benefit from socially valuable projects.
The National Environmental Policy Act has laudable goals of environmental consideration in planning and community input. But over time, agencies have come to prioritize avoiding litigation over weighing actual environmental tradeoffs, and the law’s costs are now hurting the clean energy transition. If policymakers want to strike a deal that balances community input with a more timely process, taking clear account of these costs is vital.
We Must End the Litigation Doom Loop
This piece was originally published in Slow Boring on April 28th.
A federal judge recently issued an injunction to block the approval of a power line that would have connected 161 renewable energy projects to the electric grid, providing more clean energy to consumers in Minnesota, Iowa, and Wisconsin. This is the second time this project, known as the Cardinal-Hickory transmission line, has been blocked by an injunction in the past five years.
The project’s legal travails illustrate a growing problem for our nation’s clean energy transition: an endless cycle of agency review and litigation holds up clean energy projects. This “litigation doom loop” not only halts projects in court, but also makes it difficult to convince investors to hazard the trillions of dollars necessary to build new infrastructure for a clean energy economy. To free new infrastructure from this paralyzing cycle of endless review, Congress must take bold action.
Unfortunately, most of its current proposals are far too timid to significantly speed up clean energy deployment.
The Cardinal-Hickory transmission line exemplifies the litigation doom loop. After completing the strictest, most comprehensive form of environmental review between 2016 and 2020, the line was approved jointly by four federal agencies. But after outside groups sued under the National Environmental Policy Act, a judge enjoined the approval and sent it back to the agencies for further review. Now, after a new plan was devised that would increase land for conservation, the project has again been enjoined. After thousands of pages, years of work, and untold labor, it’s clear that no level of review will be sufficient for the groups suing. The delay has real costs — each year of delay could add 150,000 to 2 million tons of carbon. Cardinal-Hickory is hardly the only project that has been stuck in the litigation doom loop: The Cape Wind project would have provided clean power to over 200,000 homes, but was stuck in litigation for more than 15 years before being shuttered.
This is an untenable situation. The Cardinal-Hickory line has essentially no end in sight — developers might turn back for another round of review, but what confidence will they have that good faith mitigations will be sufficient? Even worse, how might other developers view this episode when making their own investment decisions?
The transition to clean energy depends on quickly building hundreds of projects like the Cardinal-Hickory transmission line. The generational investments in the Inflation Reduction Act and the bipartisan infrastructure law were intended to support that goal. But a central challenge of the energy transition is that while traditional fuels such as oil and coal are easy to transport by existing infrastructure including roads, rail, and waterways, solar and wind power can only reach consumers if we build out new interstate power lines. Newer clean energy technologies like carbon capture and hydrogen also depend on new pipelines. Fundamentally, the energy transition depends on convincing investors to risk trillions of dollars on new, long-distance clean energy infrastructure, and ensuring it receives prompt permission to build.
Clean energy projects are the very projects most likely to get stuck in the litigation doom loop. A recent Stanford study found that clean energy projects are disproportionately subject to the strictest level of review. These reviews are also litigated at higher rates — 62% of the projects currently pending the strictest review are clean energy projects. Leading emissions modelers find that our emissions reduction goals are not achievable without permitting reform.
The good news? Congress is interested in reforming the broken permitting process. It’s on Senator Carper’s (D-DE) bucket list before he retires next year, and Senators Manchin (D-WV) and Barrasso (R-WY) are actively working on a bill. Unfortunately, Congress has thus far shied away from addressing the crucial roadblock illustrated by the Cardinal-Hickory saga: the litigation doom loop.
Congress has taken initial steps to speed up the federal government’s environmental reviews and is considering steps to speed up lawsuit filing, such as shorter statutes of limitation to challenge new projects. But none of those steps would prevent courts from holding up projects through years of litigation and repeated reviews. Investors will be less willing to risk millions for each clean energy project when they know that, no matter how much environmental review these clean energy projects undergo, courts can indefinitely prevent them from being built.
That is why we’re proposing a time limit on injunctions. Under our proposal, after four years of litigation and review, courts could no longer prevent a project from beginning construction. This solution would pair nicely with the two-year deadlines imposed on agencies to finish review in the Fiscal Responsibility Act. If the courts believe more environmental review is necessary, they could order the government to perform it, but they could no longer paralyze new energy infrastructure construction.
A Cheat Sheet for NEPA Judicial Reform
NEPA needs reform
Reforming judicial review is vital for fixing the National Environmental Policy Act, or NEPA. Litigation of NEPA reviews has become an avenue for abuse and obstruction. Lawsuits can target these procedural reviews to block the construction of important infrastructure projects. As federal agencies “litigation-proof” their environmental reviews, NEPA documents grow longer by the year, further delaying construction for major infrastructure projects. According to an estimate by former EPA general counsel Donald Elliott, roughly 90% of the details in NEPA reviews are included to preempt litigation.
This document assesses proposals for reforming the NEPA judicial review process. We evaluate these proposals for how well they would add certainty to the process, speed up agency reviews, and balance reform with the need for community input.
Reform mechanisms
Set a Time Limit on Injunctive Relief
Set a deadline on the length of time that courts are allowed to issue injunctive relief against a project. The time limit could start either at the beginning of review (at the Notice of Intent) and run for four years, or at the end of review (at the Record of Decision) and run for six months.
Make Courts More Deferential to Agencies
Congress could set higher standards for remanding NEPA permits. Different options include requiring the plaintiff to prove a majority of their claims against a NEPA document, to prove the issue at hand would have changed the agency’s final decision, or to prove the error in question affects a majority of the environmental impacts analyzed in the document.
Create a Permitting Appeals Board
Congress could create an administrative board to handle cases brought against federal permitting decisions. A board made up of five expert judges would concentrate expertise and ensure consistent rulings.
Have FPISC Mediate Lawsuits
This would give the Federal Permitting Improvement Steering Council (FPISC) authority to mediate cases where courts have determined a permit is deficient. FPISC mediation would determine a remedy to repair the permit, and further judicial challenges would not be allowed.
Consolidate Lawsuits by Project
Congress could establish a procedure to join all legal claims against a NEPA permit and expeditiously consider all the claims at once.
Limit Preliminary Injunctive Relief
Congress could disallow preliminary injunctive relief — a stay on construction while a case waits to be heard in court — unless plaintiffs file their challenges within 60 days of the publication of an approval under NEPA.
Expedite NEPA Challenges to Originate in the Court of Appeals for the D.C. Circuit
Congress could require that all NEPA challenges begin in the Court of Appeals for the D.C. Circuit, regardless of project or sponsor location.
Expedite NEPA Challenges to Courts of Appeals
The legal process would be shortened if Congress required that all NEPA challenges begin in the appellate court of jurisdiction, rather than at the district court level. This would mean only Courts of Appeals and the Supreme Court would have jurisdiction over NEPA cases.
Increase Requirements for Judicial Standing
Tougher standards to demonstrate judicial standing would limit the number of people eligible to file lawsuits. Proposals would require that a lawsuit be based on a public comment the plaintiff submitted during the NEPA process, and that plaintiffs be personally harmed by the project.
Shorten the Statute of Limitations
A shorter statute of limitations would condense the time period in which litigants can file new lawsuits against NEPA decisions. Proposals have ranged from three years to as short as 60 days.
Put Deadlines on Court Action
Congress would set deadlines for courts to review legal challenges to NEPA permits and issue decisions (e.g., six months). However, these deadlines are difficult to meaningfully enforce.
Evaluation criteria
Protects good-faith lawsuits?
Reforms should protect the opportunity for good-faith legal challenges to be heard in court.
Speeds judicial review?
Reforms should speed the legal process for reviewing a lawsuit against a NEPA permit.
Creates certainty for developers?
Reforms should create certainty in the permitting and judicial review process by lowering litigation risk and completing permits faster.
Is hard to circumvent?
Reforms should proactively cut off loopholes that obstructionists might use to evade reforms.
Reduces bad faith lawsuits?
Reforms should help lower the number of bad-faith lawsuits — legal challenges that are brought for the purpose of blocking a project, regardless of how thorough the review was.
Creates bright-line rules?
Reforms should be clear and unambiguous, and their effects should not be easily undermined via regulation or court interpretation.
Is harder to vacate?
Reforms should make it harder to win cases against a NEPA permit. Permits should only be vacated if agencies make significant errors.
Speeds agency reviews?
Reforms should empower federal agencies to speed their reviews by lowering litigation risk. This will help agencies meet their timelines established in the Fiscal Responsibility Act (one year for an EA, two years for an EIS).
Ends the “litigation doom loop”?
Reforms should unambiguously end the cycle of perpetual litigation and review.
The Talent Scout State
America is missing out on emerging talent
The success of American science depends on our ability to draw on the world’s talent and integrate those individuals into the world’s best scientific institutions. A longstanding U.S. superpower has been that more of the world’s globally mobile, high-skilled people come here than to any other country.
But while our immigration system already has numerous pathways for successful, accomplished people to come, it does not consistently identify and attract young people with high potential. Instead, our immigration system is designed to attract a specific set of immigrants: successful people well into their careers, with a demonstrated track record of achievement. We should allow foreign-born scientists and researchers to do their pioneering work here in the United States, instead of welcoming them only after they have been recognized for their achievements abroad.
As other countries begin to treat the competition for talent seriously, both allies and competitors have begun to roll out migration programs explicitly aimed at recruiting high-potential talent. These programs are already bearing fruit: talented migrants increasingly choose to move to other countries. For example, OECD data shows that the U.S. has become a less attractive destination for STEM international students, compared to other OECD countries like the UK and Germany, which now attract about the same number of STEM students as the U.S.[ref 1] Our immigration system is letting promising early-career scientists, engineers, and researchers slip through the cracks.
Every stage of the U.S. recruitment funnel has leaks, starting at the university level. Universities generally receive full tuition from international students and have little incentive to bring in the most promising individuals unless those individuals can afford to pay tuition. Universities naturally care most about how a student will benefit the university, not how the student will benefit the country or world. To put it bluntly, schools are incentivized to use foreign students as cash cows, rather than to recruit and educate individuals with the most promise for the United States. Agarwal and Gaulé, two economists studying “invisible geniuses” — teenage prodigies who do not end up contributing to global science — found that the United States’ scientific enterprise is losing out on top-tier talent like winners of the International Math Olympiad, a highly competitive international math competition. In their survey of IMO medalists, “66% dream of studying in the U.S., while only 25% manage to do so.”[ref 2]
Even when talented young people can attend U.S. universities, they are quite likely to be sent home after they graduate. The H-1B program, the primary visa pathway for skilled talent to stay in the United States after graduation, essentially outsources recruitment to employer sponsors. When industry incentives discourage the selection of immigrants who generate large social benefits, the U.S. misses out on those immigrants entirely.
If the number of H-1B petitions exceeds the cap set by the United States Citizenship and Immigration Services (USCIS), visas are awarded through a lottery. The lottery means no weight is given to an application for a beneficiary who shows promise to make important contributions. It is not surprising, then, that Beine, Peri, and Raux find that only about 20% of foreign-born master’s graduates stay after graduating from U.S. universities and only about 10% of bachelor’s do so.[ref 3]
The O-1 program is much smaller in scale, and is designed for individuals of “extraordinary ability.” However, eligibility is limited to individuals who have already made significant contributions to their fields — not individuals with high potential to make future contributions.
Other options also have serious drawbacks when it comes to recruiting and retaining high-potential individuals. The employment-based green card categories are plagued by growing backlogs, with projected wait times reaching decades or even centuries for applicants from India and China. The J-1 Early Career STEM Research Initiative is perhaps the one program designed for high-potential individuals, but it requires many participants to return to their home country for two years after the program is completed.
In short, our current system fails to proactively recruit high-potential individuals.
The case for emerging talent
Historically, scientific breakthroughs have come from the contributions of a relatively small number of highly productive scientists. The top 10% of scientists receive five times more citations throughout their careers than the other 90%.[ref 4] Outlier individuals make outsized contributions to their respective disciplines. As a result, it’s often worth the extra effort to find especially high performers with the potential to produce remarkable innovations. When these talents are not realized (sometimes called “lost Einsteins”),[ref 5] it is a loss for that individual and the world.
Not only does the country stand to gain if these superstars can build their careers here, but those careers will be vastly more productive than if they were built in other countries. Our knowledge networks, talent clusters, and scientific infrastructure are unmatched in the world.
As a case study, we can look at recent research on medalists of the International Mathematical Olympiad (IMO), and how much more productive they are in the U.S. Among equally talented IMO medalists, those who migrated to the United States were two to three times as productive as medalists who migrated to the UK, and six times as productive as medalists who stayed home.[ref 6]
The authors find that IMO scores are strongly predictive of future research productivity, indicating that it is feasible to make informed decisions about future potential. Further, they find that barriers to moving to the United States — mostly financial — blocked many medalists from offering their talents to U.S. science. If these prospective superstars are blocked from coming to the United States, we risk losing them to other countries, or, even worse, the world loses out on their potential altogether.
It is no surprise that the productivity of scientists is influenced by the people and institutions surrounding them. Research has consistently found that scientists working and living near one another improve each other’s productivity and effectiveness. Within these talent clusters, researchers can easily share theories, iterate on prototypes, receive feedback, and sharpen their ideas.[ref 7] Researchers located in hubs benefit from increased scientific collaborations and a higher concentration of resources, including funding.[ref 8]
New York and Silicon Valley are talent clusters: together, they host one-eighth of all STEM workers in the country, of whom 56% are foreign-born.[ref 9] But universities are another example of talent clusters. Many of the world’s top universities are in the U.S., and the Ivy League and large state universities are major beneficiaries of federal grant funding. 30% of academic research and development (R&D) spending occurs within the top 20 institutions, and about 80% is concentrated within the top 100.[ref 10]
When STEM talents come to regions where the most knowledge is being produced (universities, research labs, etc.), they can both perform better work and make the cluster itself work better.[ref 6] High-skilled migrants do not just do better work in a cluster than they would have done abroad; clusters make the researchers they collaborate with more productive. Not only do they bring new knowledge, but the exchange of ideas between native researchers and newcomers generates new ideas that neither party would have had on their own.
America already recruits many promising individuals through our world-class university system. But many of them can’t stay, especially if they want to do work outside academia, where H-1B visas are not cap-exempt. If they do stay, visa restrictions limit their ability to commercialize their work or use it as the basis to launch new entrepreneurial ventures. And most disturbingly, our immigration system prioritizes demonstrated achievement, limiting our ability to recruit young and early-career talent in the first place.
It does not have to be this way. The fat tail of scientific impact suggests that the U.S. has much to gain if we can better identify and recruit future talent. But while the United States has rested on its laurels, other countries have begun experimenting with recruitment based on promise.
The UK introduced the High-Potential Individual (HPI) visa in May 2022.[ref 11] The visa enables recent foreign undergraduate degree holders from the top 50 international universities to stay in the UK for a minimum of two years without requiring job sponsorship.
Unlike the H-1B visa in the United States, which is tied to an employer and severely constrains the flexibility of beneficiaries to launch new startups, UK HPI visa holders have the freedom to engage in entrepreneurship and costlessly change employment.
Also in the UK, there’s the Global Talent Visa, launched in February 2020 and designed for leaders and “potential leaders” in academia, research, technology, and the arts.[ref 12] One pathway is for individuals already recognized for excellence in their fields; Turing Award winners, Fields Medalists, and Nobel Prize winners fit this profile. This pathway is analogous to the O-1A in the United States. The second pathway has no American analog: applicants who are not already leading their fields can be endorsed by leaders in the field who attest that the applicant has high potential.
The UK is not the only country with a newfound interest in recruiting based on potential. In April 2023, Japan started the Future Creation Individual Visa (J-Find) program for graduates from the top 100 universities, according to established rankings such as QS, Times, and the Shanghai Jiao Tong University’s Academic Ranking of World Universities.[ref 13] Holders of this visa can stay in Japan for up to two years, whereas graduates in the previous system had only 90 days to find work.
The United States can join the UK and Japan in a proactive approach to high-potential global talent. Below, we discuss how the United States can fill this major gap in our immigration system and grow its lead in the competition for talent.
Recommendation 1: Streamline and scale existing pathways
Perhaps the lowest-hanging fruit is to scale the one U.S. immigration program explicitly focused on early-career STEM talent: the Early Career STEM Research Initiative at the Department of State’s Bureau of Educational and Cultural Affairs. Launched in the summer of 2021, it aims to pair the organizations authorized to sponsor J-1 visas and run cultural exchange programs with companies who can host STEM opportunities, including research.
As an uncapped visa, the J-1 has significant potential to be scaled into an important pathway for young STEM talent. However, the J-1 is limited in duration and, in many cases, subjects participants to the requirement that they return home for at least two years after the conclusion of their program. This means that many participants are kicked out once they complete the program, and untold numbers of individuals who want to stay in the United States long-term are deterred from the program altogether. Fortunately, there are concrete steps to transform this visa into a predictable on-ramp for STEM talent.
First, we can significantly reduce the number of people who are subject to the two-year home residency requirement by updating the J-1 Exchange Visitor Skills List, the list of countries and skills which subject people to the requirement. The Biden administration has undertaken steps to start this process. As part of its campaign to attract AI talent to the United States, the AI Executive Order signed in October 2023 instructs the State Department to consider establishing criteria to determine the contents of the Skills List and to consider publishing updates.[ref 14] By making it more feasible to stay after the Early Career STEM Research Initiative, we can retain more participants and make the program more attractive in the first place.
Second, for those still on the Skills List, the U.S. government could streamline the process for granting Interested Government Agency Waivers. The Department of Defense Research & Engineering has a formal process, including a webpage with a description of the application process, an application checklist, and a sample sponsor letter.[ref 15] Other government agencies (natural candidates are the NSF, NIH, DOE, NIST, and NASA) could follow this example and establish formal processes with published eligibility considerations, replacing the current ad hoc and opaque approach with something predictable, transparent, and efficient.
The J-1 Early Career STEM Research Initiative shows the U.S. is thinking about young talent. A few tweaks would allow it to be scaled into a formidable program.
Recommendation 2: Experiment with talent identification
If the United States, or any country, is to successfully identify and recruit individuals with high potential, it must be able to answer a basic question: what observable characteristics can the government rely on today to predict future success?
The academic literature addressing this question is growing, but the field is still nascent. Agarwal and Gaulé’s research on IMO participants found that students who performed particularly well published papers at higher rates and were more productive than their counterparts in their respective fields. Each additional point on the IMO was correlated with a 2.6% increase in mathematics publications and a 4.5% increase in citations.[ref 16] The IMO is relatively small, but the paper demonstrates some key points: we already know some predictors of success, and evidence can be leveraged to help identify other predictors. Funding research and pilot programs to further this literature would give practitioners a base of knowledge to use to experiment with real-world talent identification.
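As a toy illustration of what per-point effect sizes of this kind can imply, the sketch below compounds a hypothetical 2.6% per-point increase in publications across several score gaps. Treating the effect as multiplicative per point is an assumption made here purely for illustration, not a claim about the study's exact statistical specification.

```python
# Toy compounding illustration (assumptions only, not the study's model).
PER_POINT_EFFECT = 0.026  # hypothetical 2.6% increase per IMO point

for score_gap in (5, 10, 20):
    multiplier = (1 + PER_POINT_EFFECT) ** score_gap
    print(f"A {score_gap}-point gap implies roughly "
          f"{(multiplier - 1) * 100:.0f}% more publications under this assumption")
```

Even small per-point differences, compounded over the wide range of IMO scores, translate into large differences in expected output.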
The fastest way of developing knowledge will simply be through practical experimentation. Allowing a variety of talent identification methods to compete will enable a diversity of approaches to be compared, evaluated, and combined. Of course, experiments should not focus excessively on optimizing a series of metrics at the expense of risk-taking on high-potential individuals, but they should allow for the analysis of alternative identification approaches. Such methods could include both metrics or tests that identify potential talent, and reliance on the discretionary judgment of experts. The ultimate goal would be to try different approaches, track outcomes, and improve over time.
Here are some natural places where lawmakers could begin experimenting with talent-scouting approaches:
- Use outside recruiters. In the UK, the Global Talent Visa offers visas to those who can get three letters of recommendation from eminent experts attesting to their promise and potential. By contrast, the O-1 offers visas to those who, in addition to meeting other criteria, get letters of support attesting to significant contributions already made to the field. Reforming the O-1 so that it allows experts to testify to extraordinary potential — or offering a new pathway — would allow the government to outsource and decentralize some of its scouting to those most qualified.
- H-1B selection. Reforming H-1B selection could provide a great opportunity for serious experimentation, given that the number of available visas would allow for large sample sizes. Numerous proposals by members of Congress, previous presidential administrations, and think tanks would move the H-1B away from a lottery and towards a system that prioritizes applicants along some dimension of talent or skill. These proposals have included preference systems, salary-based rankings, and DOL wage-level rankings, among other ideas. However, more creative points-based systems could also be designed to give preference to high-potential talent (a minimal sketch contrasting the lottery with ranked selection appears below).
- Embed Talent Scouts in the State Department and Department of Defense. The Office of Science and Technology Cooperation in the State Department would be a natural fit for talent scouts housed with U.S. consulates abroad. For example, through the Global Innovation through Science and Technology (GIST) Initiative, State Department officials have engaged with millions of science and technology entrepreneurs worldwide. Embedding talent scouts to identify promising candidates and inform them about immigration opportunities would likely allow us to recruit more of them. Empowering them with the authority to make referrals to USCIS for visas could multiply the potential still further. Such officials could also be tasked with engaging with International Mathematical Olympiad (IMO) talent by establishing partnerships with participating countries, as well as with other international competitions.
The Department of Defense would be another natural place for talent scouts. This would be nothing new for the national security community. During Operation Paperclip at the end of the Second World War, the national security community shared dossiers on foreign scientists to identify critical experts and engineers who should be exfiltrated from Europe to work on U.S.-based defense-related projects. As great power competition returns, this model may need to be dusted off and brought back into play. On a less ambitious scale, the National Security Innovation Pathways Act proposed to authorize the Department to identify critical technology experts who could receive green cards. This could be structured in a way that gave the DoD resources to evaluate individuals and figure out the particular experts they need — and to experiment.
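As referenced in the H-1B selection item above, the sketch below contrasts the current random lottery with a simple ranked-selection alternative that orders registrants by offered wage. The applicant pool, numbers, and the choice of wage as the ranking signal are hypothetical; actual proposals differ in their criteria and weighting.

```python
import random

# Hypothetical applicant pool: (applicant_id, offered_annual_wage_usd).
applicants = [("A", 95_000), ("B", 160_000), ("C", 120_000),
              ("D", 210_000), ("E", 88_000)]
visas_available = 2

# Current approach: a pure random lottery among registrants.
lottery_winners = random.sample(applicants, k=visas_available)

# One proposed alternative: rank registrants by a chosen signal
# (here, offered wage) and select from the top.
ranked_winners = sorted(applicants, key=lambda a: a[1], reverse=True)[:visas_available]

print("Lottery picks:", [a[0] for a in lottery_winners])
print("Wage-ranked picks:", [a[0] for a in ranked_winners])
```

A points-based variant would replace the single wage signal with a weighted score across several indicators, but the selection step itself would look much the same.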
There may be a science to talent identification, but there’s also an art. Success at proactive talent identification cannot happen without individuals actually scouting talent. Congress should get the experiment running.
Recommendation 3: Leverage the non-profit sector
Beyond government, the private and non-profit sectors can play important roles in recruiting and identifying talent.
First, private universities can do a better job of ensuring that the U.S. scientific enterprise is not missing out on some of the most promising minds around the world who come from poor backgrounds. While the Ivy League and similar elite institutions offer premier education, prestige, and research opportunities, these opportunities remain largely inaccessible for non-wealthy internationals due to financial barriers. When most of our universities would reject a modern-day Ramanujan, something is broken.
Agarwal and coauthors identify financial constraints as the key reason International Math Olympiad winners who want to study in the United States don’t do so. Estimates indicate that reducing financing constraints for top overseas talent could raise worldwide scientific output in future cohorts by 42%.[ref 17]
Many schools have to decline admission to qualified international students due to budgetary constraints. Private universities can address some of this. Only seven U.S. institutions, mostly private research universities, are currently need-blind and meet the full demonstrated need of international students; they include Yale, Dartmouth, Harvard, MIT, and Princeton. Other private research universities should join them.
Philanthropic institutions can also play a role in identifying talent with high potential and lowering the financial constraints holding them back from contributing to the United States. In October 2023, The Global Talent Lab launched the “Backing Invisible Geniuses” (BIG) UK Programme in partnership with the UK Department for Science, Innovation and Technology.[ref 18] BIG provides financial support and a network for top-ranking International Science Olympiad competitors to study at top UK universities. The program supports students who have excelled in STEM, but who otherwise could not attend selective universities in the UK.
Additionally, Schmidt Futures’ Rise scholarship serves a similar goal of identifying talented young people, including international students, and financially supporting their studies at top universities. Rise is one of several philanthropy-funded scholarship programs, including the Mastercard Foundation Scholarship, directed towards African nationals,[ref 19] and the Open Philanthropy Undergraduate Scholarship, exclusive to international students intending to study at top universities in the U.S. and UK.
Philanthropies could also support participation in math and science competitions abroad to help with identification. They can increase international participation in Olympiads, help build newer regional ones like the Pan African Mathematics Olympiad, and start new competitions modeled on the Siemens Competition, the Regeneron Science Talent Search, the International Science and Engineering Fair, and the Google Science Fair.[ref 20]
Historically, U.S. success in scientific and technological innovation has been heavily reliant on the country’s unparalleled ability to attract and integrate the world’s brightest minds. Decentralized recruitment has allowed the United States to make use of the diffuse networks of its employers, universities, and other institutions. However, relying on these organizations alone for recruitment and sponsorship has also let their interests dictate the selection of migrants.
As a result, the U.S. is missing out on high-potential immigrants who will not immediately benefit a sponsor. And it is operating well below its potential to use these talented individuals to drive U.S. science and innovation. Addressing this gap requires overhauling outdated immigration policies to create pathways that value potential as much as past success.
Confronting this challenge is not merely about maintaining a competitive stance or gaining a strategic advantage. It’s about unlocking the potential of the world’s talent, thereby advancing science and research not just for the United States, but for the world.
Moving Past Environmental Proceduralism
This article originally appeared in Asterisk Magazine, Issue 5: “Mistakes.”
1970 was a landmark year for environmental legislation. While the U.S. environmental movement had been building for decades, a series of industrial catastrophes in the late 1960s coupled with growing awareness of the impacts of pollution turned the natural world into a top policy priority.
As it was not yet a politically polarized issue, Democrats and Republicans alike jockeyed to prove how much they cared. The result was, in the words of environmental law professor Zygmunt J.B. Plater, “a parade of regulatory statutes…the likes of which we will probably never see again.” Congress passed the National Environmental Policy Act in 1969, the (second) Clean Air Act in 1970, the Clean Water Act in 1972, and the Endangered Species Act in 1973. In 1970, Richard Nixon created the Environmental Protection Agency through an executive reorganization plan. This raft of policies profoundly reshaped our society, and specific pieces contributed to the restoration of the natural world.
But what activists and legislators at the time saw as the most important of these new laws contributed relatively little to the way America tackled the greatest environmental challenges of the last half century. In many of the most notable successes, like cleaning up the pesticide DDT or fixing the hole in the ozone layer, what moved the needle were “substantive” standards, which mandated specific outcomes. By contrast, many of the regulatory statutes of the late 60s were “procedural” laws, requiring agencies to follow specific steps before authorizing activities.
Today, those procedural laws make it much harder to build the new infrastructure needed to avert climate change and decarbonize. The laws created by the environmental movement now harm the environment. So how did we get here? What could the environmentalists of the 60s and 70s have done differently? And what does that tell us about environmental regulation going forward?
How we got here
The turn-of-the-century conservationists largely accepted a distinction between “sacred” and “profane” lands. Inspired by figures like Henry David Thoreau, who wrote that “In Wildness is the preservation of the World,” the movement aimed to keep certain lands wild in perpetuity, while encouraging industrial development in the urban core.
But in the post-war world, heavy industry and conservation began to appear fundamentally opposed. Use of pesticides greatly expanded. Smog billowed out of cities and covered vast swathes of the country, both sacred and profane. Silent Spring, published by Rachel Carson in 1962, drew attention to the negative effects of DDT, showing that the impact of toxic chemicals could ripple across the entire food chain. Carson’s book became an unexpected hit, selling more than a million copies in just two years.
Where the original conservationists were motivated to protect wilderness far from civilization, the new environmentalists saw even urban landscapes being tarnished. David Brower, the prominent environmentalist and executive director of the Sierra Club from 1952–1969, was radicalized by changes to the surroundings of the San Francisco Bay area, where he grew up. “We built here for the view of San Francisco Bay and its amazing setting,” he said in 1956. “But today there is no beautiful view; there is hideous smog, a sea of it around us. ‘It can’t happen here,’ we were saying just three years ago. Well, here it is.” John Hamilton Adams, the first director of the Natural Resources Defense Council (NRDC), left his role as Assistant U.S. Attorney for the Southern District of New York after watching raw sewage float down the Hudson River.
In 1960, a Gallup poll showed that just 1% of Americans saw “pollution/ecology” as an important problem. By 1970, 25% did. Rising environmental concern merged with a national climate of protest and activism. But although Silent Spring drew massive attention to environmental issues, newly minted activists struggled to change laws. Five years after the book’s publication, DDT remained legal in every state in the U.S. — so activists tried a different approach.
In 1967, attorney Victor Yannacone and his wife sued the Suffolk County Mosquito Control Commission for polluting Lake Yaphank in New York with DDT. Although this kind of suit was considered “just this side of bomb throwing” in polite circles at the time, Yannacone called litigation “the only way to focus the attention of our legislators on the basic problems of human existence.” He ultimately lost in court, but he did secure a temporary injunction stopping the Mosquito Control Commission from using DDT, and the commission later decided to end its use. Legal pressure could bring results.
Yannacone’s strategy to “sue the bastards” became a rallying cry among activists. Yannacone would go on to co-found the Environmental Defense Fund (EDF), which was quickly joined by the NRDC and the Sierra Club as environmental organizations focused on winning battles in the courts. But winning cases proved difficult. It was often hard to find litigants who had suffered specific harms and had the evidence to prove it.
Meanwhile, major environmental incidents kept piling up. In 1969, the Cuyahoga River caught fire in Cleveland, and a blow-out on an offshore platform caused a huge oil spill off the coast of Santa Barbara, galvanizing the public with images of burning rivers and oil-covered seabirds. Congress and President Nixon faced intense pressure to do something. When Senator Gaylord Nelson introduced a bill to outlaw the use of DDT in 1965, he had failed to find a single co-sponsor. By 1970, 8,000 environmental bills were introduced in a single congressional session. The most significant of these new laws was actually passed the previous year: the National Environmental Policy Act, often referred to today as the “Magna Carta” of environmental law.
NEPA’s stated goal was to “encourage productive and enjoyable harmony between man and his environment, to promote efforts which will prevent or eliminate damage to the environment and biosphere and stimulate the health and welfare of man; [and] to enrich the understanding of the ecological systems and natural resources important to the Nation.” The courts found this sweeping language too vague to enforce, and federal agencies ignored it.
But a last-minute addition to the bill gave environmental activists an unexpected boon. The “environmental impact statement” provision imposed a strict procedural requirement: federal agencies had to consider the environmental effects of any major action and produce a “detailed statement” of the likely effects. The requirement wouldn’t itself stop agencies from acting, merely delay them until they completed their detailed statement. It was added to NEPA at the suggestion of Lynton Caldwell, an advisor to the bill’s sponsor, Henry “Scoop” Jackson. Caldwell believed that without some “action forcing mechanism,” the high-minded ideals that NEPA espoused would amount to little.
This requirement originally received little notice. It was not covered in any major media publication. In Congress, it received “neither debate, nor opposition, nor affirmative endorsement.” Caldwell would later state that “most [members] had never really understood the bill and only agreed to it because it was from Jackson; it was about the environment which was a very ‘hot’ issue at the time; and it was almost Christmas and they wanted to get home.”
Not until several months after NEPA was passed did environmental groups realize what a potent weapon they’d been handed. By forcing disclosure of the potential negative environmental effects of an action, NEPA sparked public opposition to projects that might otherwise have gone unnoticed. Agencies increasingly found it politically difficult to take actions that might harm the environment. Activists were able to force agencies to consider ever more environmental effects, and to stall projects until they did so.
The activist courts of the 1970s expanded NEPA’s remit. In a landmark 1971 decision, Calvert Cliffs Coordinating Committee v. Atomic Energy Commission, the court ruled that NEPA’s procedural requirements “established a strict standard of compliance.” A government agency couldn’t simply produce a statement of environmental effects and then stick it in a filing cabinet. In what became known as the “hard look” doctrine, the statement had to be sufficiently integrated into the decision-making process to satisfy the courts.
The co-founders of the Natural Resources Defense Council would later say that “the importance of NEPA cannot be overstated” and that NEPA was “the core of our work.” Procedural laws like NEPA remain central to modern environmental activism. However, many of the new environmental laws passed in the 1970s were not procedural but substantive: instead of establishing a process that must be followed, they regulate what people and organizations are allowed to do. These include the Clean Air Act, which set specific limits on the allowable levels of several airborne contaminants, and the Clean Water Act, which restricts the discharge of pollutants into surface waters. Procedural and substantive laws represent very different approaches to environmental regulation — and in the following decades, they’ve had very different consequences.
The pros and cons of proceduralism
There are practical reasons for procedural laws to dominate environmental regulation. Proceduralism is a flexible tool that can be adapted to circumstances as needed. The uncertain nature of science, and the time it takes to observe potential long-term effects, mean that regulations can take years to author. Meanwhile, industry introduces thousands of new chemicals every year, and constantly changes the formulation of older ones.
By contrast, substantive environmental laws that regulate specific actions or pollutants may have a hard time keeping up as the world changes. When the EPA began to write regulations for 65 toxic water pollutants, the process took 11 years. Afterward, it was forced to write another round of regulations for toxic chemicals that had been introduced in the interim. And while substantive laws are often reactive, addressing problems as they are revealed, procedural laws force agencies to address environmental impacts that could happen in the future.
However, this very flexibility is the weakness of procedural regulation. Because it can be used against anything, laws like NEPA end up being used against everything. Every project faces some opposition, and laws like NEPA empower opponents at the expense of everyone else who would benefit. Because NEPA’s provisions are purely procedural, it can’t be used to stop projects outright. But it can slow them down (by requiring ever-more detailed environmental impact statements) to the point where it becomes infeasible to pursue them, a strategy on which activists have increasingly relied. The Forest Service estimates that some of its environmental reviews cost a million dollars each.
The fact that NEPA could be leveraged to block almost any new construction suited the increasingly “anti-growth” wing of the 1960s environmental movement. The 1968 book The Population Bomb, which argued for forced sterilization to limit population growth, was written at the suggestion of David Brower. In the 1970s, activists began to oppose new housing development in California and other states on environmental grounds. The co-founder of Greenpeace International, Rex Weyler, would go on to become an advocate of “degrowth,” a movement dedicated to “reduction in production and consumption” in the name of environmental justice.
In some cases, the consequences of long reviews have been richly ironic. Delays in the NEPA process for the prescribed burning of the Six Rivers National Forest resulted in the wildfire that the prescribed burning was meant to prevent. And as more procedural lawsuits take place, procedural requirements get ever stricter, increasing the permanent tax on new building and government actions.
This procedural burden weighs especially heavily on new players and novel technologies that haven’t had time to work the system. Oil and gas drilling companies, for instance, have been granted several NEPA exceptions that reduce the requirements for things such as drilling exploratory wells. Geothermal energy, on the other hand, does not receive such exceptions, even though it involves drilling wells like those for oil and gas. Procedural requirements, like most regulations, heavily favor the incumbents.
Stopping climate change requires building hundreds of billions of dollars of new infrastructure. Procedural regulation makes that task far more difficult. Going forward, we can draw on lessons from some of the biggest environmental wins of the last fifty years. A broad range of successes has been built on substantive regulations: from the elimination of leaded gasoline to the end of acid rain and the closing of the ozone hole.
Lead poisoning
Multiple studies show a relationship between lead exposure and higher crime rates. Lead poisoning also causes birth defects and slows growth and development. After the Flint water crisis, fertility rates in the city decreased by 12% and birth weights fell by 5.4%. In Rhode Island, remediation of housing to remove lead paint accounted for roughly half of the decline in the black-white test score gap over the period studied.
One of the EPA’s first major actions, under authority granted by the Clean Air Act of 1970, was to mandate the phase-out of lead in gasoline. In 1973, the EPA required refiners to begin producing unleaded gasoline, and new vehicles beginning with model-year 1975 were built to run only on unleaded fuel.
The EPA also introduced a unique trading program to help smaller refineries manage the high costs of compliance. Traditional cap-and-trade systems allocate a fixed number of permits tied to emissions, incentivizing producers to find cleaner substitutes. Instead, the EPA allowed refineries to earn lead credits by producing gasoline with lower lead content than the upper threshold permitted. The credits could be sold or banked for future use. This approach incentivized refineries to reduce lead content sooner than required, facilitating a quicker phase-out of leaded gasoline. The program both achieved its environmental goals and proved economically efficient, encouraging the development of cost-saving technologies and demonstrating that a tradable emission rights system was viable.
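To make the banking mechanism concrete, here is a minimal Python sketch of rate-based crediting in general: a refinery that beats the allowed lead content earns credits in proportion to its production volume, and can bank them or sell them. The refinery, volumes, lead concentrations, and limit below are hypothetical illustrations, not the actual values or accounting rules the EPA used.

```python
# Illustrative sketch (hypothetical numbers): rate-based lead credits accrue
# when a refinery produces gasoline below the allowed lead content.
from dataclasses import dataclass

@dataclass
class Refinery:
    name: str
    banked_credits: float = 0.0  # grams of lead "saved" and banked for later use

    def produce(self, gallons: float, lead_g_per_gal: float, limit_g_per_gal: float) -> float:
        """Earn credits equal to production volume times the gap between the
        regulatory limit and the actual lead content of the gasoline produced."""
        credits = gallons * (limit_g_per_gal - lead_g_per_gal)
        self.banked_credits += credits
        return credits

# A refinery that beats the standard banks credits it could later sell or draw
# down while it invests in reformulating its gasoline.
small_refinery = Refinery("small")
earned = small_refinery.produce(gallons=1_000_000, lead_g_per_gal=0.8, limit_g_per_gal=1.1)
print(f"Credits earned this period: {earned:,.0f} grams of lead; "
      f"banked total: {small_refinery.banked_credits:,.0f}")
```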
Simultaneously, the Lead-Based Paint Poisoning Prevention Act was passed in 1971, prohibiting the use of lead-based paint in federal buildings and projects. By 1978, the Consumer Product Safety Commission banned the sale of lead-based paint for use in residences and in products marketed to children. As a result of these policy efforts, the average lead concentration in the blood of children in the U.S. fell by 95% between 1978 and 2016.
DDT and CFCs
DDT (dichlorodiphenyltrichloroethane) is a synthetic chemical compound that was widely used as an insecticide. It was first synthesized in 1874, but its effectiveness as an insecticide was not discovered until 1939. DDT was extensively used during World War II to control malaria and typhus among civilians and troops. Following the war, it became widely used in agriculture as a pesticide. The chemical compound proved so useful that its discoverer — a Swiss scientist named Paul Müller — was awarded a Nobel Prize in 1948.
But as scientific evidence of DDT’s toxicity to humans and wildlife continued to mount, Nixon’s EPA began a review of DDT’s use. After a lengthy set of hearings and legal battles, EPA Administrator William Ruckelshaus announced in 1972 the cancellation of all DDT use in the United States, except for emergency health-related uses and certain other exceptions.
While DDT was a significant local pollutant, the world faced a truly global threat during the ozone depletion crisis in the late 20th century, caused primarily by the release of chlorofluorocarbons (CFCs) and other ozone-depleting substances (ODS). These chemicals, found in products like refrigerants and aerosol sprays, would rise to the stratosphere when released. There, they were broken down by ultraviolet (UV) radiation, releasing chlorine atoms that destroyed ozone molecules. The ozone layer is crucial for life on Earth as it absorbs most of the Sun’s harmful UV radiation.
In 1974, scientists Mario Molina and Sherwood Rowland demonstrated the damaging impact of CFCs on the ozone layer, leading to increased scientific investigation and public concern. The discovery of the Antarctic ozone hole in 1985 alerted the world to the scale of the problem and prompted an unprecedented international response.
The primary global effort to combat ozone depletion was the Montreal Protocol, adopted in 1987. This international treaty aimed to phase out the production and consumption of ODS, including CFCs. It has been signed by 197 countries and is the first treaty in the history of the United Nations to achieve universal ratification. The agreement has undergone several amendments to include new substances and accelerate phase-out schedules. Thanks to the Montreal Protocol, the levels of ODS in the atmosphere have significantly decreased, and the ozone layer is gradually recovering.
Acid rain
In 1980, Canada’s Environment Minister John Roberts called acid rain “the most serious environmental threat to face the North American continent.” Acid rain is a type of precipitation with high levels of sulfuric and nitric acids, resulting from the atmospheric reactions of sulfur dioxide (SO₂) and nitrogen oxides (NOx), which are emitted by burning fossil fuels. These pollutants react with water vapor and other substances in the atmosphere to form acids. When these acidic compounds fall to the ground with rain, snow, or fog, they can damage forests and aquatic ecosystems and erode buildings and monuments.
In the U.S., the significant reduction of sulfur dioxide (SO₂) pollution can be largely attributed to the Clean Air Act Amendments of 1990 (CAAA), which introduced the Acid Rain Program, a pioneering market-based cap-and-trade system targeting SO₂ emissions from electric power plants. Title IV of the CAAA set an explicit substantive goal: “reduce the adverse effects of acid deposition through reductions in annual emissions of sulfur dioxide of ten million tons from 1980 emission levels, and… of nitrogen oxides emissions of approximately two million tons from 1980 emission levels.” Under this system, power plants were allocated a certain number of emission allowances. Plants that succeeded in reducing their emissions below their allowance levels could sell their excess allowances to others. This market-based approach created a financial incentive for emission reductions.
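A toy example can show why trading a fixed pool of allowances lowers the cost of hitting the same cap. The Python sketch below uses two hypothetical plants with invented emissions, allowance allocations, and abatement costs; it illustrates the general cap-and-trade logic, not the actual Acid Rain Program rules or prices.

```python
# Minimal sketch (hypothetical plants and costs): trading a fixed pool of SO2
# allowances lowers the total cost of meeting the same emissions cap.
plants = {
    # emissions (tons), allowances (tons), marginal abatement cost ($/ton)
    "coal_plant_A": {"emissions": 120_000, "allowances": 100_000, "abatement_cost": 150},
    "coal_plant_B": {"emissions": 120_000, "allowances": 100_000, "abatement_cost": 600},
}

cap = sum(p["allowances"] for p in plants.values())
required_cut = sum(p["emissions"] for p in plants.values()) - cap  # tons to abate overall

# Without trading, each plant must cut its own emissions down to its allowance.
cost_no_trade = sum(
    (p["emissions"] - p["allowances"]) * p["abatement_cost"] for p in plants.values()
)

# With trading, the cheapest abater makes the whole cut and sells its spare
# allowances to the expensive plant; total emissions under the cap are unchanged.
cheapest = min(plants.values(), key=lambda p: p["abatement_cost"])
cost_with_trade = required_cut * cheapest["abatement_cost"]

print(f"Cap-wide reduction required: {required_cut:,} tons of SO2")
print(f"Compliance cost without trading: ${cost_no_trade:,}")
print(f"Compliance cost with trading:    ${cost_with_trade:,}")
```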
Low-cost technological advancements played a crucial role in enabling companies to meet the SO₂ emission cap. The widespread installation of flue-gas desulfurization units (commonly known as scrubbers) in power plant smokestacks effectively removed SO₂ from emissions before their release into the atmosphere. According to the EPA, the most common type of desulfurization unit is the “wet scrubber,” which removes more than 90% of SO₂. This technology, alongside a shift towards the use of low-sulfur coal, significantly contributed to the reduction in emissions. Increasing reliance on natural gas and renewable energy sources also helped reduce sulfur dioxide emissions.
The reduction of nitrogen oxide emissions was also achieved through a combination of legislative actions and technological advancements. The Clean Air Act Amendments of 1970, 1977, and 1990 set stringent national air quality standards and required significant reductions in NOx emissions from power plants, industrial facilities, and vehicles. The NOx Budget Trading Program in the eastern U.S. began in 2003 and employed a cap-and-trade system similar to the one used for sulfur dioxide. This program set a cap on total NOx emissions and allowed entities to trade emission allowances, incentivizing reductions. Other factors also drove down NOx emissions, including the introduction of catalytic converters in cars and trucks in 1975, and the transition to cleaner fuels, like unleaded gasoline and ultra-low sulfur diesel.
Technological innovations to reduce emissions from stationary sources of air pollution have been pivotal in reducing NOx emissions. The adoption of selective catalytic reduction and selective non-catalytic reduction systems in power plants and industrial settings has effectively decreased NOx emissions by chemically transforming them into nitrogen and water. Additionally, the development and use of low-NOx burners and improved combustion techniques have minimized emissions at the source.
According to a 2005 cost-benefit analysis in the Journal of Environmental Management, the U.S. Acid Rain Program’s benefits were valued at $122 billion annually, far outweighing the costs — around $3 billion annually for 2010, less than half of the initial 1990 estimates.
Conclusion: reducing PM2.5 and greenhouse gas emissions
We need a new era of environmentalism that learns from the successes and failures of the past. Environmentalists rightly tout triumphs over acid rain, ozone depletion, DDT, and lead exposure. But these wins were not the result of preparing ever longer environmental impact statements for specific projects. They were the product of putting a price on pollution, via cap-and-trade programs, or outright banning a pollutant when necessary.
We’re already seeing some of these lessons applied to the fight against climate change. According to the World Bank, in 2023, 73 carbon pricing initiatives around the world covered 11.66 gigatons of CO2 equivalent, or 23% of global greenhouse gas emissions. The U.S. already has two major cap-and-trade programs for greenhouse gases: one in California, and a cooperative initiative among a dozen Eastern states called the Regional Greenhouse Gas Initiative. And even though carbon pricing is politically unpopular at the national level, policymakers in search of revenue-raising policies may come to see it as the least bad option in an era of high inflation and rising interest rates.
We need to build on the clean technology investments across the Inflation Reduction Act, the Infrastructure Investment and Jobs Act, and the CHIPS and Science Act. In terms of permitting reform, policymakers can go further than the changes in the Fiscal Responsibility Act earlier this year, which mandated shot clocks and page limits on reviews. A strict time limit on judicial injunctions would give project developers certainty that the review process will eventually come to an end. Bad-faith opponents seeking to kill a project through a “delay, delay, delay” strategy would be stripped of their power.
These reductions in environmental proceduralism could be paired with increases in substantive environmental standards. In recent years, economists and public health experts have raised their estimates of the costs of local air pollution to human health and productivity. A deal that puts a price on greenhouse gases and tightens the National Ambient Air Quality Standards for fine particulate matter (PM2.5), while streamlining the permitting process to build new infrastructure, would be a worthy environmental policy regime for the challenges we face today.
There’s no better illustration of the problems of proceduralism than the attempt to build a “green” paper mill (one that would produce recycled paper without using chlorine, bleach, or other toxic chemicals) in New York City in the 1990s. The project was led by the NRDC, the organization formed in 1970 specifically to defend the environment by filing lawsuits, particularly against violations of NEPA. Its goal was to “show industry that sustainability was compatible with the pursuit of profit.”
But the project ultimately collapsed due in part to the very forces that NRDC had helped create. The leader of the project, Allen Hershkowitz, found himself spending nearly all his time trying to obtain permits for the project, a situation which the NRDC found ironic:
“We had worked hard for years to create strict rules governing things like clean air and clean water, and now we had to jump through all the same regulatory hoops as someone who wanted to open a hazardous waste dump,” NRDC cofounder John Adams noted in his history of the NRDC.
NRDC also found itself opposed by local clean air activists who objected to the construction of a large factory on the grounds that it would increase neighborhood pollution, a charge NRDC found “baseless” and “especially painful.”
In the end, NRDC was unable to navigate the morass of getting funding, community support, permits, and political approval. The final straw came when the NRDC was sued by the NAB Construction Company for $600 million for breach of contract. The NRDC, which had spent its entire existence filing lawsuits to prevent environmentally harmful behavior, now found itself on the other end of a lawsuit, and the project was canceled. In response to the ordeal, the NRDC reflected that “the law alone was better at stopping bad things from happening than at making good things happen.”