Emerging Technology

What does America think the Trump Administration should do about AI? 

Analyzing public recommendations to the Office of Science and Technology Policy’s (OSTP) AI Action Plan
April 29, 2025

Progress in emerging technology happens fast, but policy moves very slowly. Formal processes for policymaking (reading bill text, reviewing existing regulations, aggregating and analyzing public feedback) are often highly manual. This takes time away from the most important parts of policymaking: figuring out which ideas to prioritize and implementing them.

At IFP, we’re experimenting with tools to help solve this problem. Today, we’re launching aiactionplan.org – a tool for analyzing recommendations made to the Trump administration on AI policy to help speed up the discovery and prioritization of good AI policy ideas.

See the AI Action Plan Database: www.aiactionplan.org

Where are these recommendations coming from? In January 2025, President Trump tasked the Office of Science and Technology Policy (OSTP) with developing an AI Action Plan to promote American leadership in AI. OSTP requested input from the public and received 10,068 submissions. Last week, these submissions were made public. We used AI to extract the recommendations from each submission and compiled them into a searchable database.
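The extraction step is conceptually simple: each submission is passed to a language model that pulls out discrete, self-contained recommendations. Below is a minimal sketch of that kind of step, assuming an OpenAI-style chat completions API; the model name, prompt, and JSON format are illustrative, not necessarily what was used to build the database.

```python
import json
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You will be shown one public comment submitted in response to OSTP's "
    "request for input on the AI Action Plan. Extract each distinct policy "
    "recommendation as a short, self-contained sentence. Respond with a JSON "
    'object of the form {"recommendations": ["...", "..."]}. If the comment '
    "contains no concrete recommendation, return an empty list."
)

def extract_recommendations(submission_text: str) -> list[str]:
    """Ask the model for a JSON list of the recommendations in one submission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": submission_text},
        ],
    )
    return json.loads(response.choices[0].message.content)["recommendations"]
```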

We hope the database will serve as a valuable tool for researchers and policymakers to discover and prioritize AI policy ideas. Here, we offer a high-level analysis of key themes and ideas within the recommendations.

Who made recommendations? 

Around 93% of the submissions (9,313 out of 10,068) were from individuals. Of the individual submissions, around 40% were anonymous or used an obvious pseudonym. Over 90% of the individual submissions were extremely short, non-substantive, and negative in sentiment.


Around 95% of individual submissions related to AI and copyright, with many focusing on impacts on artists.


Of all the submissions, we identified 721 that were most likely to contain substantive policy ideas, i.e., those submitted by organizations. We categorized these organizations into ten types.

We then extracted specific recommendations from each submission, 4,784 in total.

What topics were covered?

We clustered the recommendations to identify 20 core topics. Each recommendation was then tagged with one or more of these topics. The most popular topics were “standards & regulation,” “infrastructure,” and “data & IP.”
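As a rough illustration of how such a topic pass can work, the sketch below embeds each recommendation and groups the embeddings into 20 clusters. The embedding model, clustering algorithm, and the load_recommendations() helper are assumptions for illustration; the multi-label tagging described above would be a second pass over the named clusters.

```python
# Minimal sketch of a first-pass topic clustering over the extracted
# recommendations. Library and model choices are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

recommendations = load_recommendations()  # hypothetical loader for the 4,784 strings

embedder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedder.encode(recommendations, normalize_embeddings=True)

kmeans = KMeans(n_clusters=20, n_init="auto", random_state=0)
labels = kmeans.fit_predict(embeddings)  # one provisional topic per recommendation

# Inspect a few members of each cluster to name the topic, then tag each
# recommendation with one or more topics in a second, multi-label pass.
for topic_id in range(20):
    members = [r for r, lab in zip(recommendations, labels) if lab == topic_id]
    print(topic_id, len(members), members[:3])
```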

Topic coverage varied moderately across different kinds of organizations. Compared to other organizations:

  • Think tanks tended to focus on security and export controls. 
  • Academia tended to focus on basic science and infrastructure.
  • Advocacy organizations tended to focus on ethics, civil rights, data, and IP.
  • Professional societies tended to focus on standards, regulations, and healthcare.
  • Industry associations tended to focus on infrastructure and deregulation.
  • Frontier AI developers tended to focus on global engagement, infrastructure, and deregulation, while avoiding topics such as ethics, civil rights, and evidence of risks.
  • Other AI companies tended to focus on government procurement and the market impacts of the technology.

Which parts of the government did the recommendations focus on? 

We also extracted the specific government office or agency that each recommendation was directed at (its “assignee”).
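Once each recommendation carries an assignee field, the shares reported below are straightforward to tally. A minimal sketch, assuming recommendations are stored as dictionaries with an optional "assignee" key (the field name is illustrative):

```python
from collections import Counter

def assignee_shares(recommendations: list[dict]) -> dict[str, float]:
    """Share of each assignee among recommendations that name one."""
    named = [r["assignee"] for r in recommendations if r.get("assignee")]
    counts = Counter(named)
    return {agency: n / len(named) for agency, n in counts.most_common()}
```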

Overall, the Department of Commerce continues to be seen as the center of action for AI policy. At the agency level, around 12% of the recommendations were focused on the Department of Commerce. Other popular assignees for recommendations included the Executive Office of the President (10% of recommendations that mentioned a specific assignee), the National Science Foundation (7%), and the Department of Defense (6%).

Among more specific offices, NIST was the most popular assignee, named in about 6% of recommendations that specified one; counting the AI Safety Institute (AISI), which sits within NIST, that figure rises to 8%. The next most popular assignees were the Office of Science and Technology Policy (OSTP), the Food and Drug Administration (FDA), and the Bureau of Industry and Security (BIS), mentioned in 5.5%, 2%, and 1% of recommendations, respectively.

Recommendations on specific topics

Within each topic, we clustered recommendations that called for fundamentally similar actions. Here, we provide an overview of these clusters within popular topics that overlap with our interest areas at IFP. 

Basic science

There were 462 recommendations related to basic science. These focused on a few key themes: 

  • Increasing federal funding for fundamental AI research through NSF, DOE, DARPA, and NIH.
  • Providing compute access through the National AI Research Resource (NAIRR) to assist with the application of AI to scientific discovery (e.g., biology, materials science, climate).
  • Establishing interdisciplinary research centers and innovation hubs for AI R&D.
  • Allocating defense funding at DARPA/IARPA to accelerate methods for improving the robustness, reliability, explainability, and security of AI models and hardware. 
  • Supporting public research into AI evaluation science through grant-making bodies like the NSF.

Infrastructure

There were 1,129 recommendations related to infrastructure, which spanned power generation and transmission, semiconductor manufacturing, data centers, robotics, and broad permitting reform. Key themes were:

  • Establishing and funding the NAIRR to provide AI compute access to academic institutions and startups. 
  • Streamlining permitting processes for AI energy infrastructure development through authorities like the Defense Production Act (DPA).
  • Leasing federal land for data center and associated energy infrastructure development.
  • Reforming the design and implementation of the National Environmental Policy Act (NEPA) more broadly to narrow the set of actions that trigger NEPA, expand the use of categorical exclusions, and shorten timelines for environmental reviews.
  • Accelerating domestic chip production through targeted subsidies, strategic tariffs, and finalizing awards under the CHIPS Act. 
  • Accelerating the expansion of the transmission grid by reforming interconnection processes and using federal authorities to speed up construction of new high-voltage lines.

Security

There were 721 recommendations related to security. These tended to focus on security at a few different levels of the tech stack:

  • Funding R&D initiatives to research model weight security, hardware-enabled mechanisms for chips, and zero-trust data center design.
  • Creating comprehensive standards for the physical security and cybersecurity safeguards that frontier AI labs should adopt.
  • Using powerful AI models to discover and patch vulnerabilities to ensure that our critical infrastructure is safeguarded from AI-enabled cyberattacks.
  • Establishing a well-funded and authorized “AI Security Institute” to assist with these initiatives.
  • Requiring mandatory security red-teaming for advanced AI systems.
  • Setting up creative incentive mechanisms to ensure compliance with security requirements and promote security R&D, such as tying compliance and R&D investments to access to federally subsidized energy for data centers or to export control licenses. 

Evidence of risks

There were 605 recommendations related to collecting evidence about the risks posed by AI models and applications. These recommendations focused on:

  • Creating standardized risk-evaluation suites and requiring testing conducted by frontier model developers and/or third-party auditors. 
  • Setting up a system for continuous monitoring and incident reporting that requires or allows labs and AI researchers to disclose security breaches and system malfunctions.
  • Funding programs to research AI interpretability and explainability.
  • Developing threat models and government responses triggered by different future states of evidence. 
  • Fully funding and empowering a hub of AI experts (such as the AI Safety Institute) to do red-teaming, technical evaluations, and technical standards development. 

Government procurement

There were 476 recommendations related to the government’s procurement of AI. These recommendations focused on:

  • Streamlining the FedRAMP/ATO process for AI technologies to enable faster adoption.
  • Shifting from traditional contracting to outcome-based/performance-based models for AI procurement.
  • Adopting commercial industry-standard terms, conditions, and licensing models for government AI procurement.
  • Developing standardized procurement frameworks and processes for evaluating and testing AI models during acquisition.

Export controls

There were 194 recommendations related to export controls. The majority were about strengthening export controls or making them more effective. These focused on:

  • Providing a substantial budget increase to the Bureau of Industry and Security (BIS) so it can hire sufficient technical talent and deploy new technologies to assist in export control administration and enforcement.
  • Creating new rules focused on gaps in the current set of controls, including new controls on inference chips like the H20/B20 and “Know Your Customer” (KYC) requirements for cloud companies offering large-scale AI computing.
  • Developing and implementing new mechanisms for export control enforcement, such as geolocation features for chips, new software technologies for tracking exports and mitigating smuggling, and intelligence sharing from the intelligence community to BIS.

Many recommendations also emphasized the importance of streamlining export controls and reducing their negative impact on American companies. These focused on:

  • Streamlining the export license process for trusted partners.
  • Reassessing which countries are classified as Tier II and Tier III under the AI Diffusion Framework.
  • Setting up conditional export controls with greater export flexibility tied to heightened security measures.
  • Coordinating with US allies to implement comparable controls.

Open-source

There were 156 recommendations related to open-source AI. These tended to focus on both the importance of encouraging the open-source ecosystem and the potential added risks that some open-source models may pose: 

  • Funding open-source model development.
  • Establishing a national repository for open-source models and datasets.
  • Evaluating the risks of open-source models and defining/encouraging responsible release practices.
  • Allocating compute resources for open-source AI development.

Talent & Education

There were 662 recommendations related to talent & education. These addressed a wide range of issues, including immigration, job displacement, and K-12 education. Some key themes were:

  • Developing targeted immigration reforms to attract and retain AI talent.
  • Funding AI research and training initiatives at universities.
  • Creating federal AI reskilling and upskilling programs for displaced workers.
  • Establishing comprehensive K-12 AI literacy programs and curriculum integration.
  • Implementing AI talent initiatives for the federal government workforce.