The Launch Sequence

AI advances over the next few years could catalyze a new golden age of discovery and abundance. To realize this potential, we must solve two broad problems. First, these benefits may not come by default or quickly enough, given existing commercial incentives. Second, rapidly improving AI capabilities will likely come with risks that industry isn’t sufficiently incentivized to solve.

The Launch Sequence is a collection of essays, written by expert authors, describing concrete but ambitious AI projects to accelerate progress in science and security.

You can start by reading our introductory essay, or scroll down to view the whole collection.

Emerging Technology
Preparing for Launch
An introduction to The Launch Sequence: Why shaping AI progress matters, and how to go about it
Read the full report
August 11th 2025

AI for Science

Despite a recent wave of new public, private, and nonprofit projects focused on AI for science, we are still not close to fully harnessing new AI capabilities to accelerate solutions to the world’s most important problems.

Furthermore, too little effort is focused on solving the numerous structural barriers to realizing the benefits of AI-enabled scientific discovery. The deployment of new clean energy technologies will likely face a near-endless series of vetoes at the hands of conservation groups. New drugs will likely be stalled for years in the FDA’s needlessly onerous approval process. Thanks to a mix of poor incentives and antiquated government computer systems, many useful scientific datasets aren’t accessible for AI training. And public research is hampered by a broken funding model, in which principal investigators spend almost half their time on grant-related paperwork.

The proposals here cover both object-level projects in AI for science and projects aimed at restructuring how science works to take advantage of AI.

Emerging Technology
Benchmarking for Breakthroughs
How to incentivize AI for national priorities through a strategic challenge and evaluations program
Read the full report
August 8th 2025
Emerging Technology
Using X-Labs to Unleash AI-Driven Scientific Breakthroughs
How to adapt our science funding mechanisms to the unique infrastructure needs of large-scale AI projects
Read the full report
August 8th 2025
Emerging Technology
The Replication Engine
How to build automated replication infrastructure for better, faster science
Read the full report
August 8th 2025
Emerging Technology
Teaching AI How Science Actually Works
How block-grant labs can generate the real-world data AI needs to do science
Read the full report
August 8th 2025
Emerging Technology
A Million-Peptide Database to Defeat Antibiotic Resistance
How to build a large peptide database to train an AlphaFold for new antibiotics
Read the full report
August 8th 2025
Emerging Technology
Biotech’s Lost Archive
How to fuel AI by unlocking the FDA’s knowledge of biotech failures
Read the full report
August 8th 2025
Emerging Technology
Scaling Materials Discovery with Self-Driving Labs
How to close the gap between AI-guided material design and real-world validation
Read the full report
August 8th 2025
Emerging Technology
Mapping the Brain for Alignment
How to map the mammalian brain’s connectome to solve fundamental problems in neuroscience, psychology, and AI robustness
Read the full report
August 8th 2025

Calls to Discovery

Statements from leaders across government, industry, and civil society about the importance of launching ambitious AI projects to advance science and security

“AI stands to rapidly accelerate the rate of scientific progress. It is extremely important that the United States act now to establish and maintain a lead in AI-enabled scientific discovery.”

Sam Rodriques
Co-Founder and CEO of FutureHouse

“Now more than ever, we need to unleash a new era of scientific discovery that stabilizes our grid, cures diseases, and keeps our country safe. The American Science Acceleration Project (ASAP) seeks to do just that by leveraging AI and other technologies to accelerate scientific progress by 10 times by 2030, using our best and brightest minds from across the country to make that happen. I am grateful that the Institute for Progress has coordinated the development of AI Moonshot proposals to help guide ASAP’s efforts.”

Senator Martin Heinrich
D-NM

“The guardrails that we put in place for state-of-the-art AI models are essential, and that science is imperfect at best. The national security ecosystem 100% needs to lean into new AI technology, but part of leaning into that technology is figuring out how to make those guardrails safe and trustworthy, and that’s an area where the research must go much faster than it has been.”

Kathleen Fisher
Director of DARPA’s Information Innovation Office (I2O)

“AI holds enormous potential to speed the pace of scientific breakthroughs. The United States should remain at the frontier of scientific discovery, ensuring that AI enables new advances while managing the risks that attend ever more powerful AI systems. This collection usefully articulates specific, concrete ways to do just that.”

Richard Fontaine
CEO of the Center for a New American Security (CNAS)

“AI is the key to unlock a new era of scientific discovery—where we don’t just accelerate progress, we redefine what’s possible. For the United States, it will empower us to tackle our biggest challenges with speed, precision, and imagination. The Launch Project will provide us that imagination.”

Gerry Petrella
General Manager of US Public Policy at Microsoft, former Policy Director for Sen. Chuck Schumer

“AI represents the most significant force multiplier for scientific research in generations. The key is channeling this power toward solving our most pressing challenges while building the robust institutions and safeguards needed to ensure American leadership remains both effective and responsible.”

Brad Carson
President and Co-Founder of Americans for Responsible Innovation, former Congressman (D-OK)

“Maintaining U.S. leadership in AI will require out-innovating our competitors, not just at the level of the technology itself, but also in its most promising downstream applications. To that end, the Launch Sequence offers a collection of concrete yet ambitious proposals to unlock the transformative potential of AI for scientific discovery and national security. Combined, these proposals offer a compelling agenda for securing AI’s enormous potential upside while safeguarding against emerging threats, from autonomous labs for discovering new materials, to scalable methods for patching vulnerabilities in our critical infrastructure.”

Sam Hammond
Chief Economist at the Foundation for American Innovation

“In 2019, AI models could barely babble. Six years and 100,000x more compute later, we have systems that assist software engineers, answer STEM questions, and solve complex math problems. Remarkably, we now have a clear roadmap for AI to revolutionize science and the economy within the next decade—a transformation every nation should prepare for.”

Jaime Sevilla
Director of Epoch AI

“AI offers vast potential not just to accelerate but to utterly transform science in ways we cannot yet imagine. Either we harness this potential to American advantage or we risk seeing it accrue to someone else’s. Yet the AI‑science revolution won’t happen on autopilot—we need concrete policy actions that set the right incentives and safeguards, and to ensure that it accelerates real progress.”

Daniel Correa
CEO of the Federation of American Scientists

“We already know people will use the full power of AI to make really great cat videos. The question of whether we will fully exploit AI to advance living standards by obliterating the obstacles to rapid scientific and technological progress, however, remains open.”

Eli Dourado
Head of Strategic Investments at the Astera Institute, former Chief Economist at the Abundance Institute

“AI is bringing about a new era of scientific progress, helping address critical challenges from treating diseases to developing new materials to generate, store and transform energy. We must urgently invest in a healthy AI ecosystem to harness these opportunities while carefully navigating potential risks.”

Pushmeet Kohli
Vice President of Science and Strategic Initiatives, Google DeepMind

“AI has the potential to supercharge scientific inquiry, but this isn’t just about scientific progress. Getting AI for science right means ensuring AI breakthroughs happen in America, benefit American workers, and secure American technological dominance for the next generation.”

Abigail Ball
Executive Director of American Compass

“As the pace of technological progress continues to accelerate, it’s more important than ever for American science to meet that moment with creativity and ambition. From super-charging our understanding of the building blocks of human life to driving innovation in the very ways we approach the work of science itself, The Launch Sequence brings an important and refreshing sense of optimism and possibility to an AI-accelerated world.”

Erwin Gianchandani
Assistant Director for Technology, Innovation, and Partnerships at the National Science Foundation (NSF)

“The combination of AI reasoning models and cloud-enabled, automated laboratories will enable new approaches to scientific discovery. The US should lead in exploring this new frontier as there will be permanent gains to the first mover.”

Jason Kelly
Co-founder and CEO of Ginkgo Bioworks, former Chair of US National Security Commission on Emerging Biotechnology

AI for Security

AI progress may bring risks that industry is poorly incentivized to solve. Advanced coding agents used throughout the economy to vastly increase productivity could also be put to work, day and night, to find and exploit security vulnerabilities in critical infrastructure. AI systems that can broadly accelerate the pace of medical research could also help engineer biological weapons. Leading AI labs have some incentives to prevent the misuse of their models, but the offense-defense balance of emerging AI capabilities in areas like cyber and bio is uncertain. There’s no iron law of computer science or economics that says defensive capabilities will grow in tandem with offensive capabilities. In the worst case, private incentives to adequately invest in preventing misuse could be dwarfed by the scale of the risks new AI technologies impose on the public.

Proposals from the AI safety community often attract criticism for focusing on solutions that rely on brittle, top-down control, such as a licensing regime for all models above a threshold of training compute. But despite the validity of these critiques, the problem remains: AI misuse and misalignment could well cause real harm in the near future, and technical research aimed at solving these problems remains a niche field. Moreover, thanks partly to an instinct towards nonproliferation, AI safety researchers have devoted insufficient attention to solutions that assume that dangerous AI capabilities will rapidly diffuse. In the face of superintelligence, both widely available and too cheap to meter, too few projects wield AI to build technologies that asymmetrically benefit defense over offense.

The proposals here are aimed at accelerating the defensive and verification technologies we need to safely transition to a world of ubiquitous advanced AI.

Emerging Technology
Scaling Pathogen Detection with Metagenomics
How to generate the data necessary to reliably detect new pathogen outbreaks with AI
Read the full report
August 8th 2025
Emerging Technology
Operation Patchlight
How to leverage advanced AI to give defenders an asymmetric advantage in cybersecurity
Read the full report
August 8th 2025
Emerging Technology
The Great Refactor
How to secure critical open-source code against memory safety exploits by automating code hardening at scale
Read the full report
August 8th 2025
Emerging Technology
A Sprint Toward Security Level 5
How to protect American AI from nation-state level threats
Read the full report
August 8th 2025
Emerging Technology
The Infinity Project
How to use AI and mathematics to prove and improve science and security
Read the full report
August 8th 2025
Emerging Technology
Preventing AI Sleeper Agents
How to ensure American AI models are robust and reliable via a DOD-led red- and blue-teaming effort
Read the full report
August 3rd 2025
Emerging Technology
Faster AI Diffusion Through Hardware-Based Verification
How to use privacy-preserving verification in the AI hardware stack to build trust and limit misuse
Read the full report
August 8th 2025