The Launch Sequence
AI advances over the next few years could catalyze a new golden age of discovery and abundance. To realize this potential, we must solve two broad problems. First, these benefits may not come by default or quickly enough, given existing commercial incentives. Second, rapidly improving AI capabilities will likely come with risks that industry isn’t sufficiently incentivized to solve.
The Launch Sequence is a collection of essays, written by expert authors, describing concrete but ambitious AI projects to accelerate progress in science and security.
You can start by reading our introductory essay, or scroll down to view the whole collection.

AI for Science
Despite a recent wave of new public, private, and nonprofit projects focused on AI for science, we are still far from fully harnessing AI capabilities to accelerate solutions to the world’s most important problems.
Furthermore, too little effort is focused on overcoming the numerous structural barriers to realizing the benefits of AI-enabled scientific discovery. The deployment of new clean energy technologies will likely face a near-endless series of vetoes at the hands of conservation groups. New drugs will likely be stalled for years in the FDA’s needlessly onerous approval process. Thanks to a mix of poor incentives and antiquated government computer systems, many useful scientific datasets aren’t accessible for AI training. And public research is hampered by a broken funding model, in which principal investigators spend almost half their time on grant-related paperwork.
The proposals here cover both object-level projects in AI for science and projects aimed at restructuring how science works to take advantage of AI.

AI for Security
AI progress may bring risks that industry is poorly incentivized to solve. Advanced coding agents used throughout the economy to vastly increase productivity could also be put to work, day and night, to find and exploit security vulnerabilities in critical infrastructure. AI systems that can broadly accelerate the pace of medical research could also help engineer biological weapons. Leading AI labs have some incentives to prevent the misuse of their models, but the offense-defense balance of emerging AI capabilities in areas like cyber and bio is uncertain. There’s no iron law of computer science or economics that says defensive capabilities will grow in tandem with offensive capabilities. In the worst case, private incentives to adequately invest in preventing misuse could be dwarfed by the scale of the risks new AI technologies impose on the public.
Proposals from the AI safety community often attract criticism for focusing on solutions that rely on brittle, top-down control, such as a licensing regime for all models above a threshold of training compute. But even granting these critiques, the problem remains: AI misuse and misalignment could well cause real harm in the near future, and technical research aimed at solving these problems remains a niche field. Moreover, thanks partly to an instinct towards nonproliferation, AI safety researchers have devoted insufficient attention to solutions that assume dangerous AI capabilities will rapidly diffuse. In the face of superintelligence that is both widely available and too cheap to meter, too few projects wield AI to build technologies that asymmetrically benefit defense over offense.
The proposals here aim to accelerate the development of the defensive and verification technologies we need to safely transition to a world of ubiquitous advanced AI.