These are not predictions. They are scenarios — coherent, internally consistent futures that follow from real conditions in the world today. The goal isn't to be right about which one happens. It's to think clearly about what each one would actually feel like from the inside.
Managed Progress
The labs keep moving, but slower than the optimists hoped and faster than the pessimists feared. Regulation arrives unevenly — the EU's AI Act grows teeth, the US passes a patchwork of sector-specific rules, China maintains its own parallel standards. Labs comply in the jurisdictions where they must and route around requirements elsewhere.
AI capabilities continue to improve, but the improvements feel incremental to most people — better assistants, better tools, occasional surprises. Automation accelerates in back-office work, some legal and medical tasks, logistics. Labor markets adjust awkwardly, with significant disruption concentrated in specific sectors and geographies while others remain largely untouched.
- Governance becomes a bureaucratic rather than existential problem — still important, but negotiable.
- The risks are real but diffuse: inequality, labor displacement, surveillance, misinformation at scale.
- The window for structural governance decisions closes quietly — not with a crisis, but with normalisation.
The Race Accelerates
Competitive pressure — between labs, between nations — overrides caution at every decision point. Each time a lab considers slowing down, a competitor's announcement makes restraint feel suicidal. Governments frame AI as a strategic asset, comparable to nuclear capability during the Cold War, and fund development as a matter of national survival.
Capabilities advance faster than governance. Safety research exists but is chronically underfunded relative to capability research. A series of incidents — none catastrophic, but each more significant than the last — creates a cycle of public alarm followed by reassurance followed by the next incident.
- The Collingridge dilemma — a technology's impacts are hard to predict before it is widely deployed, and hard to change once it is — plays out in real time.
- Democratic governance is structurally slow; capability races are structurally fast. This is a problem.
- Individual labs making safety commitments cannot solve a coordination problem — that requires institutions.
The Plateau
Scaling hits genuine limits — not the limits critics have wrongly predicted for years, but real diminishing returns that manifest across multiple labs simultaneously. Data constraints, compute efficiency plateaus, and a growing recognition that current architectures may not be the path to more capable systems all arrive around the same time.
Progress slows. Hype collapses. The AI investment bubble deflates. Most general-purpose AI products that were never quite good enough for their use cases are quietly discontinued. The technology that remains is genuinely useful but narrower than the 2024 vision: excellent code assistants, strong document summarization, reliable specialized agents in constrained domains.
- Not all serious risks disappear — even narrow AI creates real harms at scale.
- Governance frameworks built for existential risk may be poorly suited for managing pervasive, mundane AI harm.
- This is the scenario most likely to produce complacency — "see, it was fine" — even if it wasn't entirely fine.
The Discontinuity
Something unexpected happens — not necessarily a single dramatic event, but a capability jump that arrives faster and lands harder than the smoothly extrapolated curve predicted. Perhaps a new architectural approach, perhaps an emergent property at scale that wasn't anticipated, perhaps an interaction between AI systems and scientific research that accelerates both simultaneously.
The distinctive feature of this scenario is not that everything goes wrong. It's that things move faster than institutions can process. The gap between what exists and what governance was designed to handle widens suddenly rather than gradually.
- Speed matters as much as trajectory — even good outcomes can be destabilizing if they arrive too fast.
- The Petrov problem: like the Soviet officer who in 1983 correctly judged a missile-launch warning to be a false alarm, decision-makers will face moments requiring correct judgment under radical uncertainty.
- This is why alignment research, interpretability work, and governance infrastructure matter now, before they're needed urgently.