Bonus Resource · 02

Near-Future Scenarios: Four Paths AI Could Take

Grounded extrapolations of where we might be by 2030 — not science fiction, but structured speculation based on current trajectories.

These are not predictions. They are scenarios — coherent, internally consistent futures that follow from real conditions in the world today. The goal isn't to be right about which one happens. It's to think clearly about what each one would actually feel like from the inside.

⚖️
Scenario A

Managed Progress

Plausibility: Moderate–High

The labs keep moving, but slower than the optimists hoped and faster than the pessimists feared. Regulation arrives unevenly — the EU's AI Act grows teeth, the US passes a patchwork of sector-specific rules, China maintains its own parallel standards. Labs comply in the jurisdictions where they must and route around requirements elsewhere.

AI capabilities continue to improve, but the improvements feel incremental to most people — better assistants, better tools, occasional surprises. Automation accelerates in back-office work, some legal and medical tasks, logistics. Labor markets adjust awkwardly, with significant disruption concentrated in specific sectors and geographies while others remain largely untouched.

It is 2028. You use an AI assistant every day without thinking much about it, the way people stopped thinking about search engines. Your company's HR department is half the size it was in 2023. A new professional certification — "AI-Supervised Practice" — is required for some medical specialties. The regulatory debate has moved from "should we regulate AI" to "which committee oversees which part."
What this scenario means:
  • Governance becomes a bureaucratic rather than existential problem — still important, but negotiable.
  • The risks are real but diffuse: inequality, labor displacement, surveillance, misinformation at scale.
  • The window for structural governance decisions closes quietly: not with a crisis, but with normalization.
🔥
Scenario B

The Race Accelerates

Plausibility: Moderate

Competitive pressure — between labs, between nations — overrides caution at every decision point. Each time a lab considers slowing down, a competitor's announcement makes restraint feel suicidal. Governments frame AI as a strategic asset, equivalent to nuclear capability in the Cold War, and fund development as a matter of national survival.

Capabilities advance faster than governance. Safety research exists but is chronically underfunded relative to capability research. A series of incidents — none catastrophic, but each more significant than the last — creates a cycle of public alarm followed by reassurance followed by the next incident.

It is 2028. Three major labs have each announced systems they describe as "approaching general capability." Independent researchers cannot verify this — the systems are closed. A 2027 incident in which an autonomous trading system triggered a 40-minute market freeze is still being investigated. The Senate has held eleven hearings. No legislation has passed.
What this scenario means:
  • The Collingridge dilemma plays out in real time: by the time a technology's effects are clear enough to govern, it is already too entrenched to change easily.
  • Democratic governance is structurally slow; capability races are structurally fast. The mismatch means rules tend to arrive after the facts they were meant to shape.
  • Individual labs making safety commitments cannot solve a coordination problem — that requires institutions.
🧊
Scenario C

The Plateau

Plausibility: Lower but real

Scaling hits genuine limits — not just the limits that critics have predicted for years and been wrong about, but real diminishing returns that manifest across multiple labs simultaneously. Data constraints, compute efficiency plateaus, and a growing recognition that current architectures may not be the path to more capable systems all arrive around the same time.

Progress slows. Hype collapses. The AI investment bubble deflates. General-purpose AI products that were never quite good enough for their use cases are quietly discontinued. The technology that remains is genuinely useful but narrower than the 2024 vision: excellent code assistants, strong document summarization, reliable specialized agents in constrained domains.

It is 2029. The term "AGI" has become mildly embarrassing in technical circles, like promising cold fusion. AI tools are everywhere and genuinely useful. The breathless coverage has moved on to something else. Three large AI companies have merged or been acquired. A retrospective piece in The Atlantic is titled: "What We Got Wrong About the Intelligence Explosion."
What this scenario means:
  • Not all serious risks disappear — even narrow AI creates real harms at scale.
  • Governance frameworks built for existential risk may be poorly suited for managing pervasive, mundane AI harm.
  • This is the scenario most likely to produce complacency ("see, it was fine"), even if it wasn't entirely fine.
Scenario D

The Discontinuity

Plausibility: Low but non-negligible

Something unexpected happens: not necessarily a single dramatic event, but a capability jump that arrives sooner and lands harder than the smoothly extrapolated curve predicted. Perhaps a new architectural approach, perhaps an emergent property at scale that wasn't anticipated, perhaps an interaction between AI systems and scientific research that accelerates both simultaneously.

The distinctive feature of this scenario is not that everything goes wrong. It's that things move faster than institutions can process. The gap between what exists and what governance was designed to handle widens suddenly rather than gradually.

It is 2027. A system deployed six months ago for drug discovery has, according to its developers, suggested three novel antibiotic mechanisms that are now in accelerated trials. No one is entirely sure how. The same architecture, given access to a different dataset, has also demonstrated something that two independent labs, using different evaluation methods, are reluctant to describe publicly. The UN convenes an emergency session. The session produces a communiqué.
What this scenario means:
  • Speed matters as much as trajectory — even good outcomes can be destabilizing if they arrive too fast.
  • The Petrov problem: like the Soviet officer who correctly judged a 1983 missile alert to be a false alarm, decision-makers will face moments requiring correct judgment under radical uncertainty.
  • This is why alignment research, interpretability work, and governance infrastructure matter now, before they're needed urgently.