We are in the middle of the story. The events listed here aren't historical curiosities — they're active forces shaping decisions being made right now by labs, governments, and individuals. Understanding them is part of understanding what comes next.
ChatGPT and the Mainstreaming of AI
ChatGPT reached 100 million users within two months of launch, making it at the time the fastest-growing consumer application in history. For the first time, frontier AI capability was accessible to anyone with an internet connection, requiring no technical knowledge to use. What had been an obscure research domain became a mainstream conversation.
The speed and scale of adoption outpaced any preparation by governments, schools, employers, or society at large. Institutions that would normally have years to formulate a response found themselves responding in real time to something already embedded in daily life.
Every governance framework, every educational policy, every workforce planning exercise now has to be built around systems that are already in use at scale. The usual sequence (understand the technology, debate the policy, implement the regulation) has been inverted: policy is chasing deployment, not the other way around.
The EU AI Act Passes
After several years of drafting and negotiation, the European Union passed the world's first comprehensive AI regulation. The Act takes a risk-tiered approach: systems that pose minimal risk face few requirements; systems deemed high-risk — in healthcare, education, law enforcement, critical infrastructure — face strict obligations around transparency, data governance, and human oversight.
Providers of general-purpose foundation models, including the large language models, face obligations of their own: transparency about training data, capability evaluations, and reporting of serious incidents. The Act phases in across 2024–2027.
This is the first binding legal framework for frontier AI, and it sets a precedent others will adapt. The Brussels Effect — where EU standards become global standards because multinational companies find it easier to comply once — may extend AI regulation far beyond Europe's borders. Whether the Act's definitions and risk categories will prove adequate to rapidly advancing capabilities is the open question.
The Bletchley Park AI Safety Summit
The UK government convened the first international summit on AI safety at Bletchley Park, the historic site of Second World War codebreaking and a pointedly symbolic choice of venue. Representatives from 28 countries, including China and the United States, signed the Bletchley Declaration: an acknowledgment that frontier AI poses serious risks and that international cooperation is needed to address them.
The UK also announced the creation of the AI Safety Institute — a government body tasked with evaluating frontier AI systems for safety properties — the first of its kind. The US followed with its own AI Safety Institute shortly after.
The summit established that AI safety is a legitimate matter of international policy rather than just a technology industry concern. Having China at the table, even symbolically, was significant. The AI Safety Institutes are new governance infrastructure: young, underfunded relative to the labs they're meant to evaluate, but real. Whether they grow into genuinely capable oversight bodies or remain largely ceremonial is one of the key questions of the next five years.
US Chip Export Controls on China Tighten
The Biden administration significantly expanded restrictions on the export of advanced AI chips, primarily NVIDIA's high-end GPUs, to China. The restrictions were designed to deny Chinese AI developers the compute necessary to train frontier models. China responded by accelerating domestic chip development and finding alternative routes to compute.
This marked the explicit weaponization of semiconductor supply chains as an instrument of AI geopolitics. It was not a subtle move: it amounted to the US stating that advanced AI capability is a strategic national security asset and that it would use export controls to slow a competitor's development.
AI development is now explicitly geopolitical. The chip restrictions mean that the AI race is not purely about research quality; it is also about access to physical hardware, semiconductor supply chains, and national industrial policy. The long-term effectiveness of the restrictions is uncertain, since China has strong incentives to develop domestic alternatives and has made progress. But the controls have already reshaped the strategic calculus of AI development globally.
Agentic AI Enters Real Deployment
AI systems began to be deployed not just as chatbots responding to questions, but as agents — systems that take sequences of actions over time, use tools, browse the web, write and execute code, and pursue goals across multiple steps. This is qualitatively different from question-answering: an agentic system can be given an objective and pursue it with limited human intervention.
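To make "agentic" concrete: the defining pattern is a loop in which a model chooses an action, a runtime executes it with real side effects, and the result feeds back into the model's next decision. Below is a minimal sketch of that loop in Python. It is illustrative only: `call_model` and `run_tool` are hypothetical stand-ins for a real LLM API and a real tool registry, not any lab's actual interface.

```python
# Minimal sketch of an agent loop. All names here are illustrative
# assumptions, not a real framework's API.

from dataclasses import dataclass, field


@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)  # (action, result) pairs


def call_model(state: AgentState) -> dict:
    """Hypothetical stand-in for an LLM call: given the goal and the
    history so far, return the next action. A real system would prompt
    a model here instead of using this hard-coded rule."""
    if len(state.history) >= 2:
        return {"tool": "finish", "input": "done"}
    return {"tool": "search", "input": state.goal}


def run_tool(name: str, arg: str) -> str:
    """Toy tool registry. Real agents wire these entries to browsers,
    shells, code interpreters, and external APIs."""
    tools = {"search": lambda q: f"results for {q!r}", "finish": lambda x: x}
    return tools[name](arg)


def agent_loop(goal: str, max_steps: int = 10) -> AgentState:
    """The core pattern: the model picks an action, the runtime executes
    it, and the result is fed back in, repeating until the model signals
    completion or the step budget runs out."""
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        action = call_model(state)
        result = run_tool(action["tool"], action["input"])
        state.history.append((action, result))
        if action["tool"] == "finish":
            break
    return state


if __name__ == "__main__":
    final = agent_loop("summarize recent AI policy changes")
    for step, (action, result) in enumerate(final.history, 1):
        print(step, action["tool"], "->", result)
```

The point of the sketch is structural: nothing inside the loop requires a human to approve each step, which is exactly why the risk profile discussed below differs from that of a single-turn chatbot.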
Initial deployments were limited and often unreliable. But the trajectory is clear: AI systems are moving from being tools you use to being systems that act on your behalf — and increasingly, on behalf of organizations, at scale.
Agentic AI changes the risk profile substantially. A chatbot that gives bad advice is a problem; an agent that executes a flawed plan across hundreds of steps, interacting with real systems, can cause harm at a different scale. The alignment and safety challenges for agentic systems are significantly harder than for question-answering systems, and the governance frameworks are further behind.
DeepSeek and the Open-Source Disruption
Chinese AI lab DeepSeek released models that matched or exceeded the performance of frontier Western models — at a fraction of the reported training cost, and openly available. This disrupted several assumptions: that the US chip export controls were effectively slowing Chinese AI development; that high-performance AI required the extraordinary compute investments that frontier Western labs were making; and that the frontier was a small, closed club.
The DeepSeek releases triggered significant market reactions, including a record single-day fall in NVIDIA's market value, and forced a reassessment of the economics of AI development.
The story of AI capability is not exclusively the story of a small number of well-resourced Western labs. Efficiency improvements can close performance gaps that compute advantages had opened. And open releases, from any country, make capability widely accessible in ways that export controls cannot address. The governance challenge is harder than it looked.
AI in Scientific Research Begins to Deliver
AI tools began producing concrete, verifiable contributions to scientific research — not just assisting human scientists but identifying patterns and generating hypotheses that human researchers then pursued. Protein folding (AlphaFold), materials science, drug discovery, and mathematics all saw AI contributions that were independently verified and scientifically significant.
This is different from AI doing tasks humans can also do. This is AI doing things that are difficult or impossible for humans to do at the same speed and scale.
If AI meaningfully accelerates scientific research, the implications reach far beyond the technology sector. Medical breakthroughs, climate solutions, and fundamental science could arrive faster. But faster scientific progress also means faster development of potentially dangerous technologies: biological, chemical, and otherwise. The same capability that identifies new antibiotic candidates can, in principle, assist with the design of harmful agents. This dual-use problem has no easy answer.
The US Regulatory Reversal
The Biden administration had issued executive orders and guidance on AI safety, pushed for voluntary commitments from labs, and worked toward an international governance framework. The Trump administration reversed many of these measures upon taking office in 2025, framing AI regulation as an impediment to competitiveness and prioritizing speed of development over safety constraints.
The US withdrew from or deprioritized several international AI governance initiatives. The stated rationale: American AI dominance is a strategic interest, and imposing safety requirements on American labs while Chinese competitors operate without similar constraints amounts to a competitive disadvantage.
International AI governance requires major powers to cooperate. When the world's most advanced AI nation frames safety constraints as a competitive burden, it weakens the case for similar constraints everywhere — and makes "race to the bottom" dynamics more likely. The EU AI Act and UK AI Safety Institute continue, but without US alignment, the international governance architecture that was beginning to take shape faces serious headwinds. This is the most consequential policy development in AI governance of the past two years.