The AI conversation is full of abstract nouns: "society," "the tech industry," "governments," "the public." This guide replaces abstractions with specific actors, specific forms of leverage, and specific tensions between them. Power over AI development is not evenly distributed — and knowing who holds it, and how, matters.
The Frontier Labs
A small number of organisations currently have the capability to train frontier AI models — the most powerful systems that set the pace for the field. This is not because AI research is secret; it's because training frontier models requires extraordinary concentrations of compute, data, and engineering talent.
OpenAI operates the most widely used AI system in the world through ChatGPT. It transformed from a nonprofit into a complex "capped-profit" structure with Microsoft as its dominant investor. Its decisions about what GPT-4 and GPT-5 can and cannot do reach more users than those of any other AI lab.
Anthropic was founded by former OpenAI researchers with an explicit safety focus, and is heavily funded by Amazon and Google. Its safety-first framing shapes how alignment research is discussed globally, even by those who disagree with its approach.
Google DeepMind, the result of merging Google Brain and DeepMind, has unparalleled access to data through Google's products and to compute through its own TPUs. Its integration with Search means its AI decisions affect how billions of people find information.
Meta has taken a different strategic bet: releasing its models openly (the Llama series) rather than keeping them proprietary. This shapes what the global research community builds on and makes AI capabilities widely accessible, including to actors outside any governance framework.
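To make that openness concrete, the sketch below shows roughly how anyone with a GPU can download and run an open-weight model through the Hugging Face transformers library. The model ID is illustrative (Llama weights are gated behind a licence acceptance on Hugging Face), and the settings are assumptions, not a recipe from Meta.

```python
# Sketch: running an open-weight model locally via Hugging Face transformers.
# Requires the transformers, torch, and accelerate packages, plus licence
# acceptance for the (illustrative) Llama repository on huggingface.co.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # illustrative model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place weights on whatever GPUs/CPU are available
    torch_dtype="auto",  # keep the dtype the checkpoint was saved in
)

prompt = "The main actors in AI governance are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The governance point is the distribution channel, not the model: once weights are published, access cannot be revoked by any lab-side policy change.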
The Compute Gatekeepers
Building frontier AI requires computing hardware that no single country or company can produce end-to-end on its own. This concentrates a different kind of power in the hands of a very small number of actors: those who design and manufacture the chips that AI runs on.
NVIDIA designs the GPU chips that power almost every major AI model being trained today. Its export policies, shaped by US government restrictions, determine which countries and organisations can access frontier compute. This makes NVIDIA an involuntary but powerful instrument of AI geopolitics.
Taiwan Semiconductor Manufacturing Company (TSMC) makes the physical chips that the entire AI industry runs on, and its most advanced fabrication plants are in Taiwan. This geographic concentration creates a geopolitical risk that governments increasingly treat as both a strategic vulnerability and a strategic asset.
Because AI capability is partly a function of compute, controlling access to chips amounts to controlling access to AI capability. US export controls on NVIDIA chips destined for China are an explicit attempt to exploit this chokepoint. This is the most concrete, legible form of AI governance currently in operation, and also the most fragile, since it relies on maintaining a technological lead that may not be permanent.
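The arithmetic behind that chokepoint is simple enough to sketch. The estimate below uses the standard ~6·N·D approximation for dense-transformer training FLOPs from the scaling-law literature; every concrete number (parameter count, token count, cluster size, sustained throughput) is an illustrative assumption, not a figure from any real training run.

```python
# Back-of-envelope: why chip access is a governable chokepoint.
# Training FLOPs for a dense transformer are approximately 6 * N * D,
# where N is parameter count and D is training tokens. All numbers
# below are illustrative assumptions, not figures from any real run.

N = 70e9   # parameters (assumed: a 70B-parameter model)
D = 15e12  # training tokens (assumed: 15 trillion)
train_flops = 6 * N * D  # ~6.3e24 FLOPs

per_gpu_flops = 4e14  # sustained FLOP/s per chip (assumed, incl. utilisation)
num_gpus = 16_000     # cluster size (assumed)

seconds = train_flops / (per_gpu_flops * num_gpus)
print(f"Training compute: {train_flops:.1e} FLOPs")
print(f"Wall-clock on {num_gpus:,} GPUs: {seconds / 86_400:.0f} days")

# Halve the cluster (the blunt effect of losing chip access) and,
# all else equal, wall-clock time doubles:
print(f"Wall-clock on {num_gpus // 2:,} GPUs: {2 * seconds / 86_400:.0f} days")
```

On these assumed numbers, export controls do not make frontier training impossible; they make it slower and more expensive, which is exactly the lever they are designed to pull.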
Governments and Regulatory Bodies
Governments have the formal authority to regulate AI — but most lack the technical capacity to do so effectively, and the ones with the technical capacity (the US, China) have strong incentives to favour their own national champions.
The European Union has passed the world's most comprehensive AI regulation, the AI Act. The "Brussels Effect", where EU standards become de facto global standards because it is easier for companies to comply with one strict regime everywhere than to maintain regional variants, may extend to AI. But the EU's lack of frontier labs of its own means it regulates technology it does not build.
The United States hosts most frontier AI labs but has passed very little federal AI legislation, relying primarily on voluntary commitments and executive orders. Because the US political system struggles to pass technical legislation, the labs operate with significant autonomy in their home market.
China has both the resources and the political will to direct AI development at a national scale. It treats AI as a strategic technology with explicit national security dimensions. Chinese AI development is less transparent than that of its Western counterparts, but no less ambitious.
Britain has positioned itself as a coordinator of the international AI safety conversation, hosting the first global AI Safety Summit at Bletchley Park in 2023. Its AI Safety Institute is a model that other governments have started to replicate. Soft power, but consequential.
The Open-Source Ecosystem
Not all AI power is concentrated. The open-source AI community — researchers, developers, and organisations who publish models, tools, and datasets freely — represents a genuinely distributed counterweight to the frontier labs. It also represents a genuine safety challenge: open-source models can be accessed by anyone, including those with harmful intent.
Civil Society and the Research Community
Sitting outside the lab–government–investor triangle is a diverse ecosystem of AI safety researchers, ethicists, journalists, advocacy organisations, and academics who have no formal authority but considerable epistemic influence. Groups like the Center for AI Safety, the AI Now Institute, and the Future of Life Institute help define what problems are worth taking seriously and what language is used to discuss them.
This community is internally fragmented — with serious disagreements about timelines, risk models, and priorities — but it plays an important role as a pressure mechanism on labs and governments alike. When researchers raise alarms publicly, it creates accountability that voluntary commitments and investor interests alone would not produce.
The organisations best positioned to build safe AI are the same ones with the most competitive incentive not to slow down. The organisations best positioned to govern AI lack the technical expertise to do so effectively. And the organisations with deep technical expertise and genuine safety commitment — civil society researchers — have almost no formal authority. Power, competence, and accountability are distributed across three different groups with different incentives. This is the governance problem in its most basic form.