Bonus Resource · 05

Who Controls AI? A Power Map

A plain-language guide to the actual power landscape — who the players are, what levers they pull, and where the real conflicts lie.

The AI conversation is full of abstract nouns: "society," "the tech industry," "governments," "the public." This guide replaces abstractions with specific actors, specific forms of leverage, and specific tensions between them. Power over AI development is not evenly distributed — and knowing who holds it, and how, matters.

The Frontier Labs

A small number of organisations currently have the capability to train frontier AI models — the most powerful systems that set the pace for the field. This is not because AI research is secret; it's because training frontier models requires extraordinary concentrations of compute, data, and engineering talent.

OpenAI
Leverage: Deployment scale, public perception, GPT ecosystem

The most-used AI system globally through ChatGPT. Transformed from a nonprofit to a complex "capped-profit" structure with Microsoft as dominant investor. Its decisions about what GPT-4 and GPT-5 can and cannot do reach more users than those of any other AI lab.

Anthropic
Leverage: Safety research agenda, constitutional AI methodology

Founded by former OpenAI researchers with explicit safety focus. Heavily funded by Amazon and Google. Its safety-first framing shapes how alignment research is discussed globally, even by those who disagree with its approach.

Meta AI
Leverage: Open-source releases, social media distribution

Has taken a different strategic bet: releasing its models openly (Llama series) rather than keeping them proprietary. This shapes what the global research community builds on and makes AI capabilities widely accessible — including to those outside the governance frameworks.

The key tension at this layer: Labs make decisions that affect billions of people but are accountable primarily to their investors. Their voluntary safety commitments are real, but unenforceable. The gap between "we take safety seriously" and "we are subject to binding external oversight" remains enormous.

The Compute Gatekeepers

Building frontier AI requires specialised computing hardware that only a handful of firms worldwide can design and manufacture. This concentrates a different kind of power in the hands of a very small number of actors: those who make the chips that AI runs on.

NVIDIA
Leverage: ~80% of AI training chip market

The company whose GPU chips power almost every major AI model being trained today. NVIDIA's export policies — shaped by US government restrictions — determine which countries and organisations can access frontier compute. This makes NVIDIA an involuntary but powerful instrument of AI geopolitics.

TSMC
Leverage: Manufactures chips for NVIDIA, Apple, AMD, and others

Taiwan Semiconductor Manufacturing Company makes the physical chips that the entire AI industry runs on. Its most advanced fabrication facilities are concentrated in Taiwan. This geographic concentration creates geopolitical risk that governments increasingly treat as a strategic vulnerability — and a strategic asset.

The compute chokepoint

Because AI capability is partly a function of compute, controlling access to chips is a way of controlling access to AI capability. US export controls on NVIDIA chips to China are an explicit attempt to use this chokepoint. This is the most concrete, legible form of AI governance currently in operation — and also the most fragile, since it relies on maintaining a technological lead that may not be permanent.


Governments and Regulatory Bodies

Governments have the formal authority to regulate AI — but most lack the technical capacity to do so effectively, and the ones with the technical capacity (the US, China) have strong incentives to favour their own national champions.

European Union
Leverage: Market access, regulatory precedent (Brussels Effect)

Has passed the world's most comprehensive AI regulation in the AI Act. The "Brussels Effect" — where EU standards become de facto global standards because companies find it easier to comply with a single standard everywhere — may extend to AI. But the EU's lack of its own frontier labs means it regulates technology it doesn't build.

United States
Leverage: Home jurisdiction for leading labs, export controls, standards bodies

Hosts most frontier AI labs but has passed very limited federal legislation. Relies primarily on voluntary commitments and executive orders. The US political system's difficulty passing technical legislation means the labs operate with significant autonomy in their home market.

UK / Bletchley Process
Leverage: Convening, norm-setting, AI Safety Institute

Britain has positioned itself as a coordinator of the international AI safety conversation — hosting the first global AI Safety Summit at Bletchley Park in 2023. Its AI Safety Institute is a model that other governments have started to replicate. Soft power, but consequential.


The Open-Source Ecosystem

Not all AI power is concentrated. The open-source AI community — researchers, developers, and organisations who publish models, tools, and datasets freely — represents a genuinely distributed counterweight to the frontier labs. It also represents a genuine safety challenge: open-source models can be accessed by anyone, including those with harmful intent.

The open-source tension: Open models democratise AI access, enabling researchers and organisations globally who couldn't otherwise compete with frontier labs. They also make it harder to enforce safety measures, since anyone can modify the model. There's a real, unresolved debate in the AI safety community about whether open-source AI is net positive or net negative for safety — and neither side has fully convinced the other.

Civil Society and the Research Community

Sitting outside the lab–government–investor triangle is a diverse ecosystem of AI safety researchers, ethicists, journalists, advocacy organisations, and academics who have no formal authority but considerable epistemic influence. Groups like the Center for AI Safety, the AI Now Institute, and the Future of Life Institute help define what problems are worth taking seriously and what language is used to discuss them.

This community is internally fragmented — with serious disagreements about timelines, risk models, and priorities — but it plays an important role as a pressure mechanism on labs and governments alike. When researchers raise alarms publicly, it creates accountability that voluntary commitments and investor interests alone would not produce.

The fundamental structural tension

The organisations best positioned to build safe AI are the same ones with the most competitive incentive not to slow down. The organisations best positioned to govern AI lack the technical expertise to do so effectively. And the organisations with deep technical expertise and genuine safety commitment — civil society researchers — have almost no formal authority. Power, competence, and accountability are distributed across three different groups with different incentives. This is the governance problem in its most basic form.