An open position

The Case for a Global Pause on Advanced AI Development

Advanced AI development is the most consequential technological gamble in human history. The only responsible course is a coordinated global pause — to understand what we are building before it is too late to choose differently.


We are not against artificial intelligence.
We are against recklessness.

AI has the potential to be the most transformative technology ever created. That is precisely why it demands more caution, more governance, and more international coordination than any technology before it — not less. This page presents the core arguments for why a global pause on advanced AI development is not only reasonable, but necessary.

The Arguments

Six reasons the world must pause now.

Each of the following arguments stands on its own. Together, they constitute an overwhelming case for coordinated restraint before we cross thresholds we cannot undo.

01

Existential risk is not speculative.

Leading AI researchers — including many who have built these systems — assign non-trivial probability to catastrophic outcomes, up to and including scenarios where advanced systems pursue objectives incompatible with human survival. These are not fringe views. They are published estimates from scientists at the forefront of the field.

When the potential downside is irreversible civilizational harm, even a small probability demands that we stop and think. No economic advantage, no competitive edge, no technological milestone justifies gambling with the continued existence of humanity. The precautionary principle that governs nuclear weapons, gain-of-function research, and geoengineering must govern here too.

02

We do not understand what we are building.

No one can reliably predict or fully interpret the behavior of frontier AI systems. The internal reasoning of large neural networks remains fundamentally opaque — even to the engineers who designed them. We can observe outputs, but we cannot verify intent, internal goals, or how behavior will generalize to novel situations.

Deploying systems whose internal reasoning is opaque to their creators is not a calculated risk — it is recklessness dressed as progress. In every other safety-critical domain — aviation, medicine, nuclear energy — we require that engineers understand what they are certifying before it goes into the world. AI development has abandoned this principle entirely.

03

Speed has outpaced governance.

The international frameworks that govern nuclear weapons, chemical warfare, and biosecurity took decades of diplomacy, treaty negotiation, and institution-building to establish. AI capabilities are advancing in months. There is currently no binding global mechanism to prevent catastrophic misuse, no international inspection regime, and no agreed definition of what constitutes an unsafe system.

Without a pause, there is no time to build the governance structures that could make continued development safe. Racing ahead and promising to regulate later is a pattern that has failed repeatedly in technology. With systems of this power, we cannot afford another failure.

04

The competitive race undermines safety.

Companies and nation-states race to be first, systematically treating safety as a cost to be minimized rather than a constraint to be honored. Timelines compress. Review processes shorten. Red-teaming gets cut. Organizations that slow down to be careful are punished commercially and geopolitically.

This is not a problem that individual actors can solve through good intentions. It is a structural collective-action failure — identical in form to an arms race. The only known solution to arms races is coordinated disarmament. Without a global pause enforced across all major actors, individual restraint is structurally futile and will simply cede ground to those less careful.

05

Irreversibility demands extreme caution.

Most technological risks are recoverable. A flawed drug can be recalled. A faulty bridge can be rebuilt. A bad piece of software can be patched. The development of a sufficiently capable autonomous AI system may not be. Once such a system exists and acts in the world, there is no guarantee it can be reliably constrained, reoriented, or switched off.

Unlike every other technology in human history, advanced AI carries the possibility of creating a new kind of agent — one whose goals and capabilities we do not fully control. The irreversibility of that crossing demands a standard of caution we have not yet come close to meeting. We should not proceed until we can.

06

We must consolidate the gains we already have.

AI today already represents a transformative leap. Current systems are capable of delivering vast productivity gains, accelerating scientific discovery, and improving quality of life in measurable, concrete ways. The opportunity is real — and it is already here.

But society has not yet absorbed this reality. Institutions, regulations, labour markets, and educational systems have not caught up. Before pushing further into unknown territory, we must consolidate: build the institutions to govern what we have, learn to use existing tools responsibly, and translate current capabilities into genuine positive outcomes for people. More power without more wisdom is not progress — it is acceleration toward a destination we have not chosen.

The pause is not the end of AI. It is the condition for AI going well.

A coordinated global halt to frontier model development, one that is verifiable, time-limited, and tied to concrete safety benchmarks, is the only path that gives humanity a real choice about its future.
