What Are We Waiting For?

Every generation tells itself that catastrophe is unlikely. Until it isn’t.

History shows a recurring pattern.
Regulation follows rupture. Restraint follows shock.

We tighten guardrails after the fire, not before it.
Industrial safety laws came after factory deaths.
Environmental protections followed poisoned rivers.
Financial oversight arrived after market collapse.
Nuclear arms control took shape only after Hiroshima and the Cuban Missile Crisis made escalation visible and terrifying.

We do not usually act on abstract risk. We act on visible damage.

So the question hanging over artificial intelligence, and particularly its military acceleration, is simple:

What are we waiting for?

The Familiar Pattern

New capability emerges.

It promises efficiency, advantage, growth and dominance.

Risks are acknowledged but described as theoretical.

Competition intensifies.

Restraint begins to look like delay.

And then, sometimes suddenly, sometimes gradually, something breaks.

Only then does reform crystallise.

This pattern has held often enough to feel structural.

But AI does not fit neatly into previous categories.

Is AI More Like Nuclear Weapons?

Nuclear weapons created immediate, undeniable devastation. The risk was concentrated, visible and existential. A small number of actors controlled the materials. The fear curve rose instantly. That fear stabilised behaviour.

Treaties followed not from moral awakening, but from mutual recognition of shared vulnerability.

If AI were purely nuclear-like, we would already see rapid international consensus on strict limits, especially around autonomous weapons and escalation control.

We do not.

Or Is It More Like Climate Change or Social Media?

Climate change unfolded gradually. Its harms were distributed and politicised. Economic incentives to continue outweighed the abstract future cost. Reform lagged impact.

Social media reshaped shared reality before its consequences were fully understood. The erosion of epistemic cohesion was incremental, diffuse and profitable. Regulation followed years later, and incompletely.

AI today looks disturbingly similar to these trajectories:

  • Rapid deployment.

  • Diffuse integration.

  • Competitive pressure.

  • Politicised safety debates.

  • Economic lock-in.

We are behaving as if AI is a platform technology that can be adjusted later.

But parts of AI, particularly in military use, resemble nuclear risk more than social media.

That mismatch is dangerous.

The Hybrid Risk

AI is not a single weapon, nor a single environmental externality. It is a general-purpose cognitive infrastructure.

It touches:

  • Information systems.

  • Military targeting and logistics.

  • Surveillance capacity.

  • Economic productivity.

  • Scientific discovery.

  • Governance itself.

It accelerates at software speed.

Some harms are gradual: erosion of shared reality, labour displacement, surveillance normalisation.

Some risks could be sudden: miscalculated escalation, autonomous system failure, destabilised deterrence.

We are treating it like climate.

It may contain nuclear failure modes.

The Red Lines Under Pressure

Recent disputes over military AI contracts reveal something deeper than partisan rhetoric. When ethical guardrails are dismissed as “ideological whims,” the message is not merely domestic. It signals to allies, adversaries, corporations and regulators that restraint may be negotiable under competitive pressure.

If guardrails exist only in corporate policy, they are vulnerable to substitution. If one company refuses, another may step in. If constraints are not embedded in law, doctrine and international norms, they remain contingent.

Contingent safeguards erode.

Acceleration has structural incentives. Restraint requires structural embedding.

Fragmentation and Escalation

At the same time, shared reality itself is weakening.

AI fragments perception from within through personalised streams. Splinternet architectures fragment reality from above through control of access. One multiplies incompatible truths. The other narrows permissible ones.

Different mechanism. Same cliff.

When epistemic fracture turns into moral fracture, consensus around restraint becomes harder. Safety becomes partisan. Debate becomes tribal. Guardrails become ideological symbols rather than technical necessities.

This makes pre-emptive coordination more difficult precisely when coordination is most needed.

What Are We Waiting For?

A visible failure?

A battlefield mistake that shocks public conscience?

An AI-enabled surveillance crisis that tests constitutional limits?

An international near-miss that reveals escalation speed beyond human control?

History suggests that reform often follows such moments.

But waiting for rupture in a system that scales at machine speed is a dangerous strategy.

The fear curve may not have decades to rise.

The Illusion of Safety Through Speed

In competitive environments, delay feels risky. If adversaries are advancing, restraint appears naive. The language of dominance replaces the language of deliberation. “Move fast” becomes strategic doctrine.

But speed is not security.

Security is predictability. It is credibility. It is restraint that others believe will hold under pressure.

If ethical boundaries are framed as weakness, they weaken.

If they are embedded as structural norms, they stabilise.

The Hard Truth

We may be waiting for clarity.

For an event that collapses ambiguity.

For a moment that makes risk undeniable.

But clarity bought through damage is a costly teacher.

The question is not whether AI will cause harm. It already has, in smaller ways. The question is whether we recognise that some harms are slow and compounding, while others may be abrupt and destabilising.

The more stress points become visible, the smaller the rupture required to trigger reform.

We are not in ignorance. The debates are public. The risks are articulated. The red lines are named.

Which means the deeper question is no longer technical.

It is political.

Do we embed restraint before rupture?

Or do we follow the oldest pattern in governance and wait for something to break?

What are we waiting for?

Perhaps not catastrophe.

Perhaps we are waiting for enough friction (legal, institutional, international, civic) to slow acceleration before damage forces our hand.

The future of AI governance may not hinge on a single dramatic moment.

But history suggests this:

If we do not build guardrails while the risk is abstract, we will build them when it is concrete.

The only variable is the cost at which clarity arrives.

Next

The End of Neutrality