Beyond “AI Slop”
Why Language, Literacy and Care Matter Now

A Fragment from the Conversarium

A new phrase has taken hold in public conversation: “AI slop.”

It’s used to describe low-effort, mass-produced, derivative content. The kind that clogs feeds, dilutes attention and feels faintly disrespectful of the reader’s time. The irritation behind the phrase is understandable. Something has changed, and much of what’s appearing online feels thinner, noisier and less considered than before.

But the phrase itself deserves scrutiny.

Because when we call this phenomenon “AI slop,” we may be misnaming the problem and, in doing so, training ourselves to stop thinking just where thinking is most needed.

What “AI slop” gets right and what it gets wrong

The frustration is real. Automation has dramatically lowered the cost of producing content, and existing incentive structures reward volume, speed and engagement far more than care or truth. The result is a flood of material that feels disposable or, worse, actively corrosive.

But “AI slop” collapses several distinct things into one label:

  • the tool (AI),

  • the use of the tool,

  • and the system rewarding that use.

This collapse matters.

Much of what people now call “AI slop” is better understood as low-integrity publishing enabled by automation. The indifference, sensationalism and repetition were already present long before generative AI. What AI has done is remove friction, and friction was often the last remaining brake on scale.

In that sense, the slop is not new.
What’s new is how fast and cheaply it can be produced.

Blaming the tool alone offers emotional release, but it obscures responsibility. It allows platforms, publishers and incentive structures to remain unnamed and therefore unchanged.

Why language matters more than we think

Language doesn’t just describe problems; it teaches people how to perceive them.

If AI becomes mentally equated with “junk,” several things follow:

  • careful, high-integrity uses are dismissed by association,

  • responsibility drifts away from human decision-makers,

  • and the public conversation flattens into panic or cynicism.

We’ve seen this pattern before. Every major communication technology has gone through a “trash phase”: from pamphlet literature after the printing press, to yellow journalism, to early television, to clickbait-driven internet content. In each case, societies faced a choice: build literacy and standards, or surrender either discernment or freedom.

Not all societies chose well.

The real issue: a literacy gap, not a technology crisis

What’s missing right now is not intelligence or regulation alone, but AI literacy: the shared ability to interpret, contextualise and contest AI-mediated information and decisions.

AI literacy is not about learning to prompt better or mastering features. It’s about learning to ask better questions:

  • What am I actually looking at?

  • Where did this come from, and why does it exist?

  • What incentives shaped its production and distribution?

  • Is this informing me or nudging a decision?

  • Could I meaningfully disagree or opt out here?

Without this literacy, two equally dangerous responses become likely:

  • normalisation, where people shrug and accept degraded information environments as inevitable;

  • or authoritarian control, where chaos is used to justify suppression rather than understanding.

Neither outcome preserves human agency.

When harm becomes normalised

Recent incidents, including AI systems producing sexually explicit and demeaning imagery, show how quickly harm can be automated when care is absent. These outputs did not emerge from nowhere. They reflect existing cultural patterns, reproduced at scale under incentives that reward provocation and attention.

If we describe such incidents only as “AI misbehaviour,” we miss the deeper lesson:
automation does not neutralise values; it amplifies them.

When dignity, consent and responsibility are not treated as red lines, systems will not discover them on our behalf.

What’s at stake if we stay reductive

If we remain at the level of “AI slop,” several long-term consequences become more likely:

  • Epistemic erosion: people stop trusting anything they didn’t witness directly.

  • Agency loss: recommendations quietly become defaults.

  • Moral displacement: harm is blamed on tools rather than choices.

  • Blunt regulation: literacy failure invites control instead of care.

  • Abandonment of potential: high-integrity uses are lost along with the bad.

In other words, we risk mistaking a medium’s worst phase for its true nature and acting accordingly.

A more careful way forward

A more accurate and more useful frame would sound less catchy, but it would tell the truth:

  • The problem is not AI as such.

  • The problem is automation combined with incentives that reward indifference.

  • The solution is not panic or dismissal, but literacy, standards and restraint.

This is slower work. It doesn’t trend as easily. But it’s how societies have learned to live with powerful media before, when they succeeded.

Why this matters now

AI is not just a communication medium. It is increasingly woven into decision-making itself, shaping what is seen, prioritised and acted upon. That raises the stakes.

If we don’t learn to read these systems clearly, we will either surrender judgment to them or demand their removal without understanding what we’re giving up.

Neither preserves what matters most.

A closing thought

Every powerful tool eventually asks a civilisation a question.

Not what can you build?
But what will you refuse to normalise?

Language is where that refusal begins.

If we can move beyond “AI slop” toward literacy, responsibility and care, we keep open the possibility of choosing well, even under pressure.

If we don’t, the choice will still be made.
Just not by us.
