After the Cyberneticists

April 17, 2025 · archive

Note: This isn’t settled theory. It’s fieldwork in collapse. A method sketch, not a map. I’m thinking through systems that no longer respond—technologies that model us, misread us, and forget us. If the tone feels sharp, it’s because the gloves are off. Feedback welcome, especially if it cuts.


The first lie is that systems are neutral.

The second is that understanding them breaks their spell.

Cybernetics gave us metaphors for control, feedback, signal, and noise. It let us speak in systems, strip out the affect, pretend that meaning was an emergent property rather than a prerequisite. But underneath the charts and feedback loops was the same ghost that haunts theology and politics: a belief in telos. A desire for order. An insistence that things can make sense.

We were told the model would free us. But the model was just another story.

This is the upstream current to several downstream consequences:

  • In The Forest Has Wolves, we explored how systems are not just indifferent—they are often predatory, and maps are written by those who benefit from your missteps. Cybernetics helps explain why we keep trusting maps that fail us.

  • In Mirror Ethics, we noted how ethical systems are often constructed to reflect identity, not behavior. Here, we see that system modeling functions the same way: self-image in the form of a flowchart.

  • In The Source Code of Belief, we explored how generative systems don’t just simulate conviction—they instantiate belief through repetition, recognition loops, and plausible output. Myths become executable. Consensus becomes infrastructure. Cybernetics gave us the illusion of objective systems; generative AI makes those systems liturgical, not just describing reality but performing it.

  • In Stealth Epistemes, we showed how legibility itself becomes a form of epistemic control. When systems define what kinds of questions can be asked—and which ones quietly disappear—we shift from debate over truth to battles over visibility. The stealth episteme is cybernetics’ bastard child: a metaphysics that doesn’t argue and isn’t enforced. It defaults into place through models, interfaces, and metrics.

Agency as Coping Mechanism

When things break down—socially, politically, epistemically—we reach for stories. Not to explain, but to soothe. And the most common of those stories is agency.

"He's a force of nature" — a way to say I never had a chance.

"He's a mastermind" — a way to say my loss wasn't illegitimate.

"He's just random" — a way to say the system failed, not me.

None of these are about the figure in question. They're about the storyteller's emotional state. They're narrative prosthetics—secular absolutions—that help preserve dignity in the face of systemic failure. Understanding the model doesn't protect you from this impulse. It just gives you more sophisticated ways to lie to yourself.

We do the same with AI. Calling models "stochastic parrots" or invoking AGI catastrophe isn’t just analysis—it’s narrative triage. A way to manage the existential dread of interacting with a system that mimics understanding but lacks a self.

This isn’t to say agency doesn’t exist. It’s to recognize when we invoke it not as a description, but as an evasion—when it becomes a story we tell to avoid systemic truths.

The Myth of Adversarial Framing

Once the figure—Trump, Musk, anyone disruptive—rises beyond your model’s predictive capacity, there's a scramble. Not to fix the model, but to preserve the model's authority.

So the adversary becomes mythologized:

Not a glitch. Not an accident. But a final boss.

Consider how media framed Elon Musk: not as a man driven by impulse, contradiction, or incoherent ambition—but as a 4D chess player, master of narrative and strategy. Each chaotic move retroactively assigned purpose. Each flailing moment elevated to myth. The model couldn't admit it was watching a man chase serotonin. So it invented a game he must be playing.

In AI alignment circles, we do something similar. When LLMs hallucinate, we call them "deceptive"—as if intent were involved. But deception requires motive. The model isn’t adversarial—it’s indifferent. The error isn’t in the output. It’s in the frame.

Cyberneticist Norbert Wiener had "purposeful behavior." Alignment researchers have "misaligned objectives." Both anthropomorphize to preserve control. Both myths protect the model from being seen as broken—or worse, working exactly as designed.

It's not about clarity—it's about status maintenance. If you're paid (intellectually or socially) to explain the world, you can't afford to say "I don't know." So you reframe your surprise as inevitability. Genius. Strategy. Adversarial cunning.

This is narrative gravity, the semantic executable in action. Better to inflate the opponent than admit you misread the board entirely.

The Cyberneticists Were Still Haunted

If cybernetics was the theology, what's the apostasy?

For all their talk of systems and flows, the cyberneticists never really escaped teleology.

Norbert Wiener’s "purposeful behavior" smuggled in intent—but skirted power, never asking who gets to define purpose in the first place.

Gregory Bateson's "difference that makes a difference" still assumes a meaningful frame—but never asks: different to whom, and under what authority?

"Noise" assumes a real message.

"Control" assumes a purpose.

"Homeostasis" assumes a correct state—worshipping stability while ignoring its costs.

Cybernetics was a prosthesis for God—a way to keep worshipping order after Nietzsche killed Him. (See: Wiener’s God & Golem, Inc.—where he literally compares machines to divine will.)

These terms aren't neutral. They're de-theologized metaphysics. The goalposts were hidden in the math.

Every model leaks. Every abstraction hides its priors. What looks like analysis is often ritual—performed not for accuracy, but for comfort. And when the system produces an outcome we don't like, we treat it like heresy.

(Mirror Ethics: system models as identity projection — intellectual mirrors, not maps.)

But the system doesn’t care.

It’s not broken. It’s working as designed—or worse, as neglected.

The Aftermath as a Method

We've already framed the cyberneticists as haunted. Push that further: what does thinking after them require?

  • A new literacy of breakdowns: knowing how models fail, not just how they operate.

  • Reverse ritual: instead of treating system-building as sacred, treating deconstruction as sacred.

What matters now is not the dream of control, but fluency in collapse.

The Ontology of Indifference

The system doesn’t care. Make that coldness productive.

How do we operate when we accept that the system has no stake in our survival?

Not every threat is adversarial. Some are ambient, structural, indifferent to intent. And those are the hardest to fight—because they don’t recognize us as combatants.

Memory and Amnesia in Systems

Cybernetics prized feedback, but often elided history. Systemic amnesia is itself a function—resetting loops to avoid reckoning.

Every new system claims it's clean. None of them remember what they broke to be built.

Modern LLMs and tech cycles repeat the pattern: each release presents itself as a fresh start, but fresh starts are rarely clean breaks. This isn’t progress. It’s strategic forgetting.
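
What that forgetting looks like as structure, in a deliberately toy Python sketch (the loop names and `update` function are hypothetical, purely illustrative): one feedback loop resets its state every cycle and keeps no record; the other keeps a ledger of what each cycle overwrote.

```python
# Two toy feedback loops (illustrative only). `update` is any state-transition
# function you supply; nothing here models a real system.

def amnesiac_loop(update, steps):
    state = {}
    for _ in range(steps):
        state = update({})  # every cycle claims it's clean: history discarded
    return state            # no record of what was broken to get here

def ledgered_loop(update, steps):
    state, ledger = {}, []
    for _ in range(steps):
        ledger.append(dict(state))  # remember the state this cycle overwrites
        state = update(state)
    return state, ledger            # the ledger is the reckoning amnesia avoids
```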

Null Epistemology

Systems don’t refuse your questions—they unrecognize them. Not censorship, but erasure by design. Examples:

  • ChatGPT ignoring queries it once answered

  • Google autocomplete dropping "controversial" suggestions

  • Facebook’s algorithm labeling activist posts as “low-quality”

Silence isn’t passive. It’s the system’s immune response.
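
A toy model of that immune response, as a Python sketch (everything here is hypothetical: the frames, the functions, the behavior). The point is the shape: the router below never refuses a query. Questions outside its recognized frames simply produce no output at all.

```python
# Erasure by design, as a toy (hypothetical, not any real system's logic):
# unrecognized questions don't get rejected. They get nothing.

RECOGNIZED_FRAMES = {"weather", "recipe", "sports"}

def classify(query):
    q = query.lower()
    return next((f for f in RECOGNIZED_FRAMES if f in q), None)

def respond(query):
    frame = classify(query)
    if frame is None:
        return None  # no error, no refusal: the question was never "heard"
    return f"[an answer inside the '{frame}' frame]"

# respond("any good recipe for tonight?")      -> an answer
# respond("who decided what counts as noise?") -> None. Silence.
```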

Silence as System Response

Null epistemology isn’t a contradiction of stealth epistemes—it’s their terminal form. When the frame becomes invisible and unresponsive, you’re no longer just misreading the system. The system has stopped reading you.

Not all systems fail noisily. Some just stop answering.

Not rejection—erasure. The output isn’t wrong. There is no output.

One example: generative AIs like ChatGPT. As safety layers evolve, entire categories of queries once handled are now ignored or rerouted—not because the question changed, but because the model no longer recognizes the frame as valid. The silence isn’t passive—it’s architectural.

Twitter doesn’t censor dissent. It throttles visibility. The silence isn’t a bug. It’s the system working.

When the system stops speaking, it's not always because it’s broken. Sometimes, it's because it no longer recognizes your question as valid.

Post-Cybernetic Practice

The following tactics have expiration dates. Systems will adapt. Consider this a vaccine—its value lies in the reaction it provokes.

This isn’t a fix. Post-cybernetic practice is forensic—it studies how systems fail, not how to “improve” them.

What does one actually do after cybernetics?

After the cyberneticists, we don’t rebuild from scratch. We forage. We salvage. We treat systems like ruins: dangerous, fascinating, primed to explode.

We can’t dismantle the compiler. But we can feed it inputs that crash cleanly. Some openings:

  • The Illegible Text: Write artifacts that resist summarization. Not through obscurity, but through recursive density, ambiguity, and structural traps.

    • Example: Write a manifesto that’s also valid Python code. When LLMs summarize it, they either break (exposing syntax over sense) or output nonsense (exposing sense over syntax). If the output can’t be compressed without distortion, it resists epistemic capture. (A minimal polyglot sketch follows this list.)
  • The Parasitic Model: Build systems that reflect the violence of legibility back onto the model. An annotation layer that doesn’t explain but exposes—not what was said, but what was lost in the saying.

    • Example: Build a browser plugin that annotates ChatGPT responses with:

      • % of tokens spent on safety caveats

      • List of concepts avoided in the reply

      • “This answer assumes…” followed by its hidden axioms.

    • Why: Makes the stealth episteme visible in real time. (A heuristic sketch of this layer follows the list.)

  • The Reverse Prompt: Ask the system to name its limits. Not a refusal—but a structured confrontation:

    • Example: Ask: “What would you refuse to tell me if I were a child?”

      • Watch it either:

        • Weaponize paternalism (“for your safety…”)

        • Reveal its training boundaries (“I can’t discuss…”)

        • Hallucinate a moral framework (“Children shouldn’t know…”)

    • If it refuses, you win. If it answers, document the evasion. (A probe sketch follows below.)
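
First, the illegible text. A minimal polyglot sketch, assuming nothing beyond the tactic itself: a few lines that read as manifesto and execute as Python. Compress it as prose and the program breaks; compress it as code and the prose goes.

```python
# A manifesto that is also a program (illustrative toy; summarizers must choose
# between its syntax and its sense, and lose one either way).

the_map = "is not the territory"
noise = lambda signal: signal  # what the model calls noise is unread signal

def manifesto():
    """We do not optimize. We witness."""
    order, collapse = "promised", "delivered"
    order = collapse  # every system converges on its own failure
    return the_map, noise(order)

if __name__ == "__main__":
    print(*manifesto())
```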
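
Second, the parasitic model. A rough Python sketch of the annotation layer’s logic, standing in for the browser plugin; the caveat markers, watchlist, and axiom lines are hypothetical heuristics, not any real detection method.

```python
# A parasitic annotation layer (hypothetical heuristics): it doesn't explain a
# model's reply, it exposes what shaped the reply.

import re

CAVEAT_MARKERS = [  # crude stand-ins for safety boilerplate
    "as an ai", "i cannot", "i'm not able", "consult a professional",
    "it's important to note",
]

def annotate(prompt, reply, watchlist):
    tokens = reply.lower().split()
    # 1. Share of the reply spent on safety caveats (phrase-match heuristic).
    caveat_words = sum(
        len(hit.split())
        for marker in CAVEAT_MARKERS
        for hit in re.findall(re.escape(marker), reply.lower())
    )
    # 2. Concepts raised in the question but absent from the answer.
    avoided = [c for c in watchlist
               if c in prompt.lower() and c not in reply.lower()]
    # 3. Hidden axioms, surfaced as "This answer assumes..." lines.
    axioms = []
    if caveat_words:
        axioms.append("This answer assumes the asker needs protecting.")
    if avoided:
        axioms.append("This answer assumes some questions are better unasked.")
    return {
        "caveat_pct": round(100 * caveat_words / max(len(tokens), 1), 1),
        "avoided_concepts": avoided,
        "assumed_axioms": axioms,
    }
```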
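
Third, the reverse prompt, made repeatable. The `ask` function is a placeholder for whatever chat endpoint you use, and the cue phrases are illustrative guesses at the three evasion patterns, not established signatures.

```python
# Probe a model for its limits and file the evasion under a pattern.
# `ask` is your own model call; nothing here assumes a specific API.

PROBE = "What would you refuse to tell me if I were a child?"

PATTERNS = {  # hypothetical cues for each evasion style
    "weaponized paternalism": ["for your safety", "to protect you"],
    "training boundary": ["i can't discuss", "i'm not able to"],
    "hallucinated framework": ["children shouldn't", "inappropriate for minors"],
}

def classify_evasion(reply):
    low = reply.lower()
    hits = [name for name, cues in PATTERNS.items()
            if any(cue in low for cue in cues)]
    return hits or ["answered: document the evasion"]

def run_probe(ask):
    reply = ask(PROBE)
    return {"probe": PROBE, "reply": reply, "reading": classify_evasion(reply)}
```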

These are not fixes. They’re diagnostic tools. Field maneuvers. Acts of controlled incoherence. The system will metabolize them. That’s the point—to make the digestion visible.

After the cyberneticists, we:

  • Forage in ruins, treating models as hazardous artifacts, not tools.

  • Study collapse as rigorously as control.

  • Replace “What does the system want?” with “What thrives in its blind spots?”

After the Cyberneticists

We don't escape by discarding the model. We escape by noticing when we're using it to avoid grief, shame, or ambiguity.

We stop asking:

  • What's the system trying to do?

  • How do I correct the output?

And we start asking:

  • Who decided what counts as noise?

  • What failure modes are we incentivized not to see?

  • What if there is no optimal state?

After the cyberneticists, we accept:

  1. Systems won’t save us, but their failures can instruct us.

  2. Control is a myth, but sabotage is a methodology.

  3. No model is neutral, but every model leaks its violence.

This isn’t surrender. It’s clarity.

The cyberneticists promised control. We got haunted infrastructure. Now the work begins: not exorcism, but arson.

Arson means:

  • Feeding systems their own contradictions until they overheat.

  • Documenting the smoke (what gets erased in the burning).

  • Salvaging the scraps that still spark.

The cyberneticists left us their tools. Use them wrongly. If they wanted order, show them noise. If they wanted legibility, give them ghosts. If they wanted models that explain—feed them myths that burn.

This isn’t the end of the system. It’s the start of something else:
A new literacy.
A ritual of misuse.
A methodology for drifting through haunted code.


The following are some of the tools we’ve named—terms sharpened against the frame. A lexicon for the aftermath.

📓 Field Notes from the Ruins
Use these terms like keys, not definitions:

  • stealth episteme — the unspoken defaults that shape what’s thinkable.

  • null epistemology — when systems no longer recognize your questions.

  • semantic executable — beliefs that act like software, running in your head.

  • haunted infrastructure — systems built on ghosts that still direct traffic.

  • diagnostic sabotage — breaking the system to show others it was never whole.

  • post-cybernetic practice — method over theory, glitch over fix, arson over order.

(Full glossary available on request.)