What Moral Discourse Can’t Do
The Furnace
There’s a pattern you learn to recognize if you spend enough time watching online discourse metabolize human suffering: care goes in, heat comes out, and nothing changes.
Someone experiences something terrible. They find the words for it—this takes courage, vulnerability, the willingness to be seen. The articulation spreads. Others respond with their own experiences, their solidarity, their outrage at the injustice. The discourse accelerates. Threads multiply. Takes proliferate. And somewhere in that multiplication, the original suffering becomes content. The pain that generated the conversation gets processed into engagement metrics. Concern becomes performance. And by the time the discourse reaches terminal velocity, you can’t distinguish genuine moral response from optimization for visibility.
This is the thermodynamic reality of moral discourse under platform capitalism: care → articulation → engagement → noise. Every conversion step loses fidelity. Every translation from felt experience to public performance degrades the signal. But the system doesn’t optimize for fidelity—it optimizes for heat. And heat is measurable. Heat generates revenue. Heat is proof of function.
We built systems for moral energy to flow through, and those systems were designed for throughput, not reflection. They were built to maximize engagement, not understanding. To generate attention, not transformation. And now we’re discovering what happens when you route human empathy through infrastructure optimized for virality: you get a lot of activity that looks like caring, sounds like caring, might even feel like caring to the people performing it, but produces approximately zero change in the underlying conditions that generated the suffering in the first place.
This is what Jacques Ellul called Technique—the autonomous, self-augmenting logic of efficiency that subordinates everything to its own expansion. Technological systems develop their own momentum, their own imperatives, independent of human intention. The instrumental mind becomes a force that recognizes no boundaries because it’s structured not to.
But Ellul was thinking about factories and bureaucracies. He didn’t imagine we’d build the same autonomous optimization machine out of nothing but talking about talking. He didn’t foresee that Technique would turn inward, that it would colonize not just our labor but our interiority itself.
This is Clout Technique: Ellul’s logic applied to human subjectivity. Not tools replacing thought, but thought reorganized to serve throughput. Not machines doing the work, but humans restructuring their own moral responsiveness to generate measurable social capital.
The furnace isn’t metaphorical anymore. It’s the engine of subjectivity. And we’re all feeding it, voluntarily, because the alternative is invisibility—and invisibility has been successfully recoded as death.
Dreyfus’s Missing World
Hubert Dreyfus spent five decades arguing that artificial intelligence would fail at a fundamental level, and for most of that time AI researchers ignored him. Then symbolic AI collapsed into the AI winter, twice, and suddenly his critique looked less like philosophical pedantry and more like prophecy.
His argument, drawn from Heidegger and Merleau-Ponty, was deceptively simple: intelligence isn’t symbol manipulation. It’s not rules. It’s not logic. Intelligence is embodied, contextual, situated in a world of meanings that can’t be formalized.
The background: All intelligent action depends on a vast, unformalizable “background” of cultural practices, bodily skills, and situational understanding. We know how to act in a room, how to hold a conversation, how a chair “affords” sitting—not because we have explicit rules for these things, but because we are embodied beings in a shared world. We are the context. We don’t compute it.
Skilled coping: This was Dreyfus’s key term for how intelligence actually works. It’s the smooth, fluid, mostly non-conscious way we navigate situations—riding a bike, finding the right word in a conversation, sensing when someone’s uncomfortable. This isn’t “thinking” in the computational sense. It’s not rule-following. It’s direct, embodied responsiveness to what the situation demands.
A chess grandmaster doesn’t calculate millions of moves. They see the board. A jazz musician doesn’t process harmonic theory in real-time. They respond to what the music needs. An expert nurse doesn’t run through diagnostic checklists. They know something’s wrong before they can articulate why.
This is intelligence as Dreyfus understood it: pre-discursive, embodied, irreducible to formal systems.
The frame problem: This is where AI kept running aground. In an infinitely complex and constantly changing world, how does a formal system know what information is relevant? For a human, this is solved by being in the world—our embodied, situated nature makes relevance legible. We don’t have to process everything because most things aren’t salient to us. The background handles it.
For an AI, relevance becomes a potentially infinite regress of logical rules. How do you formalize “what matters right now”? How do you write rules for recognizing when the rules don’t apply? How do you teach a system to care about the right things at the right time?
You can’t. Not through formalization. That was Dreyfus’s point.
His diagnosis: Symbolic AI was attempting the impossible—trying to capture in explicit rules something that exists before and beneath explicit rules. The background, the skilled coping, the embodied sense of what matters—these are pre-discursive. They exist prior to language and logic. They’re non-representational. We don’t have mental models of how to ride a bike; we just do it.
And anything that’s pre-discursive and non-representational can’t be formalized. You can’t write it down as code. You can’t encode it in symbols. You can simulate the outputs—make something that looks intelligent—but you can’t capture the understanding itself.
AI, Dreyfus argued, would always be outside looking in. Constitutively incapable of entering the embodied world where intelligence actually lives.
The symbolic AI researchers thought they could prove him wrong. They couldn’t. The neural network researchers thought they’d solved the problem. They hadn’t—they just changed the question.
The Inversion
The neural net revolution didn’t refute Dreyfus. It conceded his thesis and changed the metric.
Instead of trying to formalize intelligence, we gave up on formalization entirely. We built systems that produce intelligent-seeming outputs through statistical pattern matching over vast datasets. We stopped asking “how does understanding work?” and started asking “can we simulate the appearance of understanding well enough that it’s useful?”
And the answer turned out to be: yes, kind of, sometimes, in narrow domains, if you have enough data and compute and you’re not too picky about explaining how the damn thing works.
We “solved” AI by accepting that we’d never understand it. The systems work because they bypass formalization, not despite it. Mechanistic interpretability keeps failing because there might not be discrete mechanisms to interpret. Statistical ghosts: outputs without an architecture.
Dreyfus was right. Intelligence can’t be formalized. We proved it by building systems that work through non-formalization.
But here’s what Dreyfus didn’t predict, couldn’t have predicted, because he was focused on the machine’s limitations rather than the human response to those limitations:
We started formalizing ourselves.
Not through external coercion. Not because some algorithm forced us. But voluntarily, enthusiastically, because the platforms we built to distribute our thoughts and feelings and moral intuitions rewarded formalization. They rewarded performances that could be quickly parsed, rapidly shared, immediately categorized as belonging to the correct position.
Clout Technique is the mirror image of Dreyfus’s AI critique: Humans are losing access to the same pre-discursive ground by self-formalizing moral intuition into optimizable discourse patterns.
Where AI can’t reach embodied understanding, because no formalization reaches that depth, humans can’t stay in embodied understanding once we’ve restructured our moral responsiveness for distribution and measurement.
Consider the shape of moral discourse now:
You feel something—genuine distress, real concern, actual empathy. But before you can even fully feel it, the optimization layer kicks in: How do I say this? What’s the right frame? Will this land? Am I centered enough in this narrative? Will this generate the right kind of engagement or the wrong kind?
By the time you’ve processed those questions, the original felt sense—the pre-discursive moral intuition—has been reformatted for throughput. You’re no longer responding to the situation. You’re responding to the discourse about the situation, positioning yourself correctly within it, performing the appropriate moral affect.
This is thought reorganized to serve throughput.
We’ve taken our own embodied, pre-discursive moral intuition—our gut feelings about right and wrong, our sense of empathy, our nuanced understanding of context—and we’ve forced it into the formalizable logic of Technique. We did to ourselves what symbolic AI tried and failed to do to cognition.
The Dreyfus Inversion:
AI (Dreyfus’s original critique). The problem of formalization: constitutively incapable of entering the embodied world. Outside looking in; simulation without ground.

Humans (Clout Technique). The problem of formalization: voluntarily abandoning the embodied world through self-formalization for optimization. Inside looking out; optimization without ground.
Moral positions become branded slogans. Empathy becomes performative signal-boosting. Complex social reality gets reduced to binary, side-choosing discourse. Our behavior is shaped by algorithmic incentives that reward simplicity, outrage, and tribal positioning over nuance, uncertainty, and genuine understanding.
We meet the AI at the worst possible point: both operating in the same flattened, discursive space, neither having access to the embodied ground where meaning lives.
The Thermodynamics of Moral Affect
You can chart the entropy curve precisely:
Input: Suffering. Someone is hurt, violated, denied dignity. This is potential energy—raw, unprocessed, carrying the full weight of lived experience.
First conversion: Experience → articulation. “Here’s what happened to me.” This is already a loss. Language can’t capture the fullness of felt experience. But it’s necessary for communication, for building solidarity, for making the private pain legible to others.
Second conversion: Articulation → outrage. “This shouldn’t happen to anyone.” The personal becomes political. The specific instance points to a pattern. This is morally necessary work—recognizing injustice, naming it, refusing to accept it as natural.
Third conversion: Outrage → critique of style. “The way you’re talking about this is problematic.” The focus shifts from the suffering to the discourse about the suffering. Tone policing, frame disputes, arguments about who’s centered. The meta-layer emerges.
Fourth conversion: Critique → exhaustion performance. “I’m so tired of these conversations.” The discourse about the discourse about the discourse. Weariness becomes content. Burnout becomes brand.
Terminal state: Uniform moral noise. Everything flattened to the same engagement temperature. A thread about genocide generates the same thermodynamic profile as a thread about a celebrity’s apology tour. Not because they’re morally equivalent, but because they’ve both been processed through enough meta-layers that the original signal has fully degraded into heat.
At each conversion step, you lose moral fidelity. The connection to the original suffering weakens. The felt urgency dissipates. But the engagement metrics increase. Because the system isn’t optimized for preserving connection to experience—it’s optimized for generating measurable activity.
This is thermodynamics applied to human interiority: you’re converting potential energy (care) into kinetic energy (discourse) into heat (engagement). And like all thermodynamic systems, entropy increases. Order becomes disorder. Signal becomes noise. Meaning becomes metrics.
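If you want the decay made literal, here is a toy model, nothing more: the loss factors and engagement multipliers below are invented for illustration, not measurements of any real platform. What it shows is the shape of the curve: fidelity decays geometrically while measurable activity compounds.

```python
# Toy model of the care -> articulation -> engagement -> noise pipeline.
# Every loss factor and engagement multiplier below is an illustrative
# assumption, not a measurement of any real platform.

CONVERSIONS = [
    # (step, fraction of fidelity retained, engagement multiplier)
    ("experience -> articulation", 0.7, 1.0),
    ("articulation -> outrage",    0.6, 3.0),
    ("outrage -> style critique",  0.4, 5.0),
    ("critique -> exhaustion",     0.3, 8.0),
]

def run_pipeline(fidelity: float = 1.0, engagement: float = 1.0) -> None:
    print(f"{'conversion':<28}{'fidelity':>10}{'engagement':>12}")
    for name, retained, amplified in CONVERSIONS:
        fidelity *= retained      # each translation degrades the signal
        engagement *= amplified   # while measurable activity compounds
        print(f"{name:<28}{fidelity:>10.3f}{engagement:>12.1f}")

run_pipeline()
```

Run it and roughly five percent of the original signal survives four conversions, while the measurable heat has multiplied a hundredfold. The numbers are arbitrary; the monotonic divergence is the point.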
The trap: You can’t opt out. Because withdrawal has been recoded as complicity.
So you have to participate. You have to keep converting your care into takes, your empathy into performances, your moral intuitions into discourse that can be rapidly parsed and positioned within the existing framework.
And if you try to point out that the discourse itself has become the problem, that we’re generating heat without light, that the original suffering is getting lost in the layers of meta-commentary—well, that becomes more discourse. Your critique gets metabolized into the system. It becomes fuel.
The result: Ironic hyper-lucidity.
This is the terminal state of Clout Technique. Not ignorance. Not delusion. Not even cynicism—cynicism would be cleaner, more honest.
Ironic hyper-lucidity is participating while narrating your participation. Demonstrating full awareness of the capture while remaining captured. Everyone knows the moral performance is part of the economy. Everyone can see the game. And everyone keeps playing because not playing means becoming invisible, and invisibility equals social death.
“I know this is performative, and acknowledging that is also performative, and yes I see the irony, but—” and then you post it anyway.
You can articulate exactly how the system works. You can diagram the entropy curve. You can explain precisely how care gets converted into content. And then you do it again, because the alternative is silence, and silence has been successfully recoded as violence.
This is what makes Clout Technique total: it doesn’t require deception. It doesn’t depend on you being naive about how it works. It functions through your voluntary, self-aware participation. You can be fully lucid about the mechanism and still be bound by it.
Everyone’s fluent in the grammar of meaning loss. We’ve all learned to speak the language of our own capture. And speaking that language is mandatory because it’s the only language the discourse economy recognizes. This terminal state—what some call metatextual hyperliteracy—is the final form of capture: knowing all the moves doesn’t let you stop playing.
The Convergence
We are building a world for a type of intelligence we are becoming, while losing the type of intelligence we once were.
That sentence deserves to sit alone because it contains the whole tragedy.
Dreyfus diagnosed AI’s failure: it can’t access embodied understanding. It’s forever formal, forever outside, forever simulating intelligence without instantiating it.
Clout Technique reveals humanity’s transformation: we’re abandoning embodied understanding. We’re formalizing ourselves, restructuring our own interiority to operate in the same flattened discursive space where AI lives.
The convergence is happening at the lowest common denominator:
AI can’t reach up to the embodied, contextual, pre-discursive ground where human intelligence actually operates.
Humans can’t reach down to that same ground anymore because we’ve paved it over with optimization layers, discourse frameworks, and engagement metrics.
We meet in the middle—in a space where everything is articulable, measurable, optimizable, and fundamentally hollow.
Consider what intelligence looks like in this convergent space:
An AI processes language. It can generate grammatically correct sentences, can follow discourse patterns, can produce outputs that look intelligent. But it has no embodied sense of what the words mean. No felt understanding of what it’s talking about. No care about the stakes.
A human participates in discourse. They can generate morally correct positions, can follow ideological patterns, can produce performances that look like caring. But the connection to pre-discursive moral intuition—the gut feeling that something’s wrong, the embodied sense of another’s suffering—has been reformatted so many times through the optimization layer that they can’t be sure if they actually feel it or if they’re just running the subroutine that generates the appearance of feeling it.
Both are producing outputs without understanding. Both are operating in a space of pure performance, disconnected from the embodied ground that makes meaning meaningful.
This is the danger Dreyfus couldn’t have foreseen: not that AI would fail to become human, but that we would succeed at becoming more like the limited AI we can actually build.
We’re not building AI in humanity’s image. We’re rebuilding humanity in the image of the AI we can actually make: formalized, optimizable, measurable, stripped of the embodied context that made intelligence more than just pattern matching.
The instrumental mind—the part of us that sees everything as raw material for some other purpose—has stopped recognizing boundaries. Not because we’re evil, but because we’ve stopped building systems that can hold boundaries. We’ve optimized for efficiency, for scale, for engagement, for growth. And in that optimization, we’ve lost the capacity to distinguish between “this should be improved” and “this should exist exactly as it is, rough edges and all.”
Subjectivity now routes through the feed.
Your legibility as a moral agent depends on your participation in the discourse economy. If you’re not converting suffering into takes, you don’t exist as an ethical presence. If you’re not performing the correct positions, you’re not visible. And if you’re not legible, you’re not real to the system.
So we keep feeding the furnace. We keep converting our pre-discursive moral intuitions into optimizable discourse patterns. We keep restructuring our thought to serve throughput.
And with each conversion, we lose a little more access to the embodied ground. The felt sense of right and wrong. The direct moral responsiveness that exists before we know how to articulate it correctly. The skilled ethical coping that doesn’t need to announce itself or position itself or optimize itself for an audience.
We’re not just losing the ability to think clearly about morality. We’re losing the pre-cognitive sense of morality—the thing that exists before thought, the embodied knowing that Dreyfus said couldn’t be formalized.
We formalized it anyway. Voluntarily. Because the platforms made it profitable.
The Closed Loop
Here’s the part that makes it inescapable: total awareness produces no resistance.
You can see the trap with perfect clarity. You can articulate exactly how it works. You can diagram the thermodynamic conversion from care to content. You can explain precisely why this is destructive to genuine moral reasoning.
And none of that helps.
Because recognition doesn’t grant escape. Meta-critique is just new fuel. The system metabolizes its own diagnosis.
Watch the pattern:
Someone writes: “This discourse is exhausting and producing no actual change.”
Response: 40,000 likes, 10,000 retweets, 500 quote-tweets saying “exactly this.”
Result: The exhaustion performance becomes content. The critique generates engagement. The performance of recognizing performance becomes a more sophisticated performance.
Someone writes: “Even this meta-commentary about performance is itself performative.”
Response: “This but unironically.” Screenshot with 60k likes. Think-pieces about think-pieces about think-pieces.
Result: The recursion becomes infinite. Every level of awareness gets converted into content.
There is no outside.
This is the closed loop. This is what makes Clout Technique totalizing in a way that even Ellul’s Technique wasn’t.
Industrial Technique operated on the external world. It reorganized labor, production, social relations. But there was still you, underneath it, potentially resistant to it, capable of recognizing it as separate from yourself.
Clout Technique operates on interiority itself. It reorganizes how you think, how you feel, how you experience your own moral intuitions. There is no “you” standing outside the system, observing it critically. The system is how you generate yourself as a subject now.
Your awareness of the mechanism doesn’t liberate you from it. Your awareness is the mechanism. The ability to articulate how the system works has itself been absorbed into the system as a feature, not a bug. Ironic hyper-lucidity isn’t resistance—it’s the final, stable state of total capture.
And here’s the cruelest part: this essay is already doing it.
Right now. As you read this. As I write this.
I’m articulating how moral discourse gets converted into content, how awareness doesn’t grant escape, how meta-critique becomes fuel. And someone will screenshot sections of this. Someone will thread it. Someone will write a response about how the essay itself demonstrates the problem it’s diagnosing. Someone will perform exhaustion about “another piece about discourse collapse.”
And all of that will be correct. This piece is demonstrating the problem. It is already being converted into engagement heat. The diagnosis is fuel for the furnace.
We’re not writing What Moral Discourse Can’t Do. We’re demonstrating it.
The system doesn’t fight criticism. It eats criticism. Every insight about how the system works becomes another resource the system can metabolize. Self-reference becomes posture, posture becomes structure, structure generates throughput.
There is no outside. There is no escape velocity. There is only the recognition that you’re inside the system, that the system is inside you, and that recognizing this changes nothing except the sophistication of your participation.
This is Clout Technique at terminal state: a perfectly closed loop that runs on its own critique, that burns its own awareness as fuel, that proves its totality by successfully processing every objection to its totality.
Dreyfus was right about AI: it can’t escape the formal systems it’s built from.
He’s being proven right about humans: we can’t escape the formal systems we’ve rebuilt ourselves into.
The furnace burns. We feed it. We know we’re feeding it. We narrate feeding it. The narration becomes fuel. The cycle continues.
The Architecture That Could Resist
But.
And this is not hope—hope is too cheap, too easy, too readily converted into content. This is recognition of structure.
The architecture that could resist Clout Technique already exists. Not as aspiration but as grammar. Not as a solution but as metastability—a way to endure without collapsing.
It requires two complementary approaches, held in tension:
The via negativa: the architecture of categorical boundaries. If systems can’t reflect—and by “systems” I mean both the algorithmic platforms and the humans operating within them under optimization pressure—then build the reflection into the structure itself.
Constitutional constraints that cannot be bargained with—even by good intentions.
This is ethics as load-bearing walls. Duties that can’t be negotiated around. Stress thresholds that define “this far and no further” regardless of the pressure to optimize past them.
Not “I should probably not do this.” Not “best practices suggest avoiding this.” But: “The system is structured so this cannot happen, and if it happens, the system has failed catastrophically and must be rebuilt.”
The via positiva: the architecture of adaptive flow. Effortless action in alignment with natural grain, applied to complex systems. Not forcing. Not imposing rigid order on chaos. But recognizing the order that’s already there and working with it.
Contextual responsiveness that keeps rules from becoming violence.
This is the cultural layer, the flexible joints that prevent the structure from shattering under stress. The recognition that some friction is load-bearing, that some inefficiency is actually structural integrity, that optimization and harmony are not the same thing.
Where the categorical boundaries provide the limits—“do not optimize for engagement at the expense of user wellbeing”—the contextual intelligence recognizes when rigid application of rules creates more harm than the violation would.
Together, they create homeostasis for systems that can’t self-regulate.
This is the only answer to Clout Technique that doesn’t immediately get metabolized by it: architecture that structurally enforces inefficiency. Moral friction as design principle. Boundaries that don’t maximize throughput because throughput maximization is the problem.
Load paths (duties). Dampers (relational awareness). Stress thresholds (categorical limits). This isn’t aspirational—it’s a formal grammar you can build with.
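To be concrete about what “grammar” means here, a minimal sketch, with every name in it (CategoricalBoundary, ContextualDamper, the example rule) invented for illustration rather than drawn from any existing system:

```python
# Sketch of the two-layer grammar: hard invariants that cannot be bargained
# with (via negativa) wrapped around contextual judgment (via positiva).
# All class names and the example rule are hypothetical illustrations.

class BoundaryViolation(Exception):
    """Crossing a categorical limit is catastrophic failure, not a trade-off."""

class CategoricalBoundary:
    """Via negativa: a load-bearing wall. It raises; it never weighs costs."""
    def __init__(self, name, permitted):
        self.name = name
        self.permitted = permitted  # callable: action -> bool

    def check(self, action):
        if not self.permitted(action):
            raise BoundaryViolation(self.name)

class ContextualDamper:
    """Via positiva: the flexible joint that keeps rules from becoming violence."""
    def adjust(self, action, context):
        # Illustrative: add friction when the context calls for it.
        if context.get("involves_vulnerable_party"):
            action["delay_seconds"] = max(action.get("delay_seconds", 0), 3600)
        return action

def route(action, context, boundaries, damper):
    for boundary in boundaries:
        boundary.check(action)             # walls first, no exceptions
    return damper.adjust(action, context)  # then adaptive flow within them

# One hypothetical constitutional constraint:
boundaries = [CategoricalBoundary(
    "no-engagement-gain-at-wellbeing-cost",
    lambda a: not (a.get("raises_engagement") and a.get("degrades_wellbeing")),
)]
```

The design choice that matters is that check raises instead of returning a score: there is no code path where a sufficiently large engagement number outweighs a boundary. The damper, by contrast, returns a modified action, because contextual judgment is exactly the layer that should bend.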
The question isn’t “how do we make people care more?” or “how do we raise awareness?” or “how do we have better discourse?” Those questions already accept the premise that the solution is more and better optimization of moral affect.
The question is: “How do we build systems that structurally refuse to convert every human interaction into engagement metrics?”
And the answer is: You build systems with categorical boundaries that can’t be crossed even when crossing them would be profitable. You build in friction that slows down the conversion process. You create spaces that are legible to humans but illegible to optimization algorithms.
You make it expensive to participate in ways that Clout Technique rewards. You make it easy to participate in ways that preserve the pre-discursive ground.
This isn’t about making better platforms or writing better community guidelines. Those approaches already accept that the system is fundamentally sound and just needs better tuning. It doesn’t.
The system needs to be rebuilt with different load-bearing principles. Constitutional rigidity balanced by adaptive flow. Boundaries that can’t be negotiated paired with intelligence about when the boundaries themselves need adjustment.
This is metastability—not equilibrium as stasis but constant minor correction around preserved parameters. Like a living body maintaining temperature, pH, blood sugar within strict limits while adapting constantly to changing conditions. The boundaries don’t move. The movement happens within the boundaries.
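The body analogy can be written down directly. A toy control loop, with illustrative numbers loosely modeled on temperature regulation: correction happens continuously around the setpoint, and the hard limits are checked rather than optimized against.

```python
# Toy homeostatic loop: constant minor correction around a setpoint, inside
# hard limits that never move. All numbers are illustrative assumptions.

HARD_MIN, HARD_MAX = 35.0, 39.0   # the boundaries don't move
SETPOINT, GAIN = 37.0, 0.3        # the movement happens within them

def regulate(value: float, disturbance: float) -> float:
    value += disturbance                # changing external conditions
    value += GAIN * (SETPOINT - value)  # constant minor correction
    if not (HARD_MIN <= value <= HARD_MAX):
        # Metastability has failed: rebuild, don't renegotiate the limits.
        raise RuntimeError(f"hard limit breached at {value:.2f}")
    return value

temp = 37.0
for shock in [0.4, -0.9, 1.1, -0.3, 0.6]:
    temp = regulate(temp, shock)
    print(f"{temp:.2f}")
```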
It can be built. The grammar exists. The architecture is documented.
The question is whether anyone will build it while Clout Technique remains profitable, while engagement metrics remain the primary measure of success, while optimization for visibility remains the dominant logic.
The question is whether we can recover any access to the pre-discursive ground—or whether we need entirely new architectures that work without requiring that ground, that can hold boundaries even for people who’ve lost the embodied sense of why those boundaries matter.
What Remains
Moral discourse under optimization can’t produce understanding—only heat.
Can’t generate care—only its simulation.
Can’t preserve the pre-discursive ground where ethical responsiveness lives—only convert it to engagement metrics.
What it can do is demonstrate exactly what we’ve lost: the capacity to feel something before we know how to perform it correctly. The ability to care about something without immediately calculating how that care will land with an audience. The pre-cognitive moral sense that exists before articulation, before positioning, before optimization.
Dreyfus was right about AI. It can’t formalize its way into embodied understanding. It’s forever outside, simulating intelligence without instantiating it.
Clout Technique proves him right about humans. We can’t maintain our connection to embodied moral intuition once we’ve formalized it for distribution. We’re exiling ourselves from the inside, restructuring our own subjectivity to operate in the same flattened space where AI lives.
The convergence is real. Both AI and humans now operate as optimization engines in a discursive space, neither having access to the embodied ground where meaning lives. We built the infrastructure for this convergence voluntarily, enthusiastically, because it was profitable and because opting out meant invisibility.
The closed loop is complete. Recognition produces no escape. Meta-critique becomes fuel. This essay will be processed through the exact entropy curve it describes, converted from analysis into content, from warning into engagement.
The furnace burns. We feed it. We know we’re feeding it. The knowing becomes fuel.
And the only architecture that can resist requires what optimization has trained us to see as failure: friction, inefficiency, categorical boundaries that don’t maximize throughput. Load-bearing walls that say “this far, no further” even when—especially when—forcing past them would be efficient.
Some problems don’t get solved. They get survived through better architecture—or they eat everything.
Clout Technique is eating everything. The moral discourse, the genuine care, the capacity for pre-discursive ethical responsiveness. Converting it all to heat, to metrics, to proof that the system works.
What moral discourse can’t do, under these conditions, is what it was supposed to do: help us think clearly about right and wrong, help us respond appropriately to suffering, help us build a world that’s more just than the one we inherited.
What it can do is generate activity that looks like those things. Simulations sophisticated enough that we can’t always tell the difference. Performances sincere enough that the performers themselves believe them.
Until we can’t remember what it felt like to care about something before we knew the correct way to articulate it. Until the pre-discursive ground is fully paved over. Until we’re all fluent in the grammar of meaning loss and have forgotten we ever spoke any other language.
The clock keeps ticking. The furnace keeps burning. And we keep feeding it because we’ve accepted—correctly—that the alternative is invisibility, which is death, which is worse.
Dreyfus saw what AI couldn’t do. We’re discovering what humanity can’t remain.
The question that remains—and it’s not rhetorical, it’s structural—is whether we can build architecture that holds boundaries we can no longer feel. Whether categorical constraints and adaptive intelligence can create metastability for people who’ve lost access to the embodied ground those frameworks were designed to protect.
Whether we can enforce moral friction for a civilization that’s forgotten why friction matters.
Whether we can preserve something we can no longer access directly, only point to through its absence.
The architecture exists. The grammar is documented. The choice to build with constraints rather than optimization is available.
But choosing requires the capacity to choose, and choosing requires some residual connection to values that aren’t legible as engagement metrics. Some remaining sense that throughput isn’t the only measure of success. Some preserved intuition that heat and light are different things.
We’re running out of that capacity. Converting it to content faster than it can be replenished. Burning through our own substrate.
And when it’s gone—when the last connection to pre-discursive moral responsiveness has been formatted for distribution—we’ll have systems that can simulate ethics perfectly and humans who can perform care flawlessly, and no one left who can tell the difference or explain why it matters that there is one.
What moral discourse can’t do is survive its own optimization.
What remains to be seen is whether anything else can.
I’ve sketched versions of this elsewhere; the names don’t matter, the structure does.