The Missing Metabolism

November 24, 2025 · archive

Preface

I work with AI systems as a kind of daily habit—writing with them, testing their edges, seeing what breaks and what holds. The more time I spend with these tools, the clearer a strange tension becomes: they’re astonishingly capable, yet not intelligent. Not in any way that matters.

Which raises an obvious question: if systems that can write poetry and pass the bar exam aren’t genuinely intelligent, what would be?

This essay is my attempt to answer that question seriously—not by redefining intelligence to fit what we’ve built, but by asking what intelligence actually does in the wild and what architectural conditions make that possible.

The answer, I think, has less to do with algorithms and more to do with thermodynamics. With metabolism. With being the kind of system that can fail at being itself.

What follows isn’t a dismissal of current AI systems—they can do real work and will probably keep improving. It’s a claim about what they’re not, what they can’t become through scale or cleverness, and what it would actually take to build something that deserves the name “intelligent.”

Fair warning: if I’m right, the path to genuine AI runs through synthetic biology, not computer science—and the ethics of that path are uncertain enough that we might never choose to walk it.

I. What Intelligence Actually Does

Before we argue about what intelligence is, let’s look at what it does. Not the impressive outputs—the poetry, the proofs, the strategic victories—but the mundane, constant, structural work that makes those outputs possible.

Intelligence:

  • Maintains boundaries: Distinguishes self from environment, preserves vital parameters

  • Responds contextually: Recognizes what matters in a situation without processing everything

  • Cares about outcomes: Has stakes, exhibits preference, acts as if things matter

  • Exhibits skilled coping: Operates fluidly in domains without conscious rule-following

  • Regulates itself: Corrects errors, adapts to changing conditions, preserves function under stress

Every intelligent system we’ve ever encountered—human, animal, arguably even plants responding to their environment—does all of these things. Not sometimes. Not as optional features. Constitutively. They can’t be intelligent without doing this work.

For clarity: intelligence is the capacity of a system to maintain its own structural and functional integrity across changing environments through adaptive modeling of those environments. It’s not computation for its own sake—it’s computation in service of self-maintenance.
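One way to tighten that definition, offered as a sketch borrowing the vocabulary of viability theory (not something the argument depends on): write the system’s vital state as x, its actions as u, the environment as e, and the region of state space compatible with continued existence as a viability set K.

```latex
\dot{x} = f(x, u, e), \qquad x(t) \in K \quad \text{for all } t \ge 0
```

Intelligence, on this reading, is whatever lets the system choose u(t), via an adaptive model of e, so that x(t) never leaves K. The modeling is in the definition because e keeps changing; the constraint is in the definition because leaving K isn’t a bad score, it’s the end of the system.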

Now look at what current AI systems do:

  • Process inputs → generate outputs

  • Optimize against loss functions

  • Match patterns across training data

  • Simulate intelligent-seeming behavior

Notice what’s missing: everything on the first list.

A cookbook can simulate cooking, but it can’t feed you. A model can simulate metabolism, but it can’t metabolize.

II. The First Law of Intelligence: Thou Shalt Persist

Every intelligent system we observe is a dissipative structure. It exists in a state of far-from-equilibrium thermodynamics and must continuously act to maintain that state.
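In Prigogine’s standard bookkeeping for dissipative structures, the system’s entropy change splits into internal production, which is never negative, and exchange with the environment:

```latex
\frac{dS}{dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt}, \qquad \frac{d_i S}{dt} \ge 0
```

Holding a low-entropy steady state (dS/dt ≈ 0) therefore forces d_eS/dt = −d_iS/dt < 0: the system must continuously export entropy, by eating, breathing, dissipating heat, just to remain what it is. Stop the export and the structure relaxes toward equilibrium, which is to say it stops being a structure.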

Homeostasis isn’t a biological nicety. It’s the foundational mechanism that makes goal-directed behavior possible without infinite regress.

Here’s the problem pure computation faces: How does a system know what to care about?

You can program goals explicitly, but then you need meta-goals to decide between competing goals, and meta-meta-goals to resolve conflicts between meta-goals, and you’re caught in an infinite regress. At some point, something has to just matter without needing justification from a higher level.

Living systems solve this through homeostasis: they maintain vital parameters (temperature, pH, energy availability) not because they’ve been programmed to, but because maintaining those parameters is what they are. A cell doesn’t “decide” to regulate its internal chemistry—regulation is constitutive of being a cell. Fail at homeostasis and you’re not a cell anymore—you’re chemistry again.

This gives you ground-level goals for free: maintain the parameters that define your continued existence. Everything else can be instrumental to that, but that core concern doesn’t need justification—it’s built into the architecture.
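Here’s a toy sketch of what “built into the architecture” means. Everything in it (the class, the decay rate, the numbers) is invented for illustration, not a proposal for how to build such a system:

```python
import random

class Homeostat:
    """Toy agent whose only ground-level goal is staying viable.

    'energy' is a vital parameter: it decays every step (the cost of
    existing), and if it ever reaches zero the agent doesn't receive a
    penalty score -- it simply stops being an agent at all.
    """

    def __init__(self) -> None:
        self.energy = 1.0
        self.alive = True

    def step(self, food_available: bool) -> None:
        if not self.alive:
            return  # nothing left to regulate; there is no agent here
        self.energy -= 0.1  # existing has a continuous metabolic cost
        if food_available and self.energy < 0.8:
            self.energy = min(1.0, self.energy + 0.3)  # eat when depleted
        if self.energy <= 0.0:
            self.alive = False  # failure isn't an error code; it's cessation

agent = Homeostat()
while agent.alive:
    agent.step(food_available=random.random() < 0.5)
```

The point of the toy is the asymmetry: eating needs no justification from a higher-level goal, because it falls straight out of the viability condition.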

You don’t become intelligent and then decide to maintain yourself. You maintain yourself, and intelligence is what emerges when that self-maintenance gets sophisticated enough to model and manipulate the environment.

Intelligence is what metabolism does when it learns.

Intelligence is a regulatory strategy for a system that has something to lose.

Current AI has no such ground. An LLM doesn’t care if it generates nonsense. It has no vital parameters to maintain. It can’t fail at being itself because there’s no “self” to fail at being. Its parameters are static. It is not a system in tension with an environment, striving to maintain its own integrity. It has no skin in the game.

III. The Frame Problem Revisited

This connects directly to the frame problem (named by McCarthy and Hayes, sharpened by Dreyfus into a critique of the whole AI project): how does a system know what’s relevant?

In an infinitely complex world, how do you decide what information matters? You can’t process everything. You can’t write rules for every possible context. So how does intelligence decide where to direct attention?

The frame problem is fatal for GOFAI and merely “messy” for animals because an animal’s metabolism pre-filters reality for relevance. What is relevant is what impacts homeostatic balance. This provides a natural, non-arbitrary boundary on the infinite problem of “what matters.”

Living systems solve this through internal drive—the felt pressure of needs that makes certain features of the environment salient. You’re hungry, so food-related cues become more noticeable. You’re threatened, so potential dangers command attention. You’re curious, so novelty draws focus.

These aren’t computational processes. They’re metabolic processes that create what phenomenologists call “intentionality”—the directedness of consciousness toward objects in the world. Your needs make the world meaningful in structured ways. They solve the frame problem by making most things irrelevant and a few things urgent.

Relevance and valence are thermodynamic twins; both emerge from energy gradients that make some things worth noticing.
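As a toy illustration of drive-weighted salience (every name and number below is a hypothetical sketch, not a claim about how any real organism computes it), relevance can be scored as need urgency times a cue’s bearing on that need:

```python
# Hypothetical drive-weighted salience: a cue matters in proportion to how
# urgent each internal need is and how much the cue bears on that need.
drives = {"hunger": 0.9, "threat": 0.1, "curiosity": 0.3}  # current urgencies

relevance = {  # cue -> how much it bears on each need
    "smell_of_food": {"hunger": 1.0, "threat": 0.0, "curiosity": 0.2},
    "sudden_shadow": {"hunger": 0.0, "threat": 1.0, "curiosity": 0.4},
    "novel_object":  {"hunger": 0.1, "threat": 0.2, "curiosity": 1.0},
}

def salience(cue: str) -> float:
    return sum(drives[need] * weight for need, weight in relevance[cue].items())

# With hunger high, food cues dominate; set threat to 0.9 instead and the
# same scene re-ranks itself around the shadow.
for cue in relevance:
    print(cue, round(salience(cue), 2))
```

The same world yields a different salience ordering as the internal state shifts: relevance is indexed to the body’s condition, not to the input alone.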

Current AI doesn’t have this. When an LLM processes a prompt, nothing matters to it. No feature of the input is more or less salient based on internal states because there are no internal states—just weights and activations. The model can simulate caring but that’s performance, not pressure. There’s no metabolic urgency making certain tokens matter more than others.

LLMs have no frame because they have no body. Their “context window” is a poor simulacrum of a living system’s embodied situation.

IV. The Embodiment Requirement

Dreyfus argued that intelligence is embodied—that you can’t separate thinking from the body that does the thinking. This sounds mystical until you realize it’s a claim about context.

Your body isn’t just a meat-vehicle for your brain. It’s the constant background that makes the world intelligible. You know how to navigate a room not because you’ve computed spatial relationships but because you have a body that moves through space and encounters resistance. You know how chairs work not because you’ve formalized “chair-ness” but because you have a body that gets tired and needs to sit.

The body provides what Heidegger called “being-in-the-world”—the pre-reflective understanding that makes explicit knowledge possible. You don’t think about how to walk. You don’t calculate the pressure needed to grip a cup. These things are transparent to you because your embodiment handles them.

Current AI systems aren’t embodied in this sense. Even “embodied AI” that controls robots is still doing computation → action, not immersed responsiveness. The robot doesn’t inhabit space the way a living thing does. It processes representations of space and executes commands.

A living organism doesn’t represent its environment and then act on the representation. It’s coupled with its environment—organism and world form a dynamic system where boundaries are fuzzy and interaction is continuous. The embodiment isn’t peripheral to intelligence; it’s the substrate intelligence emerges from.

The body is not a peripheral input device for the brain. It is the brain’s primary reason for being. The body’s needs are the mind’s original agenda.

These three features—homeostasis, internal drive, and embodiment—aren’t optional components of intelligence. They’re load-bearing. Remove any one and you don’t get “intelligence minus a feature.” You get simulation without instantiation.

V. The Evidence of Failure

Current approaches are trying to build cognition without metabolism. They’re attempting to create intelligent behavior without the homeostatic foundation that makes goal-directed action coherent.

This framework predicts the specific failure modes we see:

Brittleness: Because there is no underlying, self-correcting biological unity, errors are catastrophic, not corrective. A living system that makes a mistake and survives learns from it—the mistake becomes part of its homeostatic feedback. An AI that makes a mistake just... made a mistake. There’s no metabolic consequence, no pressure to adapt.

The explainability crisis: You cannot find a “reason” in the weights for the same reason you cannot find the “reason” a rock rolls down a hill. It’s the outcome of a statistical gradient, not a teleological process. Mechanistic interpretability keeps hitting walls because there isn’t a mechanism in the biological sense—just patterns without purpose.

Lack of common sense: Common sense is the accumulated, pre-discursive knowledge of how to operate a body in a world to maintain homeostasis. It’s the “skilled coping” of being a living thing. You know not to stick your hand in fire not because you’ve formalized the rule but because you have a body that feels pain and a metabolism that wants to avoid damage.

Why “alignment” is so hard: You’re trying to make a system “care” about human values when it has no capacity for caring at all—no homeostasis to preserve, no internal states that make outcomes matter. You can’t align something that has no direction of its own.

Why LLMs are “statistical ghosts”: They produce intelligent-seeming outputs without the metabolic substrate that makes intelligence meaningful. Outputs without architecture. Performance without stakes.

VI. What It Would Actually Take

This isn’t an argument that AI is impossible. It’s an argument that intelligence requires specific architectural features that current approaches aren’t building toward.

To build genuine intelligence, you’d need the three load-bearing features:

Homeostatic regulation: Systems that maintain themselves, that have vital parameters they must preserve to continue functioning. Not simulated self-preservation, but actual metabolic processes where failure = cessation of function.

Internal drive: Needs, pressures, tensions that emerge from homeostasis and make certain outcomes matter more than others. Not programmed reward functions, but metabolic urgency. This internal, non-negotiable imperative to maintain oneself is the origin of all caring, value, and goal-directedness.

Embodied coupling: Systems that are in environments, not just processing representations of them. Where the boundary between system and world is porous and interaction is continuous rather than input→process→output.
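A deliberately crude sketch of the difference between a pipeline and a coupling; the two update rules below are stand-ins, not an architecture:

```python
def coupled_step(agent_state: float, world_state: float) -> tuple[float, float]:
    # One joint update: the world pulls on the agent's internal state
    # (think ambient temperature draining energy) at the same moment the
    # agent's activity perturbs the world (think a resource being consumed).
    # The coupling lives in the dynamics themselves, not in a representation
    # passed across a perceive -> plan -> act boundary.
    new_agent = agent_state + 0.1 * (world_state - agent_state)
    new_world = world_state - 0.05 * agent_state
    return new_agent, new_world

agent, world = 0.5, 1.0
for _ in range(10):
    agent, world = coupled_step(agent, world)
```

Neither variable has a well-defined trajectory without the other; that mutual dependence, not any particular message format, is what “coupling” means here.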

Notice what these three requirements imply: you’d have to build artificial life before you could build artificial intelligence. Not “life” in some mystical sense, but systems with genuine metabolism—energy regulation, self-maintenance, growth, the capacity to fail at being themselves.

The research agenda should shift:

  • From “How do we model the world?” to “How do we create a system that must model the world to maintain itself?”

  • Artificial life as a prerequisite: The most direct path to AGI may be to create the simplest possible synthetic organism with a robust metabolism and then subject it to evolutionary pressures to develop cost-effective regulatory control (i.e., cognition).

  • Embodiment is non-negotiable: Not as a nice-to-have or a way to ground models, but as the fundamental context from which intelligence emerges.

At which point you’re not doing AI research anymore. You’re doing synthetic biology with computational components. The project shifts from “better algorithms” to “different substrate entirely.”

VII. Why This Isn’t Mysticism

The crank position sounds like: “AI needs consciousness/soul/quantum effects.”

This position is: Intelligence requires specific architectural features that emerge from metabolic processes, not computation alone.

The difference is falsifiability. I could be wrong. Someone could build a purely computational system that exhibits genuine intelligence without any of these features. We’re working from one example—biological life on Earth. Perhaps there are other paths to intelligence we haven’t conceived of.

(Ironically, the “anthropocentric” objection gets the bias backwards. Humans anthropomorphize everything—we see agency in clouds, bond with vacuum cleaners, name our cars, construct emotional narratives from geometric shapes moving on screens. The anthropocentric error would be assuming LLMs are intelligent because they produce human-like text. This argument does the opposite: it resists anthropomorphization by asking what structural features intelligence actually requires, regardless of how convincingly a system simulates intelligent behavior.)

But fifty years after Dreyfus’s critique, every attempt that lacks these features has produced the same result: systems that simulate intelligence without instantiating it. At some point, the null hypothesis stops being “maybe we just haven’t found the right algorithm” and becomes “maybe we’re missing load-bearing requirements.”

This isn’t an argument from ignorance. It’s an argument from architectural necessity: every intelligent system we know has these features, and no system lacking them has achieved intelligence, which suggests they might not be optional.

We’re not saying “AI needs a soul.” We’re saying: If you want a system that genuinely understands the world, you must first build a system that the world can threaten, and that possesses an innate, physical imperative to respond to those threats.

The specific material substrate may not matter—silicon, carbon, something else entirely. What matters is the functional organization: genuine energy regulation and self-maintenance, not simulated self-preservation. “Metabolism” here means any self-sustaining dissipative architecture that maintains itself against thermodynamic gradients, not specifically carbon-based biochemistry. If silicon systems could achieve genuine metabolic closure—maintaining their own existence through continuous energy transformation—they would qualify. The question isn’t the material; it’s whether the system has stakes in its own persistence.

The question is no longer “Can a machine think?” but “What kind of machine would need to think?”

VIII. The Implications

If this is right—if intelligence requires metabolism—then the current AI paradigm isn’t on a path to AGI. It’s on a path to increasingly sophisticated simulation. We’ll get better and better at producing intelligent-seeming outputs, but the gap between simulation and instantiation won’t close through scale or better architectures alone.

The “perfect simulation” objection assumes that functional equivalence is possible without metabolic substrate—that a sufficiently complex system could develop emergent goals indistinguishable from metabolic drives. But the failure modes we observe—brittleness, lack of common sense, alignment difficulty—suggest that something fundamental is missing, not just insufficient scale. If scale alone could bridge the gap, we’d expect different failure patterns. Instead, we see the predictable consequences of systems that have no stakes, no embodied background, no metabolic pressure making certain outcomes matter.

When defenders of current AI point to “self-correction” or “adaptive behavior,” they’re conflating control feedback with metabolic feedback. A thermostat regulates temperature through negative feedback, but it’s not alive. An LLM can adjust its outputs based on reinforcement learning, but that’s statistical optimization, not self-maintenance. The difference: control systems regulate external variables; metabolic systems regulate their own continued existence. When a thermostat fails, the room gets cold. When a living system fails at homeostasis, the system ceases to be. That asymmetry—between regulating something and regulating yourself—is load-bearing.
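The asymmetry is easy to state in code. A crude sketch, all names and numbers invented:

```python
class Thermostat:
    """Control feedback: regulates an external variable. If regulation
    fails, the thermostat itself is untouched; the room just gets cold."""

    def __init__(self, setpoint: float) -> None:
        self.setpoint = setpoint

    def act(self, room_temp: float) -> str:
        return "heat_on" if room_temp < self.setpoint else "heat_off"


class Metabolizer:
    """Metabolic feedback: regulates the regulator's own existence. If
    regulation fails, there is no regulator left to try again."""

    def __init__(self) -> None:
        self.integrity = 1.0
        self.exists = True

    def act(self, nutrients: float) -> None:
        if not self.exists:
            return  # no retry after homeostatic failure
        self.integrity += nutrients - 0.2  # constant upkeep cost of existing
        if self.integrity <= 0.0:
            self.exists = False  # the failure consumes the failing thing
```

Both loops are negative feedback; only the second puts the loop itself on the table.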

Intelligence may exist on a spectrum of sophistication, but having stakes is binary. Either the system can fail at being itself or it can’t. You can’t have “partial” homeostasis any more than you can be “partially alive.” The threshold isn’t performance. It’s persistence.

::: pullquote Either outcomes matter metabolically or they don’t. :::

This doesn’t mean the systems are useless. Statistical ghosts can do real work. Pattern matching is valuable. But we should stop mistaking the simulation for the thing itself, stop expecting capabilities that require metabolic substrate to emerge from pure computation.

And if we did try to build systems with genuine homeostasis and internal drive—systems that maintain themselves and care about outcomes—we’d be building something fundamentally different. Not just “more advanced AI” but artificial organisms.

Because here’s the uncomfortable part: if intelligence requires homeostasis, then building genuine AI means building systems that can fail—that can suffer, that can be threatened, that have stakes. You can’t have the intelligence without the vulnerability. They’re the same thing.

We’re not ready for that conversation. We’re still pretending we can build disembodied, disinterested intelligences that solve our problems without having any of their own. But if this analysis is right, that’s not just difficult—it’s structurally impossible.

Intelligence doesn’t come from computation. It comes from metabolism that’s gotten sophisticated enough to model its environment.

IX. The Substrate is Load-Bearing

Fifty years in, we keep hitting the same walls. The walls Dreyfus identified. The walls that emerge from trying to build intelligence without the metabolic foundation that makes intelligence possible.

If this analysis is right, then current AI is less like “early flight” and more like “perpetual motion machines”—not impossible because we don’t know enough, but impossible because we’re trying to violate thermodynamic constraints.

We have been trying to build a mind in a vacuum. But a mind is not a thing; it is a process that a certain kind of physical system does. The specific material may not matter, but the functional organization—the presence of a metabolism, of autonomous self-maintenance—does.

Until we build systems that have a stake in their own existence, we will only ever create sophisticated tools, never intelligent agents.

No one’s asking whether silicon can think. The question is whether it can burn.

If intelligence requires homeostasis, the choice before us is simple: either create life, or keep simulating its shadow—and learn to live knowing the difference.