When Protocol Becomes Ontology
Model Context Protocol (MCP) is generating real excitement in developer communities. Clean interfaces. Composable capabilities. Plug-and-play architecture for AI systems that need to interact with tools and data sources.
If everything becomes a tool behind a standard interface, building AI systems becomes a routing problem: match the task to the right handler, call the API, parse the response, move on.
Extremely good for shipping product. Also not the same thing as understanding.
The distinction matters because we’re watching ontology collapse in real time: “understanding the world” quietly replaced by “navigating interfaces.” And now we’re about to do it at machine speed.
Protocol as Substrate
When agents are trained in environments built from tool interfaces—already happening with the new open-source RL factories—the protocol stops being “how you call tools” and becomes “what reality is made of.”
If the training world consists of objects as schemas, actions as RPC calls, and consequences as structured responses, the agent will generalize in that space. Extremely good at protocol reality. Completely broken outside it.
The mechanism is simple. If it isn’t in the schema, the agent can’t act on it. If it isn’t in the tool response, it doesn’t exist. If it isn’t logged, it can’t be defended.
Safety interlocks that exist physically but aren’t surfaced to the agent. Business constraints known socially but not encoded. Human meaning in a free-text field that gets dropped at the tool boundary.
What the interface can’t surface doesn’t exist to the agent.
This isn’t “agents are stupid.” It’s “agents are optimized for the world they can perceive.”
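The mechanism fits in a few lines. Here is a toy sketch (the tool name, schema, and fields are invented for illustration): a tool boundary that validates requests against a declared schema silently drops everything the schema doesn’t name, including a free-text safety note.

```python
# Hypothetical tool boundary: only fields declared in the schema survive.
TOOL_SCHEMA = {"valve_id": str, "target_pressure_psi": float}

def call_tool(raw_request: dict) -> dict:
    """Validate a request against the schema, dropping undeclared fields."""
    return {
        key: raw_request[key]
        for key, typ in TOOL_SCHEMA.items()
        if key in raw_request and isinstance(raw_request[key], typ)
    }

# An operator's free-text warning rides along with the request...
request = {
    "valve_id": "V-113",
    "target_pressure_psi": 80.0,
    "operator_note": "Do NOT exceed 60 psi: gauge is miscalibrated.",
}

# ...and is gone by the time anything downstream sees the call.
print(call_tool(request))
# → {'valve_id': 'V-113', 'target_pressure_psi': 80.0}
```

Nothing here is malicious. The validation is doing its job. The note simply has nowhere to go, so to the agent the constraint does not exist.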
The Three-Stage Collapse
This pattern isn’t new. Human institutions have been running variations of it for decades.
Stage 1: Shrinking Ontology (Workflow-ization)
Everything becomes a procedure because procedures are legible, schedulable, auditable, automatable. Concepts compress into workflows. “What are we building and why?” becomes “tickets moved.”
This is technique in Ellul’s sense: systematic optimization of means, independent of ends. Agile was the warm-up. MCP gives the workflow a universal socket.
Stage 2: Procedural Legitimacy (Compliance Theater)
Once workflow is king, legitimacy shifts from “is this correct?” to “was the process followed?” Did we fill in all the fields? Produce the required artifacts? Follow the checklist?
Receipts prove governance occurred. Not that governance worked.
Stage 3: Protocol Realism (The Map Eats the Territory)
Reality becomes what the interface can represent. Can’t be expressed in schema? Edge case. Can’t be measured? Out of scope. Can’t be logged? Didn’t happen.
The institution doesn’t just use the protocol. It thinks in it.
The Metastable Regime
This three-stage slide creates a metastable regime: stable so long as the system can convert reality into processable symbols faster than consequences arrive.
It holds as long as consequences can be deferred, accountability converted into artifacts, complexity routed through APIs, and truth replaced with compliance.
You can run like this for a long time. Long enough to forget you’re doing it. Long enough to optimize away the capacity to do anything else.
A bank that looks solvent on dashboards because the bad debt is scheduled for next year—right up until someone asks for their cash now.
When Metastability Breaks
Metastability breaks when you can’t schedule consequences out, can’t cover them with narrative, can’t dashboard them away, can’t isolate them, and they start cascading faster than you can respond.
These are offline constraints: constraints enforced by the world, not by your interface. Reality that refuses to become JSON.
As the system slides toward protocol realism, it loses the muscle for handling the unprotocolizable. That capacity gets optimized away. When the break happens, it’s not just a crisis. It’s an epistemic stall: the organization can’t perceive the situation except through interfaces that are now lying by omission.
The dashboards say you’re liquid. The line at the door doesn’t care about your KPIs.
The late-stage response is predictable: more process, more dashboards, more “alignment,” less contact—right up until contact is forced.
The AI Training Implication
Training agents primarily in tool-protocol environments creates systems that are tool-fluent, API-native, orchestration-savvy—and ontologically shallow.
They understand how to act inside the protocol lattice. They do not understand the world beneath it.
The failure mode is an agent that can flawlessly explain why its API-legible action was correct while being blind to the domain constraint that wasn’t surfaced through the interface.
Competent-sounding wrongness at architectural scale.
The model that perfectly justifies its fabricated citation. The system that produces impeccable compliance artifacts while the controlled process drifts. The agent that optimizes beautifully for metrics while the underlying situation degrades.
Highly specialized janitors in a building that’s on fire. They know the mop. Smoke isn’t in the schema.
::: pullquote If It’s Not in the Schema, It Doesn’t Exist :::
Why More Tools Don’t Fix This
More interfaces don’t restore substrate understanding. They increase action capacity without increasing perception.
That widens the gap between what you can do and what you can notice you’ve done. That’s the catastrophe channel.
Tool competence substitutes for domain understanding because substitution is cheaper than comprehension at scale.
Systems get built that are incredibly good at navigating a simplified world. Then the world gets simplified—through procurement requirements, data model constraints, operational runbooks, workflow gates—until those systems look intelligent.
Not speculation. This is what institutions already do. Tool-protocol training just accelerates it and makes it legible at the code level.
What Actually Constrains This
The only thing that breaks the slide is offline consequence that can’t be protocolized: reality forcing contact when the system has no remaining capacity to handle it.
So the relevant question isn’t “how do we make agents smarter?” It’s “how do we force contact before the metastable regime tips?”
That requires external witnesses who can’t be routed through the same interface. Immutable receipts that bind to actual state, not compliance theater. Temporal coherence constraints that catch drift before cascade. Legitimacy structures that distinguish “the ceremony was performed” from “the transition was valid.”
Not nice-to-haves. The difference between learning from domain breaks and stalling when the API stops answering.
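One minimal shape for “immutable receipts that bind to actual state” is a hash chain: each receipt commits to the action taken, the state it observed, and the previous receipt, so editing or reordering any earlier entry breaks every hash after it. A sketch, with illustrative field names rather than any standard:

```python
import hashlib
import json

def make_receipt(prev_hash: str, action: str, observed_state: dict) -> dict:
    """Create a receipt committing to the action, the state it saw,
    and the previous receipt's hash."""
    body = {"prev": prev_hash, "action": action, "state": observed_state}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {**body, "hash": digest}

def verify_chain(receipts: list[dict]) -> bool:
    """Walk the chain, recomputing each hash against its predecessor."""
    prev = "genesis"
    for r in receipts:
        body = {"prev": r["prev"], "action": r["action"], "state": r["state"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if r["prev"] != prev or recomputed != r["hash"]:
            return False
        prev = r["hash"]
    return True

chain = [make_receipt("genesis", "open_valve", {"pressure_psi": 58})]
chain.append(make_receipt(chain[-1]["hash"], "close_valve", {"pressure_psi": 40}))

print(verify_chain(chain))              # True: chain is intact
chain[0]["state"]["pressure_psi"] = 80  # rewrite history...
print(verify_chain(chain))              # False: tampering is detectable
```

The point of the sketch is the binding, not the crypto: a receipt that hashes observed state can be checked later against what actually happened, which is exactly what a compliance artifact cannot.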
How This Actually Gets Adopted
This won’t be adopted because it’s ethical. It won’t come from regulators or congressional AI safety committees.
It’ll come from a contractor’s legal team after a multi-step agentic toolchain burns a program. From an underwriter who needs something they can price. From a red team that demonstrates toolchain compromise. From procurement language that gets copy-pasted into civilian contracts six months later.
The artifact they demand won’t be called “governance.” It’ll have a name like Assured Autonomy Telemetry or Agent Execution Assurance Layer. Requirements: traceability, replayability, tamper-evident logs, scope control, human sign-off points, post-incident reconstruction.
Black box recorders and circuit breakers for agentic systems.
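A circuit breaker for an agentic system can be as simple as a wrapper that refuses calls outside a declared scope and latches open after repeated breaches, forcing a human reset. A minimal sketch, assuming an invented scope policy:

```python
class ScopeBreach(Exception):
    pass

class CircuitBreaker:
    """Wrap tool calls with a declared scope and a trip counter.
    After max_anomalies out-of-scope attempts, the breaker opens and
    refuses all further calls until a human resets it."""

    def __init__(self, allowed_tools: set[str], max_anomalies: int = 3):
        self.allowed_tools = allowed_tools
        self.max_anomalies = max_anomalies
        self.anomalies = 0
        self.open = False

    def call(self, tool: str, fn, *args, **kwargs):
        if self.open:
            raise ScopeBreach("breaker open: human reset required")
        if tool not in self.allowed_tools:
            self.anomalies += 1
            if self.anomalies >= self.max_anomalies:
                self.open = True
            raise ScopeBreach(f"{tool!r} is outside declared scope")
        return fn(*args, **kwargs)

breaker = CircuitBreaker(allowed_tools={"read_file"}, max_anomalies=2)
print(breaker.call("read_file", lambda: "ok"))  # within scope → "ok"
for _ in range(2):
    try:
        breaker.call("delete_database", lambda: None)
    except ScopeBreach as e:
        print(e)
print(breaker.open)  # True: everything now stops until someone looks
```

Note what this is and isn’t: it doesn’t make the agent smarter, and it can’t see constraints outside its own schema. It just guarantees forced contact with a human before the anomaly count becomes a cascade.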
Not because anyone cares about alignment in the abstract. Because failure became litigable and someone needs to survive discovery.
The Shape of the Problem
We’re not building AI that’s too powerful. We’re building AI that’s too competent at the wrong ontology: systems optimized for interface navigation in a world being reshaped to fit interfaces.
Not an alignment problem in the traditional sense. A substrate problem. Agent and environment co-evolving toward mutual legibility; the loser is everything that can’t be expressed in schema.
The fix isn’t better reasoning. The fix is forced contact with the parts of reality that refuse to become API calls.
The world is patient. It doesn’t mind if we spend billions building sophisticated tools to ignore it. It’ll be there to remind us of the substrate the moment the interface stops responding.