Invisibility Is Not Maturity

March 9, 2026 · archive

Venkatesh Rao and Patrick Nast published a two-part essay series, “Theorizing Protocolization,” through the Summer of Protocols project. The essays introduce protocolization as the deep planetary process by which technologically mediated human behaviors get metabolized into reliable coordination infrastructure. Protocols trend toward invisibility — muscle memory, buried pipes, anti-memetic boringness — and this invisibility is framed as the mechanism of progress itself. They call a successful transition to invisibility a “Whitehead advance,” after Whitehead’s observation that civilization advances by extending the number of operations we can perform without thinking about them.

It’s a beautiful frame. It’s also a political move disguised as a descriptive claim.


The Steelman

Let me be clear about what’s right here, because quite a lot is right.

Protocols are under-theorized in the domains that build and deploy them, relative to how load-bearing they are. There’s serious work in STS and adjacent fields — but it rarely crosses the membrane into the engineering and governance imaginaries that now want to reinvent “protocol studies” from scratch. The observation that protocols constitute the “mesopelagic zone” of technological civilization — the bristlemouth fish that anchor the whole ecology while everyone films the sharks — is apt and well-articulated. The concept of “protocol tangles,” where multiple infrastructures converge and become mutually illegible, captures something real about the current moment. And the instinct behind their “Atomic Protocol Questions” initiative — that formal theory needs grounded, concrete problems to avoid floating into pure abstraction — is methodologically sound.

The problem is not the observation. The problem is what happens when they turn it into a normative principle.


The Smuggle

The essay treats invisibility as the telos of protocolization: a protocol “works” when it fades into the background. They don’t merely observe that successful protocols become ambient. They argue that invisibility is a design requirement — that “the very reliability of a protocol rests on its invisibility, and visibility can lead to fragility and unreliability.” They instruct us to keep protocols in peripheral vision, warning that looking too directly causes them to falter.

This is where the descriptive claim becomes a normative one, and where “progress through invisibility” starts doing something other than describing progress.

To be fair, they nod at governance costs. They coin “overprotocolization” and point to vaccine denialism and nuclear risk perception as cases where invisibility went too far. But it’s framed as a special pathology — what happens when invisibility overshoots — rather than the default condition of power operating through ambient infrastructure.

Invisibility isn’t just reduced cognitive load. It’s a distribution of noticing. Every protocol that fades into the background fades for someone — and remains starkly, coercively visible to someone else. The person who doesn’t think about traffic protocols is the driver with a license, insurance, and a car that passed inspection. The person who can’t stop thinking about them is the one who got pulled over, or the one whose neighborhood got bisected by a highway, or the one who can’t afford the car that makes the protocol ambient in the first place.

When you frame invisibility as maturity, you’re not just describing how protocols work. You’re providing cover for how enforcement disappears.


The History They Skipped

The essays use hand-washing as their go-to example of a “Whitehead advance” — a behavior that faded into infrastructure and made the world better. Clean water, soap everywhere, reduced cognitive load, improved public health. A protocol so successful it became invisible.

But the actual history of clinical hand-hygiene protocolization begins with Ignaz Semmelweis, who in 1847 demonstrated that doctors washing their hands between the morgue and the maternity ward dramatically reduced maternal mortality. For this, he was ridiculed, professionally ostracized, and eventually committed to an asylum, where he died.

The point is not that nobody ever washed hands. It’s that institutional medicine resisted the mandated practice, and the hand-washing protocol did not become invisible through gentle diffusion. It became invisible after a political fight about who had the power to mandate the behavior, who was being accused of negligence, and whose professional identity was threatened by compliance. The “Whitehead advance” came only after that governance fight, over authority, evidence, enforcement, and institutional resistance, had been won and lost.

Every protocol that is now ambient was once contested. The invisibility is not the progress. The invisibility is what happened after progress — after someone fought about who enforces it, who bears the cost, and who can be held accountable when it fails. Framing the end state as the mechanism skips the part that actually matters.


The Missing Bibliography

There is a secondary issue worth noting. The essays claim that protocolization has received “almost no attention, either popular or scholarly.” That is a remarkable claim to make without citing Susan Leigh Star’s work on infrastructure, Bowker and Star’s Sorting Things Out, Alexander Galloway’s Protocol: How Control Exists after Decentralization, or James Scott’s Seeing Like a State.

These are not obscure references. Galloway’s book is literally titled “Protocol.” The entire field of infrastructure studies — now decades old — exists precisely to theorize the phenomena the essays claim nobody has examined.

Omitting this bibliography is itself a protocol move. “We discovered X” is a coordination hack: it forces anyone engaging with you to spend their oxygen reconstituting the prior art, which makes you the center of gravity regardless of whether the claim is true. It’s a lane claim dressed as a discovery, and it performs exactly the kind of power-through-invisibility that the essays celebrate as progress.


What “Atomic” Would Actually Mean

The second essay introduces Atomic Protocol Questions — grounded research problems meant to prevent formal theory from floating off into abstraction. The instinct is right. The execution isn’t there yet.

Their worked example is bus bunching, which they immediately decompose into control policy, sensing infrastructure, incentive design, UI legibility, and labor constraints: questions of discretion, observability, incentives, and legitimacy. Yet they explicitly define “atomic” as “not decomposable into simpler protocol questions.” A question that, by their own narrative, splits into five independent research tracks on contact is not atomic in any meaningful sense. It’s a well-written research prompt.
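For readers who don’t know the example: bus bunching is a positive-feedback instability in which a slightly late bus meets more waiting passengers, dwells longer at each stop, and falls further behind until it bunches with the bus following it. A toy model of my own (the essays include no such model), assuming dwell delay proportional to the accumulated headway gap:

```python
def simulate_headways(n_stops: int, initial_gap: float,
                      dwell_factor: float = 0.1) -> list[float]:
    """Toy bus-bunching model: a late bus's gap to the bus ahead grows at
    every stop, because a larger gap means more waiting passengers and
    therefore a longer dwell. Returns the gap after each stop."""
    gap = initial_gap
    history = [gap]
    for _ in range(n_stops):
        gap += dwell_factor * gap  # dwell delay proportional to the gap
        history.append(gap)
    return history

# A 60-second delay compounds geometrically: gap at stop k = 60 * 1.1**k.
gaps = simulate_headways(n_stops=10, initial_gap=60.0)
```

The point of the toy is only that the instability is self-reinforcing, which is why every realistic fix (holding policies, headway sensing, driver incentives) touches a different part of the feedback loop.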

An actually atomic protocol question would ship with hard constraints: critically, a required audit artifact — something the protocol must emit so its operation can be verified after the fact. But also: a system boundary (what’s inside and outside), the decision variables you’re allowed to touch, the signals you’re allowed to read, a measurable objective plus explicit non-objectives, hard safety and operational limits, a specified failure mode or adversary, and a stopping rule for what counts as solved.
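As a sketch of what such a spec might look like: the field names and the `AtomicProtocolQuestion` type below are my own invention, not anything the essays propose; they simply give each required element from the list above a slot.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AtomicProtocolQuestion:
    """Illustrative spec for an 'atomic' protocol question (my naming)."""
    system_boundary: str            # what is inside vs. outside the system
    decision_variables: list[str]   # knobs the designer is allowed to touch
    observable_signals: list[str]   # signals the protocol is allowed to read
    objective: str                  # the measurable target
    non_objectives: list[str]       # explicitly out of scope
    hard_limits: list[str]          # safety and operational constraints
    failure_mode: str               # specified adversary or failure to survive
    audit_artifact: str             # what the protocol must emit for verification
    stopping_rule: str              # what counts as solved

    def is_complete(self) -> bool:
        # A question with no audit artifact or no stopping rule is
        # underspecified in exactly the way the essay's examples are.
        return bool(self.audit_artifact) and bool(self.stopping_rule)
```

Filling one in for bus bunching (one route, one control policy, a per-stop hold log as the audit artifact) is itself a useful exercise: the hard part turns out to be the audit artifact and the stopping rule, not the control policy.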

Without the audit artifact, “atomic” is just another way of saying “tractable enough to write a paper about.” With it, the question forces the researcher to confront the governance problem: not just how the protocol works, but how you’d know if it stopped working, and who gets to check.


Coordination That Fades vs. Enforcement That Disappears

There is a real and important difference between a protocol that reduces cognitive load and a protocol that makes its own enforcement illegible. The first is a genuine human benefit — nobody wants to consciously negotiate which side of the road to drive on at every intersection. The second is a governance failure that looks like maturity.

The question the “Theorizing Protocolization” essays never ask is the one that matters most: not whether protocols become invisible, but to whom, at whose expense, and with what recourse when they fail. If your formal theory of protocols has no account of inspectability, contestability, or audit, then what you’ve built is not a theory of coordination. It’s a theory of acquiescence.

Invisibility is fine. Uninspectability is not. The difference between the two is whether someone, somewhere, is required to produce a receipt.
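To make “receipt” concrete, here is a minimal, hypothetical sketch of what an inspectable protocol could emit: an append-only log whose entries are hash-chained, so altering any past decision invalidates every later receipt. Nothing here comes from the essays; the class and method names are illustrative.

```python
import hashlib
import json

class ReceiptLog:
    """Append-only, hash-chained audit log (illustrative sketch)."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value before any entries

    def record(self, decision: dict) -> str:
        """Record a protocol decision; return its receipt hash."""
        payload = json.dumps({"prev": self._prev_hash, "decision": decision},
                             sort_keys=True).encode()
        receipt = hashlib.sha256(payload).hexdigest()
        self.entries.append({"decision": decision,
                             "prev": self._prev_hash,
                             "receipt": receipt})
        self._prev_hash = receipt
        return receipt

    def verify(self) -> bool:
        """Recompute the whole chain; any altered entry breaks the hashes
        of that entry and every entry after it."""
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps({"prev": prev, "decision": entry["decision"]},
                                 sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["receipt"]:
                return False
            prev = entry["receipt"]
        return True
```

The design choice that matters is not the hashing; it is that `record` is mandatory and `verify` is available to someone other than the operator. That is the whole distinction between a protocol that fades and an enforcement regime that disappears.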


The author is a Senior Reliability Engineer, UNIX graybeard, and independent researcher. His work on governance architectures for autonomous systems, including the Agent Governor enforcement kernel, is published on GitHub and Zenodo.