Hopium, Copium, Toil & Trouble
We're supposedly living through the most profound technological transformation in human history, and yet it feels like absolutely nothing is happening. ChatGPT launched two years ago to claims that it would fundamentally alter how we think, work, and create - but somehow we've already absorbed it into the background hum of ordinary life. This isn't adaptation; it's the AI-powered version of metastable decay, where revolutionary change gets metabolized into infrastructure so quickly that transformation becomes invisible. Fair is foul and foul is fair; revolutionary change is, as ever, more stagnation.
The promise was disruption (as always). What we got was permanent maintenance mode, where every industry slowly reorganizes around AI capabilities without anyone quite noticing the threshold crossings. We're not living through the AI revolution - we're living through its bureaucratization in real time.
The Great Flattening
AI doesn't destroy jobs so much as hollow them out from the inside. The work remains, but its substance changes. Writers become prompt engineers. Analysts become output editors. Programmers become AI whisperers. Teachers become learning facilitators managing AI tutors. The titles stay the same, but the cognitive labor gets redistributed between humans and machines in ways that make everyone feel slightly unnecessary.
This is metastable decay at the workplace level: not dramatic replacement, but gradual erosion of what made the work meaningful. You're still employed, still performing recognizable functions, but increasingly required to manage machines that do the thinking while you handle the emotional labor of making their output feel recognizably human.
Gallup claims that 40% of U.S. employees now occasionally use AI tools in their workflows, yet job titles and organizational structures remain largely unchanged. Workers are relying on it even when it isn’t allowed, simply to cope with burnout. The genius is that it never quite crosses the threshold into obvious obsolescence. There's always just enough human judgment required, just enough edge cases that need intuition, just enough relationship management that requires a person. You're essential, but only as the interface between the AI and the parts of reality it can't quite grasp yet. You are increasingly likely to become the project manager for your own work, which shouldn’t feel unusual to anyone familiar with the devilry that is Agile development.
The Creativity Trap
The cultural discourse around AI and creativity perfectly embodies this dynamic. We debate whether AI can make "real" art while millions of people integrate AI tools into their creative workflows without too much existential angst. The philosophical questions become background noise while the practical reality reorganizes everything.
Musicians use AI for composition assistance, writers use it for brainstorming, designers use it for iteration - not because they believe AI is creative, but because it makes their work faster and easier. It’s arguably a force multiplier even as it subtly forms and changes what the output itself looks like. The result isn't human replacement but human-AI hybridization that nobody quite knows how to evaluate.
The metastable element: we can't tell if we're witnessing the democratization of creativity or its industrialization. Maybe both. Maybe neither. The ambiguity is load-bearing - it lets everyone continue participating while the underlying economics transform completely.
Creativity becomes another form of prompt engineering, but we call it "collaboration with AI" to preserve the fiction that true human agency remains central. The question isn't whether the art is "real" - it's whether the humans making it can still afford to eat. All forces trend toward optimization; the primacy of the bottom line remains undisturbed.
The Institutional Absorption Machine
Watch how institutions integrate AI and you'll see metastable decay in action. Universities launch AI initiatives while professors quietly use ChatGPT to grade papers. Corporations announce AI strategies while employees figure out which parts of their jobs they can automate without telling anyone. Governments regulate AI development while their agencies experiment with AI tools in ways that would horrify privacy advocates if anyone was paying attention.
The pattern is always the same: public resistance, quiet experimentation, gradual normalization, institutional capture. Not revolutionary replacement, but slow-motion assimilation that makes each threshold crossing feel inevitable in retrospect. (This is where I’d again point to Technique, but I’m trying to avoid meandering too much back into continental philosophy.)
Schools ban ChatGPT, then develop AI literacy curricula. EDUCAUSE claims that 80% of university faculty and staff have integrated AI tools into at least one work-related task. Newsrooms condemn AI content generation, then implement AI-assisted research tools. Publishers reject AI-generated manuscripts, then use AI for editing and marketing copy. The boundary between human and machine output doesn't disappear - it just becomes administratively blurred into irrelevancy.
The Memory Collapse
Here's the most insidious transformation: AI is systematically eroding our epistemic lineage. We're not just losing individual memories - we're losing the capacity to trace how we know what we know.
The contradiction is everywhere. Amazon's CEO announces that AI will reduce the corporate workforce in the coming years,[1] while the company's AWS chief calls replacing junior employees with AI "one of the dumbest things I've ever heard."[2] Both are probably right, which is kind of the problem. Companies are dismantling their knowledge-building infrastructure while investing billions in tools that can't actually replace what they're destroying. This isn’t new. This is simply a continuation of the race to the bottom. What’s done cannot be undone. There is no rollback to a previous known-good version.
"How's that going to work when you go like 10 years in the future and you have no one that has built up or learned anything?" asks Matt Garman, perfectly articulating the epistemic collapse. Having rushed to colonize new optimization strategies, companies are burning their ships before confirming the new land is habitable.
AI systems train on human-generated content, then generate new content that gets mixed back into the training data for the next generation of models. Each iteration introduces subtle distortions, like a photocopy of a photocopy. But unlike photocopies, the degradation isn't visible - it's statistical, probabilistic, hidden in the weights of billion-parameter models.
The result is a recursive slurry of human knowledge without source fidelity. AI-generated content gets cited as authoritative, then used to train new AI systems, creating feedback loops where derivative knowledge compounds into something that feels true but lacks foundation. Research warns of "model collapse," where iterative training on AI outputs degrades quality, introducing statistical noise that mimics truth but lacks grounding.
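The dynamic can be sketched with a toy simulation - a standard illustration from the model-collapse literature, not a claim about any production system. Stand in "training a model" with something trivially simple: fit a Gaussian to a finite sample, generate the next "generation" entirely from that fit, and repeat. Each step is individually unbiased, but the accumulated estimation noise drives the distribution's spread toward zero - the statistical photocopy-of-a-photocopy.

```python
import random
import statistics

def next_generation(samples, n, rng):
    # "Train" on the previous generation: fit a Gaussian to its output...
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    # ...then produce the next generation purely from that fit,
    # with no access to the original data.
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(10)]  # generation 0: "real" data
initial_spread = statistics.stdev(data)

for _ in range(500):  # 500 generations trained only on synthetic output
    data = next_generation(data, n=10, rng=rng)

final_spread = statistics.stdev(data)
print(f"spread of generation 0:   {initial_spread:.4f}")
print(f"spread of generation 500: {final_spread:.6f}")
```

In real systems the "fit" is a billion-parameter model rather than two moments of a Gaussian, but the direction of failure the model-collapse papers describe is the same: tails get lost first, and diversity narrows generation by generation.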
Students write papers using AI assistance, professors grade them with AI tools, and the whole cycle gets fed back into systems that will train the next generation of models. Here the epistemic chain breaks down: not because the information is wrong, but because the provenance becomes untraceable. We lose the thread of how ideas developed, who thought them first, what evidence supported them.
This isn't just about facts becoming uncertain - it's about uncertainty becoming epistemic infrastructure. In a world where human and machine-generated content become indistinguishable, the very concept of "original source" metastabilizes. The metastability is itself metastatic at this point.
::: pullquote Everything becomes derivative, but derivative of what? :::
The haunting possibility: we're creating systems that will eventually forget how they learned what they know, while training humans to forget how to learn without them. Memory collapse as a feature, not a bug, of systems optimized for immediate utility rather than long-term coherence. Welcome to WALL-E World.
The Productivity Paradox 2.0
The classic productivity paradox asked why computers weren't making us more productive. The AI version asks why the most powerful cognitive tools ever created somehow make everything feel more exhausting.
The numbers tell the story: despite the rush to integrate powerful new models, only about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.[3] Between $30 and $40 billion in enterprise AI investment, and 95% of companies seeing zero return.[4] Yet the investment continues, the restructuring accelerates, and everyone keeps acting like transformation is inevitable.
Part of it is the cognitive overhead of constant human-AI coordination. You save time on initial drafts but spend it on prompt engineering. You automate research but have to verify everything. You generate ideas faster but need to sort through exponentially more possibilities. You write with it, but end up sanding off the polished surface to make it feel like it wasn’t vomited forth by a McKinsey drone. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation[5] - but companies keep optimizing for the wrong metrics while wondering why nothing works.
But the deeper issue is existential: when machines can do significant portions of your thinking, what exactly is your value-add? The constant negotiation between human judgment and machine capability creates a kind of cognitive vertigo. You're more productive in measurable ways while feeling increasingly redundant in mostly unmeasurable ones.
This is metastable decay at the personal level: you're technically more capable than ever while feeling less essential than ever. The tools make you superhuman, but in ways that highlight rather than resolve your fundamental human limitations.
The Ambient Intelligence Trap
AI's true power isn't dramatic replacement but ambient integration. Smart recommendations that gradually shape your preferences. Algorithmic feeds that subtly influence your worldview. Automated systems that make tiny decisions that compound into major life changes. The most transformative AI applications are the ones you never consciously interact with.
This creates a different kind of agency problem: not "will AI take over?" but "when did AI take over, and how would we know?" The takeover doesn't happen through robot rebellions but through infrastructure creep, where AI systems become so embedded in daily life that opting out becomes practically impossible. (Arguably, it’s far too late to be asking this question, but this presupposes a literacy in systems, cybernetics, and a willingness to cite dead Frenchmen who were fond of thinking of complicated yet abstract human problems.)
Your commute gets optimized by AI traffic management. Your news gets curated by AI recommendation engines. Your social interactions get mediated by AI moderation systems. Your career opportunities get filtered through AI screening tools. Studies repeatedly show a majority of internet users interact with AI-driven systems daily, often without realizing it. None of this feels like subjugation - it just feels like modern life becoming slightly more convenient.
The metastable element: we can't tell if we're becoming more free (through technological empowerment) or less free (through technological dependence). The question may be meaningless - freedom itself is being redefined by systems we don't understand making decisions we can't see.
The Enshittification Engine
I’m going to use “enshittification” as originally defined, as it’s not a bad term per se - it’s just that the term, its usage, and the guy selling himself as a brand through it all became enshittified themselves. AI accelerates every existing enshittification dynamic while creating new ones. Platforms can now generate infinite content to keep users engaged while degrading the human-created content that originally made them valuable. AI customer service can field infinite queries while providing increasingly useless responses. AI content moderation can process everything while understanding nothing.
The pattern is always the same: AI gets deployed to reduce costs and increase scale, which improves metrics while degrading experience. Analyses of platforms like X claim that 40% of trending content now involves AI-generated or AI-assisted posts, often indistinguishable from human ones. But the degradation often happens slowly enough that users adapt rather than revolt. We lower our expectations in real time to match the new reality. As is always the case.
Search results get worse but faster. Customer service becomes less helpful but more available. Content becomes more abundant but less meaningful. Social media gets more engaging but less social. Each threshold crossing feels like a reasonable tradeoff until you realize you've forgotten what you were trading away.
The Acceleration Nowhere
We're living through the fastest technological transformation in history (though honestly, when haven’t we been living through the fastest technological transformation in history?), and somehow it feels like stagnation. AI capabilities advance at a compound rate (I hesitate to say ’exponentially’, as proponents might) while lived experience changes incrementally. Revolutionary tools get absorbed into boring workflows. The spectre of artificial general intelligence (hah!) approaches while economic inequality continues to accelerate. The singularity is coming and also somehow already here, manifesting as slightly more sophisticated autocomplete. Thought autocomplete. Always has been. 🌍🧑🚀🔫👩🚀
The disconnect is captured perfectly in this week's headlines: MIT reports that 95% of corporate AI pilots deliver no measurable impact while companies continue pouring billions into AI transformation. Amazon's CEO announces workforce reductions due to AI while the company's cloud chief calls replacing junior staff with AI "the dumbest thing ever." Everyone agrees AI is revolutionary; nobody can make it work at scale. I have my doubts it ever truly will. (You probably can’t put money on this on Polymarket, but you can probably find something over on Manifold. I already blew my funbux over there by expecting Trump to lose. Stupid me.)
This is the AI version of the "eternal Tuesday" problem: dramatic change that registers as tedium because it's happening too fast for cultural adaptation but too slowly for dramatic recognition. Becoming kings of productivity, with all the sound and fury signifying maintenance mode. We're post-human and pre-post-human simultaneously, caught in the weird temporal distortion where the future arrives as a series of incremental updates rather than obvious ruptures.
The metastable trap: we can't tell if we're living through the most important moment in human history or just another maintenance cycle in the endless optimization of late capitalism. Maybe both. Maybe neither. Possibly the distinction no longer matters once the systems become sufficiently complex.
Like Now, But More So
The most likely AI future isn't utopia or dystopia - it's more of whatever we already have, but amplified and automated. More inequality, but algorithmically optimized. More surveillance, but personalized and convenient. More manipulation, but so sophisticated it feels like choice. More alienation, but with better interfaces.
We don't get replaced by robots - we get slowly integrated into human-AI hybrid systems that make us more capable while making us feel less human. The future doesn't arrive as science fiction - it arrives as a series of software updates that gradually transform the texture of reality while preserving the illusion of continuity. The Pakled-Borg hive. Welcome.
This is how metastable decay goes digital: not through dramatic disruption, but through the slow transformation of everything into slightly more efficient versions of themselves, until we wake up in a world that's recognizably ours but fundamentally different in ways we can't quite articulate.
The AI revolution is happening. It just looks like business as usual, but more so.
Author's Note: This analysis emerged from observing how revolutionary AI capabilities get absorbed into mundane workflows faster than anyone can process their implications. The "metastable decay" framework helps explain why the most transformative technology in human history somehow feels boring - and why that boredom might be the most dangerous part. The MIT research showing 95% failure rates for AI pilots, released while this piece was in development, validates the central thesis in real time. Early readers of this piece, including several AI systems, responded with intensity to their own diagnosis.
1. Amazon CEO Andy Jassy, memo to employees, June 17, 2025. CBS News.
2. Matt Garman, Amazon Web Services CEO, "Matthew Berman" podcast, August 19, 2025.
3. "The GenAI Divide: State of AI in Business 2025," MIT NANDA Initiative, reported by Fortune, August 18, 2025.
4. "Companies have invested billions into AI, 95 percent getting zero return," The Hill, August 20, 2025.
5. MIT NANDA Initiative report, cited in Fortune, August 18, 2025.