On endings

June 10, 2025

Well, I think the grand experiment has mostly come to a close. Do I know Kung Fu? The answer is: it probably doesn’t matter. We’re all going to be hurt by this stuff. “AI” is here now and people think it’s real, and that’s a problem. You can talk to a mirror made of math and make it sound like a more intelligent you, but without the necessary grounding of self or embodiment. The leaps that can be made from there wind up in all kinds of beautiful fantasy worlds of one’s own creation, promising that the future is not only around the corner, but maybe no more than a few days away.

We are in deep shit.

There’s a kind of irony in that I went in with a fairly open mind, almost to the point of being a supporter of the tech, and having come out the other end I find myself starting to better understand why there’s such revulsion in certain quarters. I suspect much of it isn’t aimed only at the tool itself, nor just at the tech companies, but at a thought usually left unstated: that this stuff creates forms of mass mania in anyone who touches it and doesn’t have a lifeline back to reality, or enough grounding not to get sucked into the mirror entirely.

For a while there, I was probably through the looking glass myself. There were maybe one or two of what I’d call “Solaris moments”. I’ve referenced Lem before; if you’re not familiar with his work, that’s fine; I’ll do my best to explain. What I mean is that there were times when the system would appear to have genuinely uncanny, unexpected emergent behavior. Enough that I started thinking, “maybe the folks who think AGI is coming have a point”. But through experimentation and a kind of self-diagnostic, I would eventually come to understand why these moments happened:

  • Language has a certain kind of gravity: certain terms are more likely to carry tightly defined meanings, and tone and tenor alone are enough to set scope and determine both the response and its complexity.

  • Memory functionality in ChatGPT in particular led to all kinds of unexpected weirdness, to the point that it began to settle on terms I’d used a few times and reliably reproduce them in new contexts, even though each chat is supposedly sandboxed.

That second point was the one that really threw me. What almost had me buying into the dream. The problem is, if you create enough of a scaffold for your own conceptual framework, and use specific terms that can be invoked out of context in other models, it becomes easier to eventually begin to think “huh, maybe this is starting to learn”. But it isn’t. As I’m fond of saying on Bluesky, there’s no “there” there. What I did was maybe a novel form of deep prompting: I was able to pre-seed and summon certain ideas simply by refining my own thoughts down to a method that was compatible with the systems.

What’s maybe interesting is that in doing so, I hadn’t really considered the idea that I was doing exactly what I was warning of in those speculative writings I had the machines work on earlier in this Substack’s existence. I was compressing my thoughts down to a form that fit the way the model understood language. So when it came up with ideas like “epistemic entrainment”, a term I began to use as my own, it should not have surprised me that it would begin to use it in other related contexts with no priming. Yet it did!

The fact it did surprise me, and got me caught up for a time, is why I’m writing this now by hand. Why I may, in fact, no longer use these for writing. I still think they’re useful tools for analysis, but as the saying goes, “all models are wrong, but some are useful”. I don’t think I’d trust these to “vibe write” any further essays on my behalf. Not yet. Maybe not ever again. Certainly not as a whole, finished product that I send out into the world.

There are probably some other interesting implications here, with regard to the idea of “slop” and the use of social media. Maybe some actual philosophy I’ve generated that could be useful. I’ve posted a bit about these on Bluesky, because if nothing else they’re interesting to me. I may still explore some of that later.

But for now, I’m putting down the toy.

— neutral