Once more, dear friends
This might be one of the last times I let the model write something fully, and I think this one only truly works in a way I’m happy with because I wrote a massive thread over on Bluesky, which I then fed into it as a framework. I might pivot here after this. I might stop writing here after this. Who knows what the future holds.
We Taught a Mirror to Speak and Called It a Mind
AI-assisted search suffers from the classic "we'll deploy the beta and let usage smooth out the rough edges" problem, run amok. It's a category problem—the issue can't be solved through training alone. Believing otherwise is an article of faith backed by monetary incentives.
The dominant framing is wrong. The labs think (or at least market the idea) that they've built a "search assistant," when what they've really built is a pattern completion engine with no internal epistemic compass.
::: pullquote Epistemic: of or relating to knowledge; cognitive. :::
LLMs don’t know things. They don’t believe things. They encode language patterns—not facts, not justifications.
In human terms, they do not “code switch.” They do not have a “factual persona.”
These things aren’t rank bad actors. But they will never be able to do any of the following:
- They cannot recognize when input is out of scope.
- They can generate confident prose, but can’t admit uncertainty.
- They can’t say “I don’t know.”
::: pullquote "It’s a language model, not a brain model" is the clearest encapsulation. A ventriloquist’s dummy isn’t alive just because it says your name. :::
There’s no grounding. LLMs don’t have a world model, just a map of how language tends to behave when humans talk about the world.
They have no beliefs: there’s no persistent internal state that holds a belief across time, just statistical echoes.
There’s no intentionality: any appearance of goal-directed behavior is an illusion formed by autocomplete-plus-context. There’s no "there" there. Just whatever context you set up.
Everyone wants it to be a brain model because that’s the fantasy. That’s the sci-fi. That’s the product-market fit. So you get Google spinning its search mistakes, pushing memory organs and “personalized” oracles, thinking it’s closing the loop.
In reality, they just taught hallucinations to mime a smile.
There’s too much money riding on this for the people developing it to admit the shortcomings in public view, so we’re locked in: many of the most capable orgs know this and are attempting to congeal their domain in subtle ways you’re not seeing.
That’s the real trick: this is an organ of narrative, packaged just convincingly enough to justify massive contracts, investor faith, and content regurgitation dressed up as novelty (which some frontier labs will probably anthropomorphize later, even if they haven’t yet).
The impulse itself gets a pass. People anthropomorphized rocks back in the 70s, as a bit. This is probably why there’s a Fox Mulder-esque “I want to believe” hanging around this in both of the current camps.
We’re deep in the pet rock → prompt god continuum.
The anthropomorphizing impulse is the oldest trick in the cultural book. LLMs didn’t invent it. They just gave it better dialog.
It is always going to need a human in the driver’s seat until it has intentionality, and the problem is, not even “agentic” work will have that.
::: pullquote Agency isn’t intention. Giving a model a to-do list with tools doesn’t grant it will; it just means it can loop through a plan. :::
You can fake agency with scaffolding. You cannot fake intentionality: it’s either there or it isn’t.
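To make the scaffolding point concrete, here’s a minimal sketch of what an “agentic” loop usually amounts to under the hood: an ordinary program that walks a to-do list, prompts a model, and routes tool output back in as observations. Every name here (`call_model`, `TOOLS`, `run_agent`) is a hypothetical stand-in rather than any real vendor’s API; the point is that the will lives in the loop a human wrote, not in the model it calls.

```python
# Hypothetical sketch of "agency via scaffolding": a plain loop that walks a
# to-do list, asks a model what to do next, and feeds tool results back in.
# Nothing here is a real API; call_model is a hard-coded stub.

TOOLS = {
    "search": lambda query: f"[stub search results for {query!r}]",
    "note": lambda text: f"[noted: {text}]",
}

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call. A real system would send the prompt to a
    model; this stub emits one tool call, then declares itself done."""
    if "Observation:" not in prompt:
        return "search: " + prompt.splitlines()[0].removeprefix("Task: ")
    return "DONE"

def run_agent(todo: list[str]) -> list[str]:
    transcript = []
    for task in todo:                        # the "plan" is just a Python list
        prompt = f"Task: {task}"
        action = call_model(prompt)          # pattern completion, nothing more
        while action != "DONE":
            tool, _, arg = action.partition(": ")
            observation = TOOLS.get(tool, lambda a: "[unknown tool]")(arg)
            prompt += f"\nObservation: {observation}"
            action = call_model(prompt)
        transcript.append(f"{task}: finished")
    return transcript

print(run_agent(["find the capital of France"]))
```

Nothing in that loop knows or cares why the tasks matter; the prioritization, the relevance judgments, and the stopping criteria were all decided by whoever wrote it.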
Without intentionality:
- It can’t prioritize meaningfully.
- It can’t judge relevance or importance beyond pattern frequency.
- It can’t care, which is the root of discernment.
::: pullquote This is why human-in-the-loop isn’t just a safety net: it’s an epistemic requirement. :::
The real kicker is that this is the classic “automation pulls the ladder of expertise up behind itself” pattern, on steroids.
The thing is, it’s not even malicious: it’s structural.
When automation becomes the interface, expertise atrophies not because it’s obsolete, but because it’s made invisible.
We’re starting to wear out the practical point, at least in terms people could understand, so I’m going to put a pin in it here. We can definitely go into how this is the same collapse pattern as automation replacing labor, just accelerated, abstracted, and obscured in fun new ways.
I’ve been cooking my brain trying to fully comprehend the landscape, because the dominant factions in the discourse do not seem to understand the lay of the land, or rather, they misrepresent the stakes as a means to win rhetorically on the online battlefield.
We deserve better.