I Know Kung Fu
Preface
I’ve been using LLMs to help shape my thoughts. This very essay was written with one. That’s not an accident, and it’s not a gimmick—it’s part of the point.
I’m not pretending this work is “pure.” I’m exploring what it means when thought becomes porous, when cognition gets scaffolded by simulation.
I’ve wrestled with whether to state this plainly. Doing so makes it legible in a way that risks undermining the work, reducing it to novelty or contradiction. But the contradiction is the point. I’m not standing outside the system critiquing it. I’m inside it, trying to find the edges. As ever.
So: yes. This was written with help. And that help is the crisis.
Do You Really Know Kung Fu?
Everyone remembers the iconic moment from The Matrix:
Neo: "I know kung fu."
Morpheus: "Show me."
Pop culture treats this exchange as wish fulfillment—instant knowledge download, zero effort. But there's a deeper implication here, quietly reframing our understanding of what it means to truly "know."
::: pullquote We rarely recognize how much the medium reshapes the mind. As McLuhan warned, we’re the last to notice the water we swim in. :::
We focus on Neo’s instant acquisition, the flash of skill, but gloss over Morpheus’s critical challenge: "Show me." That’s the hinge—the real test, the proof, the stress that authenticates knowledge. Without challenge, Neo’s kung fu remains theoretical—a simulation without substance.
> "I figured it out. What the hinge is here. Why this feels so relevant to our current situation. It's not just 'I know kung fu', you need also Morpheus' 'show me' response."
>
> — Internet Cryptid (@neutral.zone), Bluesky, April 2, 2025
Today, through Large Language Models (LLMs), generative AI, and gamified learning apps, we're living this "I know kung fu" moment daily. Knowledge is summoned instantly, bypassing the friction and struggle of traditional learning. But we rarely consider Morpheus’s demand:
If simulated knowledge becomes indistinguishable from real understanding, what exactly are we losing?
> "The follow-up to the kung fu knowledge crisis thread. Expertise is evaporating into vibes. Everything is content, nothing is remembered. This, this I'll be coming back to at some point, I'm sure. Unless I get distracted."
>
> — Internet Cryptid (@neutral.zone), Bluesky, March 5, 2025
Spoiler: I got distracted. Then I started this Substack.
This isn’t just about changing methods—it’s about an existential shift in how we define the very act of knowing.
Simulation as Replacement
We've faced similar epistemic disruptions before. Writing supplanted memory, calculators displaced arithmetic, and now, generative models threaten to replace deep cognitive skills. We've adapted each time, but each adaptation came with subtle losses. Today's crisis is uniquely unsettling because it's not merely knowledge that's simulated—it's understanding itself.
> "Not only is human-centered learning potentially no longer the axis, it might not even be the map anymore. If a simulated epistemic process yields functionally indistinguishable outputs from a traditional one, then does knowledge becomes indistinguishable from simulation?"
>
> — Internet Cryptid (@neutral.zone), Bluesky, April 2, 2025
When I revisit German through Duolingo, the app reactivates deeply internalized language structures from past study. Duolingo surfaces forgotten skills—it acts like epistemic physical therapy, re-strengthening knowledge I'd once earned through genuine effort. Conversely, when I query LLMs, the model delivers external coherence, immediate but ephemeral. One reactivates my cognitive architecture; the other supplants it.
This shift parallels ideas I explored in "Thinking in and of Vectors." Knowledge increasingly points us toward frictionless answers—directions we effortlessly follow. But without friction, we risk losing epistemic resilience, the critical muscle memory required to think rigorously. Each frictionless answer diminishes our impulse—and ability—to independently validate truth. History shows clearly: whenever convenience supplants effort, subtle yet critical cognitive capabilities fade.
Years ago, in a pop culture and media theory class, I wrestled with similar ideas—the tension between authenticity and simulation. Revisiting those theories now, assisted by generative tools, feels strangely familiar. Like Duolingo with German, it reactivates something dormant but deeply embedded.
Prompting as the New Pedagogy
Prompting itself is becoming the new epistemic skill, the literacy of our age. Good prompts yield better answers; bad prompts lead nowhere. The process resembles classical Socratic dialogue—guiding toward insight through questioning—but with a critical difference: the partner has no genuine insight, only statistical patterns.
And yet this observation—true as it is—has become its own kind of performance. A knowing wink passed among those who think they're above the mystification but still rely on it. On one end, you get the techno-mystics treating prompt-craft like spellcasting. On the other, the critics who scoff and declare it all noise, all mimicry, all derivative drivel. Both flatten the thing.
The uncomfortable truth is: prompting works just well enough to confuse the issue. It's not intelligence, but it generates just enough of a facsimile that we start negotiating with it like it is. And that negotiation? That becomes a kind of thinking. A kind of epistemic posture. Not truth-seeking, exactly—more like resonance-tuning.
So yes, prompting mediates our interaction with AI. But that mediation isn't just technical—it's philosophical. It shifts how we ask, what we value in an answer, and whether we even remember how to think outside the model’s frame.
Consider how teachers, librarians, and textbooks have historically mediated knowledge. Prompting now similarly mediates our interactions with AI, shaping the quality and utility of the information received. Yet unlike those earlier mediators, the prompt engineer must constantly adapt to an opaque statistical model. This shift in agency parallels arguments I made in "Data Immiseration," where outsourcing cognition to AI mirrors outsourcing decisions to data analytics. Both subtly diminish personal agency, turning knowledge from an internalized strength into external dependency.
Deskilling, Authority, and Identity
Historically, identity and authority have been tightly coupled with mastery and expertise. Scholars, educators, and professionals derive legitimacy from their internalized, deeply earned wisdom. Now, instant simulated knowledge threatens to redefine epistemic authority—shifting legitimacy from internal mastery to external facilitation.
::: pullquote The shift in epistemic authority isn’t just about skill. It’s about legitimacy—who gets to speak, and why. Foucault’s “regimes of truth” didn’t disappear; they’re just retrained, fine-tuned, and deployed through API calls now. :::
Consider programmers transitioning from creators of original code to curators of AI-generated snippets. Or academics now confronting effortlessly generated essays challenging traditional scholarly rigor. This resonates strongly with my argument in "The Dream of Sovereign Compute is Dying." As we cede computational agency to cloud infrastructure, we similarly risk losing epistemic agency to models. Sovereignty—in computation and cognition alike—is slowly eroded, replaced by frictionless convenience.
This deskilling reshapes identities profoundly. What becomes of the thinker, the coder, the scholar, when knowing no longer demands a knower? A profound shift in our self-perception and professional identity looms, raising troubling questions about the future of intellectual work.
Holodecks and the Debugging of Reality
Simulation isn't inherently destructive. As Baudrillard would have it, we’ve entered a space where the simulation doesn’t just represent reality—it replaces it. The kung fu isn’t a symbol of mastery; it is mastery, so long as no one questions it.
Consider Star Trek's holodecks—typically depicted as entertainment or sources of chaos. Yet Geordi La Forge frequently leveraged them constructively, debugging complex systems by simulating reality to solve real-world problems. In detailed simulations, La Forge identified flaws, refined understanding, and developed actionable knowledge.
Generative AI can similarly serve as a cognitive debugging tool, helping us refine our epistemic processes rather than replace them. Instead of passive reliance, consciously integrating simulation can enhance critical thinking, preserve agency, and ensure that friction remains an essential element in authenticating knowledge.
A Conscious Epistemology—Towards Sovereign Knowledge
The path forward isn't rejection of generative tools—it’s conscious integration. Sovereignty demands awareness and discipline. We must intentionally maintain friction in our learning process. Solve difficult problems yourself, interrogate AI-generated answers critically, and treat external coherence as a starting point rather than an endpoint. Protecting intellectual autonomy requires vigilance.
::: pullquote The medium might be the message, but the platform is the censor—and we’re all just debugging reality in the margins. :::
Just as computational sovereignty demands awareness of where computation occurs, epistemic sovereignty requires consciousness of where knowledge originates. Convenience must never blind us to what's at stake: the gradual loss of our internal cognitive strength and identity. But it's not necessarily just loss. This may also be a phase shift—toward a more hypertextual self, distributed and interlinked, rather than monolithic and self-contained. We shouldn’t confuse transformation with erosion, even if the symptoms overlap. What’s at stake is whether we recognize the shape of what we're becoming.
Conclusion
Ultimately, Morpheus’s "Show me" remains the critical test. Knowledge must remain provable, actionable, authentic. Simulation alone is insufficient; without internal mastery, without epistemic agency, we risk becoming passive curators rather than active creators.
The real question isn’t whether we can instantly know kung fu.
It's whether, when challenged, we can genuinely demonstrate it.
Coda
Even in writing this—with help—I still nitpick. I still edit. I reframe. I rephrase. I add flavor. I make it more mine, even in the dark mirror reflecting my own thoughts back to me. That friction? That part is, at least, still very real.