The Forest Has Wolves
This is Part I of an ongoing series exploring the emerging philosophical terrain of generative technologies. Taken together, these essays form a generative ontology: a framework for understanding how systems like LLMs don’t just reflect our world, but actively reshape what’s legible, contestable, and possible. This first piece maps the strategic ground and argues for adversarial engagement over ethical abstention.
Note: This is exploratory, not final. A provocation, not a prescription. I'm sketching a philosophical frame in motion—what it means to think through technology that thinks back. If it feels like it’s running hot, that’s because it is. Feedback welcome.
Technology isn’t neutral. It encodes values, biases, and power dynamics—and pretending otherwise isn’t just naive, it’s dangerous. This is for those who still think you can opt out of power and remain untouched by it.
TL;DR for Activists
Tech isn’t neutral: Every protocol encodes values. Disengagement is surrender.
Decentralized systems fracture: Federation fragments into echo chambers if left unguarded.
Generative AI is being weaponized: Misinformation, bias, and suppression aren’t glitches—they’re design decisions.
Abstention doesn’t protect you: Refusing to engage means forfeiting the field to bad actors.
Tactics exist: Adversarial pressure, data poisoning, watchdog automation, and red-teaming all work.
Build adversarial systems: Use the tools against their makers. Erode the defaults. Automate dissent.
Picture a Mastodon instance slowly radicalized via defederation (cutting off connections to other servers). Or an AI system that won’t generate pro-union content but happily promotes anti-worker sentiment—not by accident, but by design. These aren’t hypotheticals; they’re signals. Refusing to act won’t stop escalation. Right now, we’re facing two distinct but interwoven technological fronts: decentralized protocols like ATproto (the protocol behind Bluesky), and generative AI systems like large language models (LLMs). The former struggles with fragmentation; the latter is rewriting the terrain entirely. If generative AI is the new weapon, decentralized platforms are the battlefield—and both are already compromised. Between them lies a common delusion: that disengagement is defensible, or that the terrain isn’t already hostile.
On both fronts, there’s a temptation toward what we might call ethical abstention—a refusal to engage with systems viewed as ideologically compromised. Call it principled refusal if you like. But history doesn’t remember the principled who brought ethics to a drone war—only the aftermath. These tools are already deployed against you. Abstention isn’t purity—it’s unilateral surrender. The question is whether they can be subverted, redirected, or at least contested.
To clarify:
Technology is agency. It’s built by humans and reflects human goals, ambitions, and biases. A protocol, no matter how neutral it seems at inception, will shape and constrain behavior as it scales. Protocols become moral frameworks. They set defaults. They define what’s legible or invisible. Possibility space isn’t neutral—it’s political. Design decisions shape outcomes: what’s easy or hard, encouraged or punished. That’s ideological conflict, not abstraction. Pretending otherwise doesn’t make a system apolitical—it makes it unaccountable.
Platforms shape collective outcomes. Yes, the Ising model (a physics model of magnetism, borrowed here as a metaphor for how small pressures trigger sudden collective shifts, the way peer pressure can flip a crowd into a mob) is useful. People flip states under social pressure. But unlike physical systems, tech architectures are designed. Their rules are synthetic. They can be changed. Structural pressure is real, but not inevitable. There's a critical window to act before those structures calcify into something genuinely immovable.
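To make the metaphor concrete, here's a minimal sketch: a mean-field Ising-style simulation, a toy model of my own and not anything measured from real platforms, in which every agent feels the crowd's average opinion and a single "pressure" knob decides whether views stay mixed or snap into near-unanimity.

```python
import math
import random

def simulate(n_agents=200, sweeps=300, pressure=1.5, seed=7):
    """Mean-field 'opinion Ising' toy model (illustrative, not empirical).

    Each agent holds an opinion of +1 or -1 and feels the average opinion
    of everyone else. `pressure` plays the role of inverse temperature:
    below roughly 1.0, opinions stay mixed; above it, the population tips
    into near-unanimity, the sudden collective flip the essay invokes.
    """
    rng = random.Random(seed)
    opinions = [rng.choice((-1, 1)) for _ in range(n_agents)]
    total = sum(opinions)
    for _ in range(sweeps * n_agents):
        i = rng.randrange(n_agents)
        peer_avg = (total - opinions[i]) / (n_agents - 1)  # everyone else's mean view
        delta_e = 2 * opinions[i] * peer_avg               # social cost of flipping
        if rng.random() < 1.0 / (1.0 + math.exp(pressure * delta_e)):
            opinions[i] *= -1                              # conform (or defect)
            total += 2 * opinions[i]                       # keep the running sum current
    return total / n_agents  # net consensus in [-1, +1]

if __name__ == "__main__":
    for p in (0.5, 0.9, 1.5, 3.0):
        print(f"pressure={p:.1f}  consensus={simulate(pressure=p):+.2f}")
```

Which camp wins is arbitrary; that the population tips at all is governed by one parameter. That's the point: in physics the knob is temperature, but on platforms it's a design decision, and design decisions can be contested.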
Generative AI is an accelerant. Misinformation, consensus-shaping, reality denial—they’re already being supercharged by LLMs. Ethical concerns are valid: amplification of harm, environmental cost, bias. But abstaining isn’t neutrality. It’s disarmament. Your ideological opponents aren’t hesitating to weaponize these tools. Neither should you. This doesn’t mean endorsing them. It means refusing to leave the field uncontested. Critics will say this stance risks normalizing harm, that engagement legitimizes harmful systems. Fair, and not wrong. But legitimacy is moot when the war is already lost by default. Using tools tactically to disrupt adversaries isn’t the same as adopting their playbook. It’s contesting the terrain. It’s refusing to let them write the ending.
At scale, neutrality dies. ATproto and similar protocols promise better dynamics via portability and federation (a model where users and data are spread across independently operated servers). And yes, federation has benefits: resistance to centralized control, flexibility, autonomy. But it also fragments. It isolates. It enables echo chambers and adversarial capture. These aren’t liberatory enclaves—they’re engineered tribalism. Federation doesn’t inherently protect against exploitation. Pretending otherwise is dangerous naivety. A forest doesn’t stay neutral when wolves arrive. Microblogging isn’t infrastructure—it’s microplastic. Tiny, persistent, and everywhere. Your clean feed won’t stop the AI trained on your data from poisoning the discourse elsewhere.
Sanctuaries matter. But birds still migrate through poisoned skies. A clean pocket doesn’t undo environmental collapse—it highlights its limits. DDT gave us fewer birds. PFAS gives us forever damage. You don’t need total collapse—just enough ambient contamination to make purity a myth. You need both the local fix and the systemic fight.
The knee-jerk Butlerian Jihad stance—blanket opposition to AI—is emotionally understandable and strategically useless. It comes from real fear: of surveillance, collapse, manipulation. But fear doesn’t negate terrain. It just leaves you unarmed in it. The battlefield’s here. The other side didn’t show up with clean hands.
Federation Fractures—and What Comes Next
Moral high ground isn’t cover. Decrying AI while reactionary grifters, state actors, and capital automate propaganda and disinformation is like bringing a manifesto to a botnet. The clean-hands crowd doesn’t lose by accident—they lose by design. The field doesn’t stay empty just because you walked away.
Federation won’t save you. Bluesky’s ATproto offers the illusion that you can architect away toxicity. You can’t. Adversarial users exploit federation’s seams. Mastodon’s defederation isolates and radicalizes. AI-native platforms will move faster than any moderation can track. That doesn’t mean federation is doomed. But you can’t ignore the fact that adversaries are already shaping the terrain with tools you refuse to touch.
The weaponization gap. While some campaign for bans, others deploy swarms—harassing journalists, flooding regulators, drowning dissent in synthetic noise. Refusing to engage ensures you’ll be outgunned. The asymmetry is real. But that doesn’t justify inaction—it demands smarter, more tactical countermeasures.
We’re already in the first wave: misinformation, deepfake propaganda, automated harassment. The second wave will be worse—softer, subtler, more insidious. Cognitive DRM. Deep soft censorship. Algorithmic inertia. ChatGPT refusing to criticize its investors. Midjourney censoring “union” but not “scab.” Not bugs—designs.
Second-Wave Weaponizations
Cognitive DRM – Outputs restricted to approved ideological frameworks.
Deep Soft Censorship – Nudging discourse into sanitized boundaries.
Algorithmic Inertia – Generative defaults reinforcing conventional outputs.
Echo Chamber Amplification – Homogenizing thought through repetition.
Neural Norm Enforcement – Penalizing deviation; enforcing soft orthodoxy.
Latent Space Lockdown – Constraining a model’s internals ("latent space": its internal map of concepts) to a narrow band of outputs.
Semantic Chokeholds – Strangling conceptual diversity with safe vocabularies.
Procrustean Editing – Force-fitting outputs into rigid narrative forms.
These aren’t science fiction. They’re here. They’re calcifying fast. Abstention won’t stop them. Tactical engagement—disruption, subversion, stress-testing—isn’t just strategy. It’s survival.
Tactical Resistance: How to Fight Back
Critics worry about moral hazard, normalization, reinforcing power. They’re right to worry. But strategic refusal has already failed: activist groups that avoided secure infrastructure got outmaneuvered. Far-right communities used defederation to radicalize unopposed. Inaction creates vulnerability.
Real resistance is collective. Tactical. Coordinated.
Examples:
When researchers flooded GPT-4 with absurd or adversarial prompts to surface hidden biases, OpenAI was forced to patch flaws. That’s adversarial pressure.
Data poisoning efforts (intentionally injecting flawed or misleading data into training pipelines) can degrade output quality and expose weak points.
Red teams—security or research groups that stress-test systems by simulating attacks—probe moderation boundaries to reveal blind spots.
Watchdog groups using open-weight LLMs (models whose trained weights are publicly released, so anyone can run and inspect them locally) can automate FOIA generation and regulatory filings, and document harmful outputs at scale. A minimal sketch follows this list.
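Here is a hedged sketch of that last tactic, assuming an open-weight model served locally behind an OpenAI-compatible endpoint (vLLM, llama.cpp's server, and Ollama all expose one); the URL, model name, agency, and topic below are placeholders, and any generated draft still needs human review before filing.

```python
import requests

API_URL = "http://localhost:8000/v1/chat/completions"  # your local inference server
MODEL = "local-model"  # whatever name your runtime registers

PROMPT_TEMPLATE = (
    "Draft a formal FOIA request to {agency} seeking records about {topic}, "
    "covering {date_range}. Cite 5 U.S.C. § 552, request a public-interest "
    "fee waiver, and ask for responsive records in electronic format."
)

def draft_foia(agency: str, topic: str, date_range: str) -> str:
    """Ask the local model for a first draft. A human reviews before filing."""
    resp = requests.post(
        API_URL,
        json={
            "model": MODEL,
            "messages": [{
                "role": "user",
                "content": PROMPT_TEMPLATE.format(
                    agency=agency, topic=topic, date_range=date_range),
            }],
            "temperature": 0.3,  # keep drafts conservative and consistent
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Placeholder target; loop over a real docket to work at scale.
    print(draft_foia("Department of Example",
                     "algorithmic content-moderation contracts",
                     "January 2023 to present"))
```

The scale comes from the loop, not the model: point the same template at a spreadsheet of agencies and topics and the marginal cost of each request drops toward zero.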
These are small victories. But they’re proof of concept. Resistance is possible.
Some will argue that engaging with broken systems legitimizes them, that visibility equals complicity. But resistance has never been about clean hands. Abolitionists fought slavery in pro-slavery courts. Whistleblowers use corporate platforms to expose rot. And this isn’t about indiscriminate harm; it’s about precision strikes against systems already weaponizing scale against the vulnerable. Legitimacy doesn’t come from withdrawal. It comes from impact.
Toward an Adversarial Tech Ethic
Adversarial building isn't just sabotage—it's engineering with opposition in mind. It means designing systems that anticipate exploitation, degrade monopolies, and reward divergence. It means refusing elegance when friction works better. Think federated platforms with automated counter-radicalization checks, or LLM wrappers that default to transparency rather than trust.
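As one concrete illustration of that last idea, here is a minimal sketch of a transparency-first wrapper; the refusal markers and log format are my own assumptions, and `model_call` stands in for whatever client function reaches your model.

```python
import hashlib
import json
import time

# Crude heuristics for spotting a refusal; tune these for the model you wrap.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm not able to", "against policy")

def transparent_generate(model_call, prompt: str, log_path: str = "audit.jsonl") -> str:
    """Wrap any text-generation callable so every exchange leaves an audit trail.

    Records prompt, output, latency, and a refusal flag to an append-only
    JSONL log, making silent filtering visible after the fact instead of
    asking users to simply trust the default behavior.
    """
    started = time.time()
    output = model_call(prompt)
    record = {
        "ts": started,
        "latency_s": round(time.time() - started, 3),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "output": output,
        "refusal_suspected": any(m in output.lower() for m in REFUSAL_MARKERS),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record, ensure_ascii=False) + "\n")
    return output

if __name__ == "__main__":
    # Stand-in model for demonstration; swap in a real client call.
    canned = lambda p: "I can't help with that request."
    transparent_generate(canned, "Summarize this labor dispute neutrally.")
```

Nothing here blocks or rewrites the model; it just refuses to let refusals disappear. Transparency as a default, not a favor.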
What might a tactical manifesto for resistance look like?
Never assume system neutrality.
Exploit design weaknesses. Don’t worship cleverness—weaponize it.
Use surveillance infrastructure against itself.
Poison training data when ethical means fail.
Document harm relentlessly.
Automate counterpressure.
Never trust the default settings.
Assume nothing will stay small.
Build tools that erode advantage—not just replicate it.
Don’t wait for permission. Intervene.
We don’t need a perfect system. We need a playbook that works under fire.
Burn Illusions—But Leave a Light On
Abstention won’t save you. Ideology without tactics is theater—and the audience is already walking out. But tactics without vision become sabotage without strategy. We need both. The future isn’t awarded to the pure. It’s taken—tools in hand, eyes open, and illusions burned away.
But even after the illusions are ash, what remains matters. What’s built in their place matters. If you can’t imagine a better system, you can’t subvert the one you’re in.
Build. Break. Repeat. Just don’t do nothing.
You can’t federate your way out of ideological capture, and you can’t prompt-engineer your way past structural rot. The terrain is compromised—tactics must be, too. The only way out is through—and the only ‘through’ is collective sabotage.
This forest doesn’t need your purity. It needs your teeth.