Any Lawful Use

February 28, 2026 · archive

There’s a clause in government AI contracting that looks harmless until you translate it: “any lawful use.”

In procurement practice, it’s a demand for policy subordination: the vendor may not condition performance on internal policies beyond law, regulation, and contract. The only boundary that matters is the government’s interpretation of its legal authorities. Not the vendor’s policies. Not the vendor’s “we won’t.”

Anthropic said no anyway — specifically on mass domestic surveillance and fully autonomous weapons.

This isn’t a fight about Claude. It’s a fight over whether a vendor’s policy layer survives contact with national security procurement, or whether it becomes a consumer-facing decal that peels off the moment someone with an appropriation shows up.


The Fork

The government’s position: law is sufficient. If it’s lawful, it’s permitted. Vendor ethics policies are a product feature, not a governance mechanism, and they have no standing against legitimate state authority. From the government’s side, “vendor veto” reads like outsourcing sovereign authority to a private policy document that can change at any time. That’s the steelman. It’s not crazy.

Anthropic’s position: “lawful” isn’t the whole boundary — and the government’s move to treat refusal as “risk” is legally unsound. The question isn’t whether the government can request something. It’s whether it can nullify a vendor’s refusal without turning contracting into coercion-by-designation.

This only becomes a fight once the vendor’s policy layer becomes the limiting factor on a strategically useful capability. Those positions don’t resolve. That’s why it’s a court case.


The Coercion Primitives

Call it “the state” for convenience — this is really a coalition of offices with different authorities and appetites, temporarily aligned on deleting a vendor veto. When a vendor won’t move, the state has tools that don’t require legislation or visibility. Three levers are in play:

Designation — the supply-chain risk label. Historically aimed at foreign adversaries; rarely, if ever, applied publicly to a U.S. firm. Applied here, it doesn’t just pressure Anthropic directly; it signals to every Pentagon contractor that Claude integration is a liability. The phaseout happens without a legal fight because the private sector does it voluntarily. Call it the scarlet letter: no ban required, just radioactivity.

Allocation — the Defense Production Act threat. Title I authority can compel production and allocation in ways that go well beyond normal procurement. Invoking it here would be legally novel — and that novelty is the point: it’s a threat that forces a court fight whether Anthropic wants one or not.

Timeline pressure — the six-month phaseout order. Sunk costs, transition costs, contractor relationships. Procurement is a game of stamina. The state has an effectively infinite timeline. A venture-backed company has a burn rate. You don’t have to win the legal argument if the economic pressure resolves it first.

This is procurement coercion doing what it does.


Why This Isn’t Just About AI

The surface story is AI procurement. The actual story is whether policy constraints — terms of service, vendor red lines, product ethics frameworks — constitute real governance or are confirmed theater.

Cloud providers. Telecom. Chip supply. Encrypted messaging. Any regulated vendor with classified contracts and a terms layer that says “we won’t do X” is watching this case to learn whether “terms” are constraints or just manners.

If “any lawful use” wins, vendor policy becomes decorative: valid for consumers, void for the state. Every future safety commitment from every lab — and every cloud provider, and every telecom — gets priced accordingly. The blast radius isn’t metaphorical. It’s the entire stack.


Anthropic’s Counter-Move

The statement Anthropic released isn’t a press release. It’s forum selection prep.

“Legally unsound” in a public document from a company heading to litigation means they’ve already picked the terrain. “Dangerous precedent for any American company that negotiates with the government” is coalition signaling — every contractor in the blast radius is supposed to read that sentence and understand they have skin in this.

Calling it the “Department of War” in an official legal communication is deliberate adversarial posture. The lawyers signed off on it. They’re establishing tone of record, not venting.

The move: turn a contract dispute into a precedent crisis. Force the question into court and into boardrooms — domestic contractors first, allies next — where “we designated our own company as a risk” is a harder story to sell.

Whether it works depends on how much pain the administration is willing to absorb. Given current form, assume appetite for spectacle until proven otherwise.


The Prediction

Whichever side wins becomes the template.

Vendor policy survives — and the state has to negotiate around red lines, not through them. The economics of every AI, cloud, and telecom government contract shift. “We won’t” becomes a real constraint with real legal weight.

Vendor policy dissolves — and it’s confirmed theater. Present in the consumer context, absent when it matters. Every safety commitment, every ethics framework, every terms of service in a classified context becomes marketing copy with a predictable half-life.

That’s not a prediction about AI. That’s a prediction about what contracts mean. In that world, commitments aren’t constraints—they’re branding, until power arrives.