I started building for the web in 2013. In those early years, the hardest decisions I ever made weren't technical. They were about which clients to take, which features to ship, and occasionally — which revenue to walk away from when something didn't sit right.
Thirteen years on, I'm building IlanoShop, my own e-commerce platform for small businesses. The decisions are bigger now, but the shape of them is the same. Who do you partner with? What data do you collect and why? And when someone powerful wants you to cross a line you've drawn for yourself — what do you do?
Last week, Anthropic answered that question publicly, at a scale very few companies ever have to.
What Happened
The US Pentagon demanded that Anthropic — the company behind the Claude AI models — agree to give the military unrestricted, "any lawful use" access to its tools. The reported sticking points were mass domestic surveillance and fully autonomous weapons systems. Anthropic refused.
The response from Washington was swift. Defence Secretary Pete Hegseth labelled Anthropic a "supply chain risk" — a designation that would prohibit any company working with the military from doing business with Anthropic. President Trump then ordered every federal agency to stop using Anthropic's technology, threatening the company with "major civil and criminal consequences" if it didn't comply during the wind-down period.
"No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons." — Anthropic, February 2026
The military contract at stake was worth $200 million. Anthropic's current valuation sits at $380 billion. A former DoD official told the BBC that the company "simply do not need the money" and appeared to have the upper hand. The legal basis for the government's threats, the same official added, was "extremely flimsy."
Anthropic said it would challenge any supply chain risk designation in court. As of writing, that fight is ongoing.
The Brand Play Nobody Planned
Here's the read most people are missing: this may be the most effective brand moment in AI history, and nobody engineered it.
The AI industry is crowded with companies making near-identical claims about being responsible, ethical, and human-centred. Every major player publishes values statements. Every CEO gives speeches about AI safety. The words are largely indistinguishable.
Then the pressure arrives, and you find out which of those statements were real.
Values are only worth something when they survive contact with something that costs you.
Consider the contrast in play here. OpenAI's Sam Altman publicly stated he had the same "red lines" as Anthropic — the same refusal to enable domestic surveillance or autonomous offensive weapons. He even sent a supportive note to his team. Then OpenAI reached a deal with the Department of Defense for use of its models on classified cloud networks.
Whatever you think of either decision, the market positioning outcome is clear. In a single news cycle, Anthropic became the AI company that held its line when the US government came knocking. That is not a claim their marketing team can manufacture. It happened in public, under maximum pressure, at real financial cost.
For enterprise buyers — particularly in Europe, where data sovereignty is a live regulatory concern — that is an extraordinarily compelling signal. For regulators watching the AI industry with increasing scrutiny, it's a data point. For customers choosing between platforms, it's a differentiator that can't be copied without being earned.
What It Means If You Build on Claude
If you're a developer or company building products on the Claude API, the practical picture is narrower than the headlines suggest. Anthropic confirmed that the supply chain risk designation — if it proceeds and survives legal challenge — only directly affects companies that simultaneously hold contracts with the US Department of Defense. For the vast majority of independent developers and SME-focused platforms, API access is unaffected.
That said, there's a second-order consideration worth thinking through. If you build B2B SaaS tools used by enterprise clients who hold government contracts, it's worth understanding whether their compliance obligations could flow downstream to you. If that applies, it's a conversation to have with your legal or compliance team.
But the more strategically important question, for anyone planning a multi-year build on Anthropic's infrastructure, is this: what kind of partner are they?
When you build on a platform, you're betting on that platform's commitments being durable. An Anthropic that capitulates to "any lawful use" without restriction is an Anthropic whose usage policies become unreliable. An Anthropic that holds its stated constraints publicly, under presidential pressure, is a platform whose commitments you can actually build around with confidence.
Predictability in a dependency is a feature. This week gave us evidence about how predictable Anthropic's commitments actually are.
The Founder Question Underneath All of It
Beneath the politics and the business analysis, there's a more fundamental story here — one that resonates with anyone building something with real stakes.
At some point in building something that matters, you'll face a version of the question Anthropic faced. Not from the White House — but from a client, a partner, an investor, a growth opportunity. Someone will offer you something real in exchange for crossing a line you've drawn for yourself.
I built IlanoShop around a belief that small businesses deserve transparent, fair tools — no hidden transaction fees, no shifting terms, no data practices designed against their interests. That belief has cost me potential partnership conversations. It will cost me more as the platform grows. But I've watched enough businesses compromise their founding principles for short-term revenue to know what happens downstream. The trust you lose is never worth the deal you gained.
The time to decide what you won't compromise is before someone's offering you something significant to compromise it for. By the time the contract is on the table, it's too late to figure out what you actually believe.
Anthropic appears to have known its answer before the pressure arrived. That's why, when it did, the company could respond without hesitation.
Thirteen years of building has taught me that the companies — and the products — worth betting on long-term are almost always the ones whose values survive contact with a real test. Not a values statement. Not a press release. A test.
Anthropic just passed a very public one. Whatever happens next in Washington, that's worth noticing.