AI's Fair In Love & War
The biggest AI story of 2026 isn't about what the technology can do. It's about who gets to decide what it does; and whether you, the person using it, have any say in the matter.
Last Friday, the President ordered every federal agency to stop using Anthropic's technology. The Pentagon slapped the company with a supply chain risk designation, the same label they usually save for outfits like Huawei, and every military contractor in America is now forbidden from doing business with them. The reason? Anthropic refused to remove two safeguards from Claude, their AI model: no mass surveillance of American citizens, and no autonomous weapons without a human in the loop. Two guardrails; two hundred million dollars walked out the door.
You could read this as a government-versus-corporation standoff and move on with your week. But there's something underneath the headline that matters a lot more to anyone who actually uses these tools.
The Pentagon's position boils down to a simple idea: once we buy a tool, we get to use it however we want. Anthropic's counter is that "however we want" is a blank cheque when nobody's written the laws yet; and that the tool itself isn't reliable enough for some of the things they want to do with it. Both sides are making a claim about agency. The Pentagon wants agency over a tool it purchased. Anthropic is asserting agency over a tool it built. And somewhere in the middle, every person and business running Claude just discovered that the tool they depend on can become a political football overnight.
That's the part worth paying attention to.
Dario Amodei, Anthropic's CEO, put it plainly: "We cannot in good conscience accede to their request." The Pentagon's undersecretary called him a liar with a God complex. The President called the company "radical left woke nutjobs." Elon Musk, whose own company xAI had just signed an unrestricted deal with the very same Pentagon, said Anthropic "hates Western civilization." A normal Friday in 2026.
But here's the thing: Claude is the only AI model currently running on the Pentagon's classified networks. Anthropic says eight of the ten largest companies in America use it. The supply chain risk label means every one of those companies that touches defence work now has to certify they've cut ties with Anthropic; or stop doing defence work entirely. That's not a slap on the wrist. That's a structural attempt to make one company radioactive across the entire economy because it held a position on how its own tool should be used.
If you're a small business operator building on AI tools, this is your wake-up call on two fronts.
The first is dependency. If Anthropic's Claude disappeared from your stack tomorrow, what breaks? The six-month phase-out period tells you everything you need to know about how hard it is to untangle from a tool you've built your workflow around; defence officials privately admitted it would be a "huge pain in the ass." That's not just a Pentagon problem. That's anyone who's gone all-in on a single platform without a fallback plan.
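If you want a concrete defence against that kind of lock-in, the usual pattern is a thin seam between your workflow and the vendor: everything you build calls your own wrapper, and the wrapper decides which provider answers. Here's a minimal sketch in Python, assuming the official anthropic and openai SDKs with API keys in their standard environment variables; the model names are placeholders, and AI_PROVIDER and complete() are our own inventions, not anything either vendor ships.

```python
# A minimal provider-agnostic wrapper: your code calls complete(),
# and an environment variable decides which vendor actually answers.
# Model identifiers below are placeholders; use whatever you run.
import os


class ClaudeBackend:
    def __init__(self, model="claude-sonnet-4-5"):
        import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY
        self.client = anthropic.Anthropic()
        self.model = model

    def complete(self, prompt):
        resp = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text


class OpenAIBackend:
    def __init__(self, model="gpt-4o"):
        import openai  # pip install openai; reads OPENAI_API_KEY
        self.client = openai.OpenAI()
        self.model = model

    def complete(self, prompt):
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


BACKENDS = {"claude": ClaudeBackend, "openai": OpenAIBackend}


def complete(prompt):
    # Switching providers is one environment variable, not a rewrite.
    backend = BACKENDS[os.environ.get("AI_PROVIDER", "claude")]()
    return backend.complete(prompt)
```

The point isn't these two vendors specifically; it's that switching should be a configuration change, not a six-month untangling.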
The second is alignment. The tools you choose say something about what you're willing to accept. Every AI platform comes with its own set of rules, its own terms of service, its own line in the sand about what it will and won't do. Most of us click "agree" without reading. But the Anthropic-Pentagon fight is fundamentally a terms-of-service dispute; the restrictions a provider puts on their tool determine what you can and can't build on top of it. Whether you're auditing a setup you've been running for years or choosing your first AI tools right now, the question is the same: does this tool's philosophy line up with yours? And if it doesn't, what are you actually agreeing to?
The most interesting thing that happened this week wasn't the ban itself. It was the response. Sam Altman told his staff that OpenAI would push for the same restrictions. Over a hundred Google employees sent a letter demanding similar limits on Gemini. Workers at Microsoft and Amazon followed within hours. The Pentagon tried to make an example of one company and accidentally unified the entire industry's workforce around the same two guardrails. You love to see it.
There's a distribution lesson in that, too. Anthropic's product didn't change; not a single line of code. But by taking a public stand, they went from "one of several AI providers" to "the AI company that drew a line." Every engineer choosing where to work, every enterprise buyer evaluating vendors, every government outside the US looking for a partner that won't compromise on these things; they all noticed. Dario's play looks like principle. It might also be the sharpest positioning move in the industry this year. Both things can be true.
I keep coming back to something Dan Koe wrote about identity and behaviour: you don't change what you do by willpower; you change it by changing who you are, and the behaviour follows. That applies to companies, and it applies to you. Anthropic decided who it is in a way that cost them nine figures. You probably won't face that exact situation. But every time you choose a tool, sign a contract, or agree to a set of terms, you're making a smaller version of the same decision. The question isn't just "does this tool work?" It's "does this tool work the way I think things should work?"
Don't build on a single dependency. Understand your tools well enough to switch when you need to. Make sure your setup reflects your principles, not just your convenience. And pay attention to who controls the roll; because that's who controls the music.
List every AI tool in your workflow and answer one question for each: if this disappeared in six months, what's my fallback? Not because Anthropic is going anywhere; but because the lesson applies to every platform you rely on. Whether you're deeply embedded or just getting started, now is the time to know where you stand.
The Anthropic-Pentagon fight is, at its core, a terms-of-service dispute. The restrictions a provider puts on their tool determine what you can and can't build on top of it. If you're building anything that matters on an AI platform, know what you signed up for; and know what the provider decided on your behalf.
Whether you're evaluating a new tool or reconsidering one you already use, ask yourself: does this platform's philosophy match mine? Does the company behind it make decisions I can respect? Anthropic just showed what it looks like when a company's actions match its stated values. That's a useful measuring stick for every tool in your stack.
The most balanced reporting on the ban itself. The detail that Emil Michael was still on the phone offering a deal while Hegseth tweeted the supply chain designation is worth sitting with.
Dario pointed out the logical problem weeks ago: you can't invoke the Defence Production Act (which requires the product to be essential to national security) and simultaneously designate the company a supply chain risk (which implies it's a threat). They can't both be true.
Buried in the final paragraphs: OpenAI pushing for the same restrictions, Google employees demanding limits, Microsoft and Amazon workers following. The Pentagon wanted to isolate Anthropic. Instead, it unified the industry around the same two guardrails.
Dan Koe's argument: you don't change behaviour by willpower; you change it by changing identity, and the behaviour follows. Applied to companies rather than people, it explains why Anthropic could walk away from $200 million without hesitating.
Player Piano Weekly is our publication. But we also build custom AI systems for businesses. If you're ready to automate, let's talk.
Book a Discovery Call