observationist 11 hours ago

As much as I agree with a lot of these principles in principle, the crux of the fight is that Anthropic feels and behaves as though it's entitled to be involved in things far beyond anything it's legally allowed to touch, and the military leadership is telling them, rightly, to take a hike and not let the door hit them on the way out.

Effective Altruism is a deeply silly, flawed, unserious, superficial way of engaging with the world if this, FTX, and shrimp welfare are what comes of people putting it into action.

What Anthropic wants is the ability to go back, pontificate, and sue a government if it determines that its terms of service have been violated. To enforce that, they wanted oversight, access, and the right to intervene if they felt the technology was being put to a purpose they disagreed with: surveillance, autonomous weapons/killing, etc.

As an AI platform, they can decide whether they want the military to be able to use the software at all. I'm 1000% on board with that. What they don't get to do is sit an Anthropic employee down and say "ok, now you watch these soldiers and make sure they follow the rules, and if they do anything wrong, you hit the big red button that shuts them down." They don't get to program a Claude oversight agent to do that, either. That messes with real-time operations. They don't get to come back later and say "ackshually, we looked at these logs and determined that you violated rule 102.3a in the contract, because one of the terrorists was participating from an IP address determined to reside in the continental US," or whatever.

Anthropic doesn't get to hold the US military accountable. It doesn't get to do oversight. It doesn't get to constrain its scope of operation, through legal threat or active intervention or contracts or otherwise.

Chain of command and rule of law constrain the US military. Congressional oversight and rule of law hold it accountable. A private contractor, no matter how noble or principled, doesn't get extra privileges.

Anthropic playing political games, advocating for unelected and unaccountable power to be granted to a private corporation, is what got them designated a supply chain risk, and I can see the argument for it. Depending on how much effort they put into hassling the government and pushing their side, it remains to be seen whether the designation sticks.

And in principle, I also see the utility of being extremely heavy-handed when slapping down a private company attempting a power grab like that. Whether through ignorance or through incredible arrogance and entitlement, a private company, and the industry as a whole, needs to learn its place in the grand scheme of things. Anthropic isn't special; its place is right alongside the rest of we the people. They don't get extra privileges because they feel strongly that they're particularly right or righteous.

OpenAI effectively said "yeah, rule of law, thumbs up, sounds good" and took the $200B on the table. Anthropic pushed for extra private oversight and accountability, and it doesn't matter whether the issue was surveillance, autonomous weapons, or not eating babies; the particular rule isn't the point. The precedent of private corporations getting a say at all, beyond legal limits, is the point. No company gets to tell the US military what to do or what not to do, hold it accountable post hoc, or constrain its available options. If the military absolutely needs to break a technicality for a good reason, when national security and defense are under consideration, a private company's rules and terms of service are the very last thing in the world that should matter to that discussion.

I'm a Snowden fan, I absolutely want the global surveillance apparatus to vanish, I don't want an AI-singleton dystopia, and I'm probably waaayyy more liberal and liberty-minded than is reasonable, but even I can understand where this line in the sand is and why it's there. I'd be shocked if Dario lasts the year as CEO; it's clear he's ill-equipped for real-world, adult decisions.