Filligree 3 hours ago

In American law, companies have the choice of whether or not to do business with the government, outside of a few corner cases. There’s a process for forcing them, but it can’t just be because the leader says so.

In this particular case Anthropic had a contract stating what the military could and could not use their models for. The military broke that contract. Anthropic declined to sign a revised one.

This is within their rights, and more to the point, the government should absolutely not be allowed to unilaterally alter contracts they’ve already signed!

Predictability is the whole point. Undermining it is how you destroy your own economy.

orochimaaru 3 hours ago | parent | next [-]

That is allegedly not what happened. Anthropic’s CEO was happy to grant waivers on a case by case basis.

The problem is the branches of the government that Anthropic was doing business with found it infeasible to do this.

They had another problem. If one of their contractors used Claude to engineer solutions contrary to Anthropic’s “manifesto” would Claude poison pill the code?

Basically, Anthropic wanted the angel's halo and the devil's horns, and the government said pick one.

SpicyLemonZest 2 hours ago | parent [-]

> That is allegedly not what happened. Anthropic’s CEO was happy to grant waivers on a case by case basis. The problem is the branches of the government that Anthropic was doing business with found it infeasible to do this.

That's not what the presidential announcement blacklisting Anthropic said. It said they're being punished for trying to require that the military follow their terms of service.

orochimaaru 2 hours ago | parent [-]

That’s the other pov (from the govt angle) - https://www.businessinsider.com/pentagon-official-details-ho...

The media is usually quick to defend Anthropic. And yes, the supply chain risk label is too broad. But there is another side to the story, and Anthropic isn't as "innocent" as it's made out to be.

SpicyLemonZest 2 hours ago | parent [-]

I've heard this POV before, and I just re-read it, and I genuinely do not understand which part of it you think shows Anthropic is anything but innocent. To me it seems pretty clear: Emil Michael heard that Anthropic was asking questions about how their system was used, and he thinks that attitude is an unacceptable security risk. He won't accept the use of systems that were developed based on "their constitution, their culture, their people" or "their own policy preferences". Anyone who would ask such questions might sabotage military operations if they don't like the answers, he argues, and I believe that he genuinely believes this.

So he'll only accept systems developed by people who understand, as Sam Altman promised to, that the US military is not to be questioned.

orochimaaru an hour ago | parent [-]

My impression was that Dario was happy to grant case-by-case exceptions, but Emil did not want that. I mean, why set up Claude at the DoW, where the goal is surveillance and targeting (possibly autonomous)?

pixl97 8 minutes ago | parent [-]

>happy to grant case-by-case exceptions

Which makes more sense; the world isn't a black-and-white place with clear abstractions.

Geezus_42 2 hours ago | parent | prev [-]

Sure, they have a "choice", except that no one turns down the kind of money the government has to offer, and if the company is public they are legally obligated to increase shareholder value.