Quarrelsome 12 hours ago

> Anthropic’s enemies in the Pentagon, who had, months prior, convinced Trump that Anthropic was “woke” and should be banned for government use.

That people in government speak like this is utterly absurd. The quote from Donald Trump's follow-up tweet on t'social is considerably worse.

This was all due to Anthropic not wanting to take on a military contract, right? Or is it suggested it's more to do with Mythos? But why would it be, if they never released it?

Natfan 12 hours ago | parent | next [-]

the people in the previous trump admin were perfectly willing to peddle a lie they didn't believe

the people in the current trump admin genuinely believe their own lies

randallsquared 11 hours ago | parent | prev | next [-]

> This was all due to Anthropic not wanting to take on a military contract, right?

No, they already had a contract (since 2024, revisited/renewed by the Trump admin in mid-2025) which included military usage. That contract, though, had some language about what Claude couldn't be used for, ostensibly because Anthropic was nervous about accuracy in lethal contexts. Hegseth and others were unhappy with the restrictions and wanted to just redo the contract to remove them. Anthropic didn't want that, at least with current models. Then everything blew up. Zvi has some great writeups with more than you probably want to know.

Quarrelsome 4 hours ago | parent [-]

that is a disgusting way to treat a business partner, but given the people involved, I shouldn't be surprised.

skissane 12 hours ago | parent | prev [-]

You have to distinguish between political rhetoric (“woke”) and the substance of the dispute.

The substance: traditionally, defense contracts don’t have clauses in them limiting what the military can do with the acquired technology. If Boeing or Lockheed Martin or Northrop Grumman sell a missile system to the Pentagon, they don’t try to impose contractual limits on who the Pentagon can fire the missiles at. Now, for some types of contracts - e.g. contracts to provide personnel - the Pentagon is used to contractual terms limiting uses - but not for hardware or software used in weapons systems, military planning, etc.

Along comes Anthropic, who argue AI is a fundamentally different technology, to which the old rules shouldn’t apply - they want contractual terms prohibiting certain uses (autonomous weapon systems without a human in the loop; domestic mass surveillance). The Biden admin buys the argument and agrees to those novel contractual terms. The Trump admin takes over, objects to them, and demands they be renegotiated. I think it was primarily a matter of principle and power (“software vendors don’t get to tell us what we can and can’t do”) rather than some immediate plan to do things the contract prohibits.

OpenAI negotiated a contract which replicated those terms, but with the proviso that the terms only apply insofar as they reiterate existing legal limits. Anthropic objected to that as a meaningless fudge: “we promise not to do X if X is illegal” is very weak, especially when contracting with the government. Congress could change the law tomorrow, or the government’s lawyers could change their interpretation of it, or an appellate court decision could impose a new understanding of it.

throw1234567891 12 hours ago | parent | next [-]

> Congress could change the law tomorrow, or the government’s lawyers could change their interpretation of it, or an appellate court decision could impose a new understanding of it.

And then it becomes legal. It’s not an empty argument; it simply means “someone higher than you took the initiative”.

JumpCrisscross 12 hours ago | parent | prev | next [-]

> Anthropic, who argue AI is a fundamentally different technology

They’re arguing it’s a service. I think Aramark could refuse to contract to provide employees to the U.S. military for a campaign on Chicago.

skissane 11 hours ago | parent | next [-]

I think in practice contracts to provide civilian personnel to the Pentagon contain clauses limiting the nature and location of the work - the Pentagon can’t contract for a clerical assistant in DC and then demand they go to Iraq to provide physical security - it violates the nature of the agreed work and the agreed location.

But contracts for personnel generally don’t contain restrictions on use beyond that. If the clerical assistant for DC is asked to provide clerical help to a military planning team who are planning an assault on Chicago, they (and their employer) don’t have legal grounds to refuse. If you are contracted to provide clerical assistance to military planners, you can’t legally say “Baghdad is fine, but Chicago is a no”. Saying that is a breach of contract, unless the courts rule that planning the assault was itself illegal, and I doubt the current SCOTUS majority would.

jpadkins 11 hours ago | parent | prev [-]

Legally and in practice, they cannot. Even considering the 4th Amendment, in a time of war the military can commandeer a service as long as they are compensated.

acdha 10 hours ago | parent [-]

Where’s the legal declaration of war, precisely?

jpadkins 12 hours ago | parent | prev | next [-]

Congress passing a changed law, and it holding up in court, is how it's supposed to work. The people's reps (with specifics interpreted by the courts) should be the ones that set the standard, as a country, on what types of weapons systems we want to deploy vs. what is immoral. Precedent: nerve agent weapons, landmines, etc.

Honestly, Anthropic's stance feels like an oligarch stance: "We have better morals than the American people; we will decide what weapons systems the military will use or not use."

It's perfectly understandable if they don't want to sell weapons to the government. That is a noble thing. But Anthropic wanted that DoW money and wanted to determine what is moral vs. not.

bigyabai 12 hours ago | parent | prev [-]

> rather than some immediate plan to do things the contract prohibits.

It's not like any legally questionable kidnappings or bombing campaigns were being planned at the time, right?

skissane 11 hours ago | parent | next [-]

Those acts are allowed by Anthropic’s terms: they aren’t domestic mass surveillance, and (to the best of my knowledge) any AI targeting decisions were approved by a human in the loop.

Anthropic’s terms weren’t “don’t do anything illegal”; they were “here are two highly specific things which you aren’t allowed to do, whether they are legal or not”.

jpadkins 11 hours ago | parent | prev [-]

Do you really think the bombings and kidnappings are new as of 2024? You think what we have been doing in the Middle East and at Guantanamo Bay since 2001 is moral?

bigyabai 11 hours ago | parent [-]

The only reason that I mention liability concerns is precisely because of Abu Ghraib, Snowden, et al.