basch 5 hours ago

Without reading every word of every embedded tweet, a part missing from the conversation is HOW they are strongarming.

It isn't in private. It's a public threat in the court of public opinion to apply societal pressure on the company. They are attempting to reshape Anthropic's decision into a tribal one, and hurt the brand's reputation within the tribe unless it capitulates.

throw0101a 5 hours ago | parent | next [-]

> Without reading every word of every embedded tweet, a part missing from the conversation is HOW they are strongarming.

There are two possibilities:

> The government would likely argue that dropping the contractual restrictions doesn't change the product. Claude is the same model with the same weights and the same capabilities—the government just wants different contractual terms. […] Anthropic would likely argue the opposite: that its usage restrictions are part of what Claude is as a commercial service, and that Claude-without-guardrails is a product it doesn't offer to anyone. On this view, the government is asking for a new product, and the statute doesn't clearly authorize that.

and

> The more extreme possibility would be the government compelling Anthropic to retrain Claude—to strip the safety guardrails baked into the model's training, not merely modify the access terms. Here the characterization question seems easier: a retrained model looks much more like a new product than dropping contractual restrictions does. Admittedly, the government has a textual argument in its favor: the DPA's definitions of "services" include “development … of a critical technology item,” and the government could frame retraining Claude as exactly that. Whether courts would accept that framing, especially in light of the major questions doctrine, is another matter.

* https://www.lawfaremedia.org/article/what-the-defense-produc...

* https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950

A more extreme situation: could the DPA be used to nationalize the model so the government has ownership, and then allow access to more amenable AI players?

simoncion 4 hours ago | parent [-]

There's a third possibility. Anthropic's management desires cover to remove limiters on some of its products for some of its customers. The Pentagon is more than happy to play the bad guy if it means that they get something that's even more useful to them than what they would have gotten otherwise.

"We made these compromises because national defense is really super important." has historically proven to be a really effective explanation for tech companies that want to abandon some of their previously-stated "nice and friendly" values in exchange for money.

natpalmer1776 3 hours ago | parent [-]

When I imagine a world where this scenario is true, I am less confused than when I imagine worlds where the alternatives are. That has been a fantastic and historically reliable (for me) heuristic.

That being said, I imagine it also factors into internal dialogue that allows those higher up to explain to the boots-on-the-ground researchers that "no you're not working for the military industrial complex, they're just stealing your work that was intended to feed the orphans!"

foogazi 4 hours ago | parent | prev | next [-]

> It isn't in private.

We don’t know this

EA-3167 4 hours ago | parent | prev [-]

The top line of the article gives a big old hint: Anthropic signed a contract with the “Killing people” part of the government and now they’re putting on a show. No contract, no leverage.

The only threat the Pentagon has is to terminate the contract.

basch 4 hours ago | parent [-]

Can they not invoke the Defense Production Act without the public spat? Gag order?

EA-3167 3 hours ago | parent [-]

Realistically that's an empty threat, especially with the mid-terms coming up and Trump's attention span. The real threat, the actionable one, is the loss of a $200 mil contract. I suspect that the result here will be some highly visible face-saving compromise for Anthropic that means very little.