layer8 8 hours ago

The difference is that Anthropic wanted to reserve the right to judge when the red lines are crossed, while OpenAI will defer to the DoD and its policies for that. In both cases, the two parties can claim to agree on the principles, but when push comes to shove, who decides on whether the principles are violated differs.

remarkEon 7 hours ago | parent | next [-]

Seems Anthropic did not understand the questions they were asked. From the WaPo:

>A defense official said the Pentagon’s technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?

>It’s the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us and we’d work it out.

>An Anthropic spokesperson denied Amodei gave that response, calling the account “patently false,” and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude’s use in the capture of Venezuelan leader Nicolás Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. The meeting was previously reported by Semafor.

I have a hunch that Anthropic interpreted this question as being about authority, when the Pentagon was very likely asking about capability, and that they then followed up to clarify that for missile defense they would, I guess, allow an exception. I get the (at times overwhelming) skepticism people have about these tools and this administration, but this is not a reasonable position to hold, even if Anthropic held it accidentally because they initially misunderstood what they were being asked.

https://web.archive.org/web/20260227182412/https://www.washi...

retsibsi 8 minutes ago | parent | next [-]

Is there any reason at all to believe the account of the unnamed "defence official"? Whatever your position on this administration, you know that it lies like the rest of us breathe. With a denial from the other side and a lack of any actual evidence, why should I give it non-negligible credence?

lukan 7 hours ago | parent | prev | next [-]

"It’s the kind of situation where technological might and speed could be critical to detection and counterstrike"

Missile detection and the decision to make a (nuclear) counterstrike are two different things to me, but apparently the Department of War wants both, so it seems it's not "just" about missile detection.

wraptile 24 minutes ago | parent | prev | next [-]

> If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?

I'm sorry but lol

quaunaut 7 hours ago | parent | prev [-]

Are you serious? This is the kind of thing you'd ask a clarifying question on and get information back immediately. Further, the huge overreaction from Hegseth shows this is a fundamental disagreement.

SpicyLemonZest 6 hours ago | parent [-]

The flip side of "Hegseth is an unqualified drunk", a position which I've always held and still maintain, is that he very well might crash out over nothing instead of asking clarifying questions or suggesting obvious compromises. This is the same guy who recalled the entire general staff to yell at them about the warrior mindset. Not an excuse for any of this, but I do think the precise nature of the badness matters.

pseudalopex 8 hours ago | parent | prev | next [-]

> The difference is that Anthropic wanted to reserve the right to judge when the red lines are crossed, while OpenAI will defer to the DoD and its policies for that.

You learned this where?

layer8 7 hours ago | parent [-]

I’m reading between the lines of the involved parties’ various statements, but there’s also this: https://x.com/UnderSecretaryF/status/2027594072811098230

pseudalopex 7 hours ago | parent | next [-]

> I’m reading between the lines of the involved parties’ various statements

You should have said this.

> https://x.com/UnderSecretaryF/status/2027594072811098230

Thank you.

layer8 7 hours ago | parent [-]

It was pretty clear from Anthropic’s and Hegseth’s statements that they didn’t disagree on the two exclusions, but on who would be the arbiter on those. And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.

nandomrumber 7 hours ago | parent | next [-]

From the referenced tweet;

who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.

Amodei is the type of person who thinks he can tell the US government what they can and can’t do.

And the US government should have precisely none of that, regardless of whether they’re red or blue.

eecc 5 hours ago | parent | next [-]

And that’s where the authoritarian in you is shining through.

You see, Obama droned more combatants than anyone before or after him, but he always followed a legal paper trail and went by the book (with some possible exceptions; search for Anwar al-Awlaki).

One can argue whether the rules and laws (secret courts, proceedings, asymmetries in court processes that severely compress civil liberties… to the point they might violate other constitutional rights) are legitimate, but he operated within the limits of the law.

You folks just blurt "me ne frego" ("I don't care") like some latter-day Mussolini and think you're being patriotic.

SMH

nullocator 6 hours ago | parent | prev [-]

> Amodei is the type of person who thinks he can tell the US government what they can and can’t do.

> And the US government should have precisely none of that, regardless of whether they’re red or blue.

This is a pretty hot take. "You can't break the law and kill people or do mass surveillance with our technology"? Fuck that, apparently; the government should break whatever laws and kill whomever it pleases.

I hope you A: aren't a U.S. citizen, and B: don't vote.

If I'm selling widgets to the government and come to find out they are using those widgets unconstitutionally and to violate my neighbors' rights, you can be damn sure I'm going to stop selling the gov my widgets. Amodei said that Anthropic was willing to step away if they and the government couldn't come to terms, and instead of acting like adults and letting them, the government decided to double down on being the dumbest people in the room, act like toddlers, and throw a massive fit about the whole thing.

pseudalopex 7 hours ago | parent | prev [-]

> It was pretty clear from Anthropic’s and Hegseth’s statements that they didn’t disagree on the two exclusions, but on who would be the arbiter on those.

No. Altman said human responsibility. Anthropic said human in the loop.

> And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.

All but confirmed was not confirmed.

layer8 6 hours ago | parent [-]

I don’t understand your first comment. At that point, Altman’s tweet didn’t exist yet, and is immaterial to the reading of Anthropic’s and Hegseth’s statements.

To your second comment, it was clear enough to me to be the most plausible reading of the situation by far.

We state what we think the situation is all the time, without explicitly writing “I think the situation is…”.

intermerda 6 hours ago | parent | prev [-]

[dead]

outside1234 8 hours ago | parent | prev [-]

This. Sam is going to pretend they aren’t going to use it for that because his company is collapsing in losses. He will never audit.

Probably also got assurances about a bailout when OpenAI collapses.