nilkn 5 hours ago

Anthropic specifically called out systems "that take humans out of the loop entirely and automate selecting and engaging targets".

I take that to mean they don't want the military using Claude to decide who to kill. As a hyperbolic yet frankly realistic example, they don't want Claude to make a mistake and direct the military to kill innocent children accidentally identified as narco-terrorists.

At least, that's the most charitable interpretation of everything going on. I suspect they are also worried that the sitting administration wants to use AI to help them execute a full autocratic takeover of the United States, so they're attempting to kill one of the world's most innovative companies to set an example and pressure other AI labs into letting their technology be used for such purposes.

5 hours ago | parent | next [-]
[deleted]
blhack 5 hours ago | parent | prev [-]

Right. Did the DoW ask for that? Or does Anthropic make a product that does that?

nilkn 5 hours ago | parent | next [-]

Obviously Anthropic does make a product that could do that -- just give Claude classified data and ask it who to target.

Obviously the military wants to use it for that purpose since they couldn't accept Anthropic's extremely limited terms.

One can easily and immediately infer the answers to both your questions are yes.

blhack 5 hours ago | parent [-]

The DoW has explicitly said they don’t want this, and what you are describing are not automated kill drones.

Anthropic’s safeguards already prevent what you are describing, which is, again, the thing the DoW has said they don’t want.

nilkn 5 hours ago | parent [-]

I don't know what you're referencing, but it doesn't matter. I judge people by their actions more than their words. The actions in this case are simple: Anthropic doesn't want their models to be used for fully autonomous weapons or mass surveillance of American citizens, but everything else is fair game; in response, the sitting administration is attempting to kill the company (since a strict reading of the security risk order would force most of their partners, suppliers, etc., to cut them off completely).

Giving precedence to words over actions is how you get taken advantage of, abused, deceived, etc.

blhack 5 hours ago | parent [-]

GOOD. I don’t want Anthropic, or anybody else to have their tools used for these things either.

But Dario is showing weakness here by talking around it. Whatever they were asked to do, they should just be upfront about it.

adastra22 5 hours ago | parent | next [-]

> Whatever they were asked to do, they should just be upfront about it.

Anthropic is not being asked to do anything, except renegotiate the contracts. The DoW's Claude models run on government AWS. Anthropic has minimal access to these systems and does not see the classified data that is being ingested as prompts. It is very unlikely that Dario actually knows what the DoW wants to do with these models. But even if he did, it would be classified information that he is not at liberty to disclose.

However, the product they provide likely has safety filters that cause some prompts to not be processed if they violate the two contractual conditions. That is what the DoW wants removed.

nilkn 5 hours ago | parent | prev | next [-]

He didn't talk around it. He wrote down specifically what the two issues were, which is precisely why now the entire world knows what's actually going on. If risking your company's existence to prevent a (potential) atrocity is weakness, I don't know what strength is.

blhack 5 hours ago | parent [-]

Strength is saying what they were asked to do. I want to know!

Did the DoW ask them to make kill drones? Because if so THAT IS A REALLY BIG DEAL.

The vagueness is irritating. He’s saying they won’t do something, the DoW is saying they don’t even want them to do that, which should resolve the issue, but hasn’t. There is obviously something else at play here.

nilkn 5 hours ago | parent [-]

You're confused because you're taking everything the people involved are saying literally and trusting everything plainly at face value. The existence of the contradiction you're pointing out should be evidence that you need to think a level deeper, i.e., that you need to look at actions more than words. There's an incredibly easy resolution of the contradiction that is troubling you, and it's already been pointed out clearly above.

4 hours ago | parent | prev | next [-]
[deleted]
tosapple 2 hours ago | parent | prev [-]

[dead]

sigmar 5 hours ago | parent | prev | next [-]

https://x.com/SeanParnellASW/status/2027072228777734474?s=20

Here's the Chief Pentagon Spokesman pointing to the same verbiage and reiterating that they won't agree to those terms of use.

blhack 5 hours ago | parent [-]

The first sentence of that post is:

> The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.

sigmar 5 hours ago | parent | next [-]

Saying something on twitter is not a guarantee.

Tomorrow he could change his mind to "we want to use AI to develop autonomous weapons that operate without human involvement." The issue is that he wants Anthropic to change the use terms because "We will not let ANY company dictate the terms regarding how we make operational decisions."

blhack 5 hours ago | parent [-]

>he said this

>>no he didn’t he actually said the opposite of that and the link you just posted says the opposite of what you are claiming

>but he might change his mind!

Okay?

sigmar 5 hours ago | parent [-]

You asked repeatedly:

>Did the DoW ask for these things?

>Did the DoW ask for that?

I showed you where the spokesperson asked for the terms to change so they could make autonomous weapons. Now you're shifting the goalposts.

ImPostingOnHN 3 hours ago | parent | prev [-]

And yet, if that statement were true, and not a lie, we would not be here right now, discussing their insistence upon being able to use software for precisely those things.

Is a pundit/politician lying to you a new experience?

ImPostingOnHN 5 hours ago | parent | prev | next [-]

The DoD is explicitly asking for those things, by forcing contract renegotiation towards a contract that is identical in every way, except removing the prohibition on those things.

If the DoD did not want those things, it would not be forcing a contract renegotiation to include them, at great cost to the government.

blhack 5 hours ago | parent [-]

No, the DoW may be implicitly asking for those things.

That’s the point I’m trying to make here: Anthropic should just say the unsaid thing.

DoW asked for the following thing: $foo. We won’t give that to them.

spankalee 4 hours ago | parent | next [-]

That thing is removing the restrictions from the contract.

ImPostingOnHN 3 hours ago | parent | prev [-]

> Anthropic should just say the unsaid thing here.

> DoW asked for the following thing: $foo. We won’t give that to them.

Anthropic has explicitly said that multiple times, including in the letter we are presently discussing.

$foo is the ability to use Claude for domestic mass surveillance and analysis, and/or fully-autonomous killbots.

mcphage 5 hours ago | parent | prev [-]

I certainly wouldn’t give them the benefit of the doubt.

blhack 5 hours ago | parent [-]

Then Anthropic should say: this is what the DoW has asked for, and we aren’t able to do it, or we don’t want to.

mcphage 4 hours ago | parent [-]

They may not be legally allowed to.