nomilk 8 hours ago

> Anthropic's two hard lines:

> 1. No mass domestic surveillance of Americans

> 2. No fully autonomous weapons (kill decisions without a human in the loop)

Surveillance takes place with or without Anthropic, so depriving DoW of Anthropic models doesn't accomplish much (although it does annoy Hegseth).

The models currently used in kill decisions are probably primitive image recognition (neural nets). Consider a drone circling an area, distinguishing civilians from soldiers by looking for the presence of rifles or RPGs.

Newer AI models can improve identification, reducing false positives while targeting more actual adversaries. Even though it sounds bad, it could have good outcomes.

aldonius 8 hours ago | parent [-]

I thought Anthropic's take on #2 was they don't think the model's good enough yet?

nomilk 8 hours ago | parent [-]

But compared to what? If Anthropic's models aren't perfect but are still better than the existing (old-school) models, it's understandable that DoW still wants to use them, since they're potentially the best available despite their imperfections. I think Hegseth is saying to Anthropic: "that's our call, not yours".

nemomarx 8 hours ago | parent [-]

But surely if Anthropic thinks there's a risk that their models might make bad decisions, and the resulting civilian (or other) deaths get blamed on them, it's their right to refuse to sell the models for that purpose? That's why they put those restrictions in the contract to begin with. How can they be forced to provide something?

nomilk 8 hours ago | parent [-]

I agree they can't be forced to provide something. I just see DoW's reasoning, and I can't fault it.

Anthropic are taking a moral position, which is admirable, but in this case it could actually make people's lives worse, if we assume the older models produce more false positives and fewer true positives. That's probably a fair assumption, given how much better 'modern' AI is compared to the neural-net image recognition of just a few years ago.