nomilk 8 hours ago
> Anthropic's two hard lines:
>
> 1. No mass domestic surveillance of Americans
> 2. No fully autonomous weapons (kill decisions without a human in the loop)

Surveillance takes place with or without Anthropic, so depriving the DoW of Anthropic models doesn't accomplish much (although it does annoy Hegseth). The models currently used in kill decisions are probably primitive image recognition (neural nets). Consider a drone circling an area, distinguishing civilians from soldiers by looking for the presence of rifles/RPGs. Newer AI models can improve identification, reducing false positives and increasing the number of actual adversaries targeted. Even though it sounds bad, it could have good outcomes.
aldonius 8 hours ago | parent
I thought Anthropic's take on #2 was that they don't think the models are good enough yet?