scarmig 6 hours ago
> you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons"

To be fair, Anthropic didn't say that either. They merely said that autonomous weapons without a HITL aren't currently within Claude's capabilities; it isn't a moral stance so much as a pragmatic one. (The domestic surveillance point, on the other hand, is an ethical stance.)
ChadNauseam 3 hours ago
They specifically said they never agreed to let the DoD use Anthropic for fully autonomous weapons: "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance [...] Fully autonomous weapons." Their rationale was pragmatic, but they specifically said they didn't agree to let the DoD create fully autonomous weapons using their technology. I'll bet 10:1 you won't ever hear Sam Altman say that. He doesn't even imply it today.
gizzlon 2 hours ago
> it isn't a moral stance so much as a pragmatic one

Agreed; the moral stance would be saying no to the DoD and the US government.