▲ puppycodes 6 hours ago
Help me understand the line Anthropic is drawing in the sand? Don't get me wrong, I'm glad they are unwilling to do certain things... but it also seems a little ironic that Anthropic is literally partnered with Palantir, which already mass-surveils the US. Claude was used in the operation in Venezuela. Their line not to cross seems absurdly thin? Or there is something mega scary and much worse that they were asked to do which we don't know about, I guess.
▲ gck1 5 hours ago | parent | next [-]
I don't understand the line either. So it's no to domestic surveillance, but all other countries are fair game? How is this an ethical stand? What sort of mental gymnastics allow Anthropic to classify this as an ethical stance? To me all of this reads like "we don't trust our models enough yet not to cause domestic havoc; everything else is fine, and we don't trust our models enough yet not to vibe-kill people." Key word being "yet".
▲ xvector 6 hours ago | parent | prev [-]
The whole reason this is happening is that Anthropic looked into how Claude was used in the Maduro op and found it violated the negotiated terms of service. Their hard lines are:

- no usage of AI to commit murder WITHOUT a human in the loop
- no usage of AI for domestic mass surveillance